[ { "msg_contents": "Hi!\n\nI have to migrate application from Delphi/MSSQL to Kylix/<free_rdbms>.\nOne of things used in this app is INSERTED/DELETED tables in triggers.\nIt's something like NEW and OLD records, but may contain multiple rows.\n\nMy questions are:\nHow difficult would it be to implement such a feature in PostgreSQL?\nWhat impact would it have on pgsql performance/memory reqiurements?\nWould it be useful for somebody else?\n\nThanks in advance,\nTom\n\n-- \n.signature: Too many levels of symbolic links\n", "msg_date": "Wed, 22 Aug 2001 10:44:12 +0200", "msg_from": "Tomasz Zielonka <tomek@mult.i.pl>", "msg_from_op": true, "msg_subject": "Extending triggers capabilities (inserted/deleted pseudo tables)" } ]
[ { "msg_contents": "\n> DBMS should be independent from the OS settings as far as\n> possible especially in the handling of data. Currently we\n> could hardly judge if we are running on a locale or not from\n> the dbms POV and it doesn't seem a dbms kind of thing in the\n> first place. I'm a dbms guy not an OS guy and really dislike\n> the requirement for users to export LC_ALL=C. \n\nYup, I can second that.\nAlso note, that currently a locale aware index might get corrupted if \nyou do an OS upgrade (that changes the collation: e.g. add the ?\nsymbol). \nI sortof think, that pg locale support is not yet up to prime time. \nIf we had something that conformed to the Spec (per column lang and \ncollation), then yes I would make it mainstream, but as is ?\n\nAndreas\n", "msg_date": "Wed, 22 Aug 2001 11:13:20 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "RE: Locale by default?" } ]
[ { "msg_contents": "Tom,\n\nTeodor finally made patches to current CVS, please review and apply them asap\nto be in sync with development (last time it was kind of problem)\n\ngistpatch.gz\n\n1. gistpatch.gz - core GiST changes, all gist contrib modules were fixed reflecting\n all changes. Added linear-time split algorithm for R-tree GiST opclass\n All regression tests passed.\n\n2. btree_gist.tar.gz - Btree implementation for GiST with\n support of int4 and timestamps. Should go into contrib\n\nWe're online right now and waiting for your reply\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Wed, 22 Aug 2001 16:58:21 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "GiST patches for 7.2 (please apply) " }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Tom,\n> \n> Teodor finally made patches to current CVS, please review and apply them asap\n> to be in sync with development (last time it was kind of problem)\n> \n> gistpatch.gz\n> \n> 1. gistpatch.gz - core GiST changes, all gist contrib modules were fixed reflecting\n> all changes. Added linear-time split algorithm for R-tree GiST opclass\n> All regression tests passed.\n> \n> 2. btree_gist.tar.gz - Btree implementation for GiST with\n> support of int4 and timestamps. 
Should go into contrib\n> \n> We're online right now and waiting for your reply\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 22 Aug 2001 10:23:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST patches for 7.2 (please apply)" }, { "msg_contents": "Looking at the revised version of initGISTstate, I am thinking that\nI missed a step in updating the main indexing code for the new\npg_opclass definition. It seems to me that if opckeytype is not 0,\nthat datatype ought to be used to declare the index column type from\nthe beginning. 
Then initGISTstate wouldn't need to go through all\nthese pushups to develop a correct tuple descriptor for the index.\n\nAny objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Aug 2001 11:39:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST patches for 7.2 (please apply) " }, { "msg_contents": "On Wed, 22 Aug 2001, Tom Lane wrote:\n\n> Looking at the revised version of initGISTstate, I am thinking that\n> I missed a step in updating the main indexing code for the new\n> pg_opclass definition. It seems to me that if opckeytype is not 0,\n> that datatype ought to be used to declare the index column type from\n> the beginning. Then initGISTstate wouldn't need to go through all\n> these pushups to develop a correct tuple descriptor for the index.\n>\n> Any objections?\n>\n\nTom,\n\nif you look into previous patch we sent (patch_72_systbl.gz)\nyou could find patch for ./src/backend/catalog/index.c which\nis exact implementation of what 'seems to you' :-)\nTeodor thought you don't like this idea and he moved functionality to\ninitGISTstate. 
But he'd prefer his original implementation.\nSo, if you have no objection, we could prepare NEW patch to\ncurrent CVS with old implementation.\n\n\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 22 Aug 2001 18:53:58 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: GiST patches for 7.2 (please apply) " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> if you look into previous patch we sent (patch_72_systbl.gz)\n> you could find patch for ./src/backend/catalog/index.c which\n> is exact implementation of what 'seems to you' :-)\n\nSo you did. I'm not sure why I thought that was a bad idea when I\nwas reviewing the patch. Today it's obviously the right thing ;-)\n\n> Teodor thought you don't like this idea and he moved functionality to\n> initGISTstate. But he'd prefer his original implementation.\n\nRoger, I'll stick it in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Aug 2001 12:07:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST patches for 7.2 (please apply) " }, { "msg_contents": "OK. You may apply patches to contrib modules (from the last patch)\nand new btree module. Teodor will sent patch to core tomorrow.\n\n\tOleg\nOn Wed, 22 Aug 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > if you look into previous patch we sent (patch_72_systbl.gz)\n> > you could find patch for ./src/backend/catalog/index.c which\n> > is exact implementation of what 'seems to you' :-)\n>\n> So you did. I'm not sure why I thought that was a bad idea when I\n> was reviewing the patch. 
Today it's obviously the right thing ;-)\n>\n> > Teodor thought you don't like this idea and he moved functionality to\n> > initGISTstate. But he'd prefer his original implementation.\n>\n> Roger, I'll stick it in.\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 22 Aug 2001 19:17:48 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: GiST patches for 7.2 (please apply) " }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Teodor finally made patches to current CVS, please review and apply them asap\n> to be in sync with development (last time it was kind of problem)\n\nChecked and committed. Note I did not commit your change to the cube\nregression test:\n\n*** ./contrib/cube/expected/cube.out.orig\tWed Aug 22 16:04:42 2001\n--- ./contrib/cube/expected/cube.out\tWed Aug 22 16:26:25 2001\n***************\n*** 130,142 ****\n SELECT '1e700'::cube AS cube;\n cube \n -------\n! (inf)\n (1 row)\n \n SELECT '-1e700'::cube AS cube;\n cube \n --------\n! (-inf)\n (1 row)\n \n SELECT '1e-700'::cube AS cube;\n--- 130,142 ----\n SELECT '1e700'::cube AS cube;\n cube \n -------\n! (Inf)\n (1 row)\n \n SELECT '-1e700'::cube AS cube;\n cube \n --------\n! (-Inf)\n (1 row)\n \n SELECT '1e-700'::cube AS cube;\n\nsince on my machine \"inf\" appears to be the correct result. 
Is this a\nplatform dependency, or just a lack of synchronization somewhere else?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Aug 2001 14:36:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST patches for 7.2 (please apply) " }, { "msg_contents": "Oh, one other comment --- the rtree_gist code had a bunch of functions\ndeclared like\n\nGISTENTRY * gbox_compress(PG_FUNCTION_ARGS);\nBOX *gbox_union(PG_FUNCTION_ARGS);\nGIST_SPLITVEC * gbox_picksplit(PG_FUNCTION_ARGS);\nbool gbox_consistent(PG_FUNCTION_ARGS);\nfloat * gbox_penalty(PG_FUNCTION_ARGS);\nbool * gbox_same(PG_FUNCTION_ARGS);\n\nThis is not portable. The declaration of any V1-style fmgr-callable\nfunction must be exactly\n\nDatum foo(PG_FUNCTION_ARGS);\n\nno more and no less. You can't shortcut by assuming that pointers are\nthe same size as Datum, or that bool is the same size as Datum, or that\nthe generated machine code will be the same anyway. (There are machines\nthat have different register conventions for returning pointers and\nintegers, even though they're the same size.) If you're going to put up\nwith the notational cruft of the V1 calling convention for arguments,\ndon't blow the portability advantages by not doing it for results too.\n\nI fixed this in rtree_gist.c, but did not look to see if similar\nproblems exist elsewhere.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Aug 2001 14:44:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST patches for 7.2 (please apply) " }, { "msg_contents": "\n\nTom Lane wrote:\n> \n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > Teodor finally made patches to current CVS, please review and apply them asap\n> > to be in sync with development (last time it was kind of problem)\n> \n> Checked and committed. 
Note I did not commit your change to the cube\n> regression test:\n> \n> *** ./contrib/cube/expected/cube.out.orig Wed Aug 22 16:04:42 2001\n> --- ./contrib/cube/expected/cube.out Wed Aug 22 16:26:25 2001\n> ***************\n> *** 130,142 ****\n> SELECT '1e700'::cube AS cube;\n> cube\n> -------\n> ! (inf)\n> (1 row)\n> \n> SELECT '-1e700'::cube AS cube;\n> cube\n> --------\n> ! (-inf)\n> (1 row)\n> \n> SELECT '1e-700'::cube AS cube;\n> --- 130,142 ----\n> SELECT '1e700'::cube AS cube;\n> cube\n> -------\n> ! (Inf)\n> (1 row)\n> \n> SELECT '-1e700'::cube AS cube;\n> cube\n> --------\n> ! (-Inf)\n> (1 row)\n> \n> SELECT '1e-700'::cube AS cube;\n> \n> since on my machine \"inf\" appears to be the correct result. Is this a\n> platform dependency, or just a lack of synchronization somewhere else?\n> \n> regards, tom lane\n\nOn my box FreeBSD4.3 it looks as 'Inf'. Very similar that this is\nplatform dependency.\n", "msg_date": "Wed, 22 Aug 2001 23:34:37 +0400", "msg_from": "Teodor <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: GiST patches for 7.2 (please apply)" }, { "msg_contents": "Teodor <teodor@stack.net> writes:\n> Tom Lane wrote:\n>> ... on my machine \"inf\" appears to be the correct result. Is this a\n>> platform dependency, or just a lack of synchronization somewhere else?\n\n> On my box FreeBSD4.3 it looks as 'Inf'. Very similar that this is\n> platform dependency.\n\nI'm inclined to just remove that part of the \"cube\" regression test,\nthen. It's not telling us anything very important about the behavior\nof the cube datatype, so I think it's not worth dealing with a platform\ndependency. Objections anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Aug 2001 20:41:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GiST patches for 7.2 (please apply) " }, { "msg_contents": "you're right. 
nothing special.\n\n\tOleg\nOn Wed, 22 Aug 2001, Tom Lane wrote:\n\n> Teodor <teodor@stack.net> writes:\n> > Tom Lane wrote:\n> >> ... on my machine \"inf\" appears to be the correct result. Is this a\n> >> platform dependency, or just a lack of synchronization somewhere else?\n>\n> > On my box FreeBSD4.3 it looks as 'Inf'. Very similar that this is\n> > platform dependency.\n>\n> I'm inclined to just remove that part of the \"cube\" regression test,\n> then. It's not telling us anything very important about the behavior\n> of the cube datatype, so I think it's not worth dealing with a platform\n> dependency. Objections anyone?\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 23 Aug 2001 08:01:02 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: GiST patches for 7.2 (please apply) " } ]
[ { "msg_contents": "It has come to our attention that many applications which use libpq\nare vulnerable to code insertion attacks in strings and identifiers\npassed to these applications. We have collected some evidence which\nsuggests that this is related to the fact that libpq does not provide\na function to escape strings and identifiers properly. (Both the\nOracle and MySQL client libraries include such a function, and the\nvast majority of applications we examined are not vulnerable to code\ninsertion attacks because they use this function.)\n\nWe therefore suggest that a string escaping function is included in a\nfuture version of PostgreSQL and libpq. A sample implementation is\nprovided below, along with documentation.\n\n-- \nFlorian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\nUniversity of Stuttgart http://cert.uni-stuttgart.de/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898\n\nIndex: libpq.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/libpq.sgml,v\nretrieving revision 1.66\ndiff -u -r1.66 libpq.sgml\n--- libpq.sgml\t2001/08/10 22:50:09\t1.66\n+++ libpq.sgml\t2001/08/22 15:58:02\n@@ -827,6 +827,42 @@\n </itemizedlist>\n </sect2>\n \n+<sect2 id=\"libpq-exec-escape-string\">\n+ <title>Escaping strings for inclusion in SQL queries</title>\n+<para>\n+<function>PGescapeString</function>\n+ Escapes a string for use within an SQL query.\n+<synopsis>\n+size_t PGescapeString (char *to, const char *from, size_t length);\n+</synopsis>\n+If you want to include strings or identifiers which have been received\n+from a source which is not trustworthy (for example, because they were\n+transmitted across a network), you cannot directly include them in SQL\n+queries for security reasons. Instead, you have to quote special\n+characters which are otherwise interpreted by the SQL parser.\n+</para>\n+<para>\n+<function>PGescapeString</> performs this operation. 
The\n+<parameter>from</> points to the first character of the string which\n+is to be escaped, and the <parameter>length</> parameter counts the\n+number of characters in this string (a terminating NUL character is\n+neither necessary nor counted). <parameter>to</> shall point to a\n+buffer which is able to hold at least one more character than twice\n+the value of <parameter>length</>, otherwise the behavior is\n+undefined. A call to <function>PGescapeString</> writes an escaped\n+version of the <parameter>from</> string to the <parameter>to</>\n+buffer, replacing special characters so that they cannot cause any\n+harm, and adding a terminating NUL character. The single or double\n+quote characters which are required for strings and identifiers,\n+respectively, are not added to the result string.\n+</para>\n+<para>\n+<function>PGescapeString</> returns the number of characters written\n+to <parameter>to</>, not including the terminating NUL character.\n+Behavior is undefined when the <parameter>to</> and <parameter>from</>\n+strings overlap.\n+</para>\n+\n <sect2 id=\"libpq-exec-select-info\">\n <title>Retrieving SELECT Result Information</title>\n \n\nIndex: fe-exec.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.107\ndiff -u -r1.107 fe-exec.c\n--- fe-exec.c\t2001/08/17 15:11:15\t1.107\n+++ fe-exec.c\t2001/08/22 15:58:40\n@@ -56,6 +56,69 @@\n static int\tgetNotify(PGconn *conn);\n static int\tgetNotice(PGconn *conn);\n \n+/* ---------------\n+ * Escaping arbitrary strings to get valid SQL strings/identifiers.\n+ *\n+ * Replaces \"\\\\\" with \"\\\\\\\\\", \"\\0\" with \"\\\\0\", \"'\" with \"\\\\'\", and \n+ * \"\\\"\" with \"\\\\\\\"\". length is the length of the buffer pointed to by\n+ * from. The buffer at to must be at least 2*length + 1 characters\n+ * long. 
A terminating NUL character is written.\n+ * ---------------\n+ */\n+\n+size_t\n+PGescapeString (char *to, const char *from, size_t length)\n+{\n+\tconst char *source = from;\n+\tchar *target = to;\n+\tunsigned int remaining = length;\n+\n+\twhile (remaining > 0) {\n+\t\tswitch (*source) {\n+\t\tcase '\\0':\n+\t\t\t*target = '\\\\';\n+\t\t\ttarget++;\n+\t\t\t*target = '0';\n+\t\t\t/* target and remaining are updated below. */\n+\t\t\tbreak;\n+\t\t\t\n+\t\tcase '\\\\':\n+\t\t\t*target = '\\\\';\n+\t\t\ttarget++;\n+\t\t\t*target = '\\\\';\n+\t\t\t/* target and remaining are updated below. */\n+\t\t\tbreak;\n+\n+\t\tcase '\\'':\n+\t\t\t*target = '\\\\';\n+\t\t\ttarget++;\n+\t\t\t*target = '\\'';\n+\t\t\t/* target and remaining are updated below. */\n+\t\t\tbreak;\n+\n+\t\tcase '\"':\n+\t\t\t*target = '\\\\';\n+\t\t\ttarget++;\n+\t\t\t*target = '\"';\n+\t\t\t/* target and remaining are updated below. */\n+\t\t\tbreak;\n+\t\t\n+\t\tdefault:\n+\t\t\t*target = *source;\n+\t\t\t/* target and remaining are updated below. */\n+\t\t}\n+\t\tsource++;\n+\t\ttarget++;\n+\t\tremaining--;\n+\t}\n+\n+\t/* Write the terminating NUL character. */\n+\t*target = '\\0';\n+\t\n+\treturn target - to;\n+}\n+\n+\n \n /* ----------------\n * Space management for PGresult.\n\nIndex: libpq-fe.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.71\ndiff -u -r1.71 libpq-fe.h\n--- libpq-fe.h\t2001/03/22 04:01:27\t1.71\n+++ libpq-fe.h\t2001/08/22 15:59:02\n@@ -240,6 +240,9 @@\n \n /* === in fe-exec.c === */\n \n+\t/* Quoting strings before inclusion in queries. 
*/\n+\textern size_t PGescapeString (char *to, const char *from, size_t length);\n+\n \t/* Simple synchronous query */\n \textern PGresult *PQexec(PGconn *conn, const char *query);\n \textern PGnotify *PQnotifies(PGconn *conn);", "msg_date": "22 Aug 2001 18:11:06 +0200", "msg_from": "Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE>", "msg_from_op": true, "msg_subject": "Escaping strings for inclusion into SQL queries" }, { "msg_contents": "On Wed, Aug 22, 2001 at 05:16:44PM +0000, Florian Weimer wrote:\n> We therefore suggest that a string escaping function is included in a\n> future version of PostgreSQL and libpq. A sample implementation is\n> provided below, along with documentation.\n\nI use Perl, which (through DBD::Pg) has a \"quote\" function available,\nbut I think this is a very good idea to include in the library.\n\nI only have one issue - the SQL standard seems to support the use\nof '' to escape a single quote, but not \\'. Though PostgreSQL has\nan extended notion of character string literals, I think that the\nusual policy of using the standard interface when possible should\napply.\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\nchris@netmonger.net info@netmonger.net http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n", "msg_date": "Thu, 23 Aug 2001 14:09:24 -0400", "msg_from": "Christopher Masto <chris@netmonger.net>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Christopher Masto <chris@netmonger.net> writes:\n\n> I only have one issue - the SQL standard seems to support the use\n> of '' to escape a single quote, but not \\'. Though PostgreSQL has\n> an extended notion of character string literals, I think that the\n> usual policy of using the standard interface when possible should\n> apply.\n\nThe first version escaped ' with ''. 
I changed it when I noticed that\nif \\' is used instead, the same function can be used for strings\n('...') and identifiers (\"...\").\n\nIn addition, you have to replace \\ with \\\\, so you are forced\nto leave the grounds of the standard anyway.\n\n-- \nFlorian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\nUniversity of Stuttgart http://cert.uni-stuttgart.de/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898\n", "msg_date": "23 Aug 2001 22:17:05 +0200", "msg_from": "Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE>", "msg_from_op": true, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Florian Weimer <Florian.Weimer@rus.uni-stuttgart.de> writes:\n\n> We therefore suggest that a string escaping function is included in a\n> future version of PostgreSQL and libpq. A sample implementation is\n> provided below, along with documentation.\n\nWe have now released a description of the problems which occur when a\nstring escaping function is not used:\n\nhttp://cert.uni-stuttgart.de/advisories/apache_auth.php\n\nWhat further steps are required to make the suggested patch part of\nthe official libpq library?\n\nThanks,\n-- \nFlorian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\nUniversity of Stuttgart http://cert.uni-stuttgart.de/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898\n", "msg_date": "30 Aug 2001 15:21:16 +0200", "msg_from": "Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE>", "msg_from_op": true, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> It has come to our attention that many applications which use libpq\n> are vulnerable to code insertion attacks in strings and identifiers\n> passed to these applications. 
We have collected some evidence which\n> suggests that this is related to the fact that libpq does not provide\n> a function to escape strings and identifiers properly. (Both the\n> Oracle and MySQL client libraries include such a function, and the\n> vast majority of applications we examined are not vulnerable to code\n> insertion attacks because they use this function.)\n> \n> We therefore suggest that a string escaping function is included in a\n> future version of PostgreSQL and libpq. A sample implementation is\n> provided below, along with documentation.\n> \n> -- \n> Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> University of Stuttgart http://cert.uni-stuttgart.de/\n> RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 30 Aug 2001 18:43:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "> Florian Weimer <Florian.Weimer@rus.uni-stuttgart.de> writes:\n> \n> > We therefore suggest that a string escaping function is included in a\n> > future version of PostgreSQL and libpq. 
A sample implementation is\n> > provided below, along with documentation.\n> \n> We have now released a description of the problems which occur when a\n> string escaping function is not used:\n> \n> http://cert.uni-stuttgart.de/advisories/apache_auth.php\n> \n> What further steps are required to make the suggested patch part of\n> the official libpq library?\n\nWill be applied soon. I was waiting for comments before adding it to\nthe patch queue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 30 Aug 2001 18:43:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Perhaps I'm not thinking correctly but isn't it the job of the application\nthat's using the libpq library to escape special characters? I guess I don't\nsee a down side though, if it's implemented correctly to check and see if\ncharacters are already escaped before escaping them (else major breakage of\nexisting application would occur).. I didn't see the patch but I assume that\nsomeone took a look to make sure before applying it.\n\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Florian Weimer\" <Florian.Weimer@rus.uni-stuttgart.de>\nCc: <pgsql-hackers@postgresql.org>\nSent: Thursday, August 30, 2001 6:43 PM\nSubject: Re: [HACKERS] Escaping strings for inclusion into SQL queries\n\n\n> > Florian Weimer <Florian.Weimer@rus.uni-stuttgart.de> writes:\n> >\n> > > We therefore suggest that a string escaping function is included in a\n> > > future version of PostgreSQL and libpq. 
A sample implementation is\n> > > provided below, along with documentation.\n> >\n> > We have now released a description of the problems which occur when a\n> > string escaping function is not used:\n> >\n> > http://cert.uni-stuttgart.de/advisories/apache_auth.php\n> >\n> > What further steps are required to make the suggested patch part of\n> > the official libpq library?\n>\n> Will be applied soon. I was waiting for comments before adding it to\n> the patch queue.\n\n\n", "msg_date": "Thu, 30 Aug 2001 19:07:36 -0400", "msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "It is. Application is responsible to call PGescapeString (included in the\npatch in question) to escape command that may possibly have user-specified\ndata... This function isn't called automatically.\n\nOn Thu, 30 Aug 2001, Mitch Vincent wrote:\n\n> Perhaps I'm not thinking correctly but isn't it the job of the application\n> that's using the libpq library to escape special characters? I guess I don't\n> see a down side though, if it's implemented correctly to check and see if\n> characters are already escaped before escaping them (else major breakage of\n> existing application would occur).. I didn't see the patch but I assume that\n> someone took a look to make sure before applying it.\n> \n> \n> -Mitch\n> \n> ----- Original Message -----\n> From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> To: \"Florian Weimer\" <Florian.Weimer@rus.uni-stuttgart.de>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Thursday, August 30, 2001 6:43 PM\n> Subject: Re: [HACKERS] Escaping strings for inclusion into SQL queries\n> \n> \n> > > Florian Weimer <Florian.Weimer@rus.uni-stuttgart.de> writes:\n> > >\n> > > > We therefore suggest that a string escaping function is included in a\n> > > > future version of PostgreSQL and libpq. 
A sample implementation is\n> > > > provided below, along with documentation.\n> > >\n> > > We have now released a description of the problems which occur when a\n> > > string escaping function is not used:\n> > >\n> > > http://cert.uni-stuttgart.de/advisories/apache_auth.php\n> > >\n> > > What further steps are required to make the suggested patch part of\n> > > the official libpq library?\n> >\n> > Will be applied soon. I was waiting for comments before adding it to\n> > the patch queue.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n", "msg_date": "Thu, 30 Aug 2001 19:32:58 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\"Mitch Vincent\" <mvincent@cablespeed.com> writes:\n\n> Perhaps I'm not thinking correctly but isn't it the job of the application\n> that's using the libpq library to escape special characters?\n\nYes, it is.\n\n> I guess I don't see a down side though, if it's implemented\n> correctly to check and see if characters are already escaped before\n> escaping them (else major breakage of existing application would\n> occur)..\n\nYou can't do this automatically because the strings needing escaping\nare not marked in any way at the moment.\n\n-- \nFlorian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\nUniversity of Stuttgart http://cert.uni-stuttgart.de/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898\n", "msg_date": "31 Aug 2001 02:37:26 +0200", "msg_from": "Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE>", "msg_from_op": true, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Ok, I misudnerstood, I had long included my own escaping function in\nprograms that used libpq, I thought the intent was to make escaping 
happen\nautomatically..\n\nThanks!\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Alex Pilosov\" <alex@pilosoft.com>\nTo: \"Mitch Vincent\" <mvincent@cablespeed.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Thursday, August 30, 2001 7:32 PM\nSubject: Re: [HACKERS] Escaping strings for inclusion into SQL queries\n\n\n> It is. Application is responsible to call PGescapeString (included in the\n> patch in question) to escape command that may possibly have user-specified\n> data... This function isn't called automatically.\n>\n> On Thu, 30 Aug 2001, Mitch Vincent wrote:\n>\n> > Perhaps I'm not thinking correctly but isn't it the job of the\napplication\n> > that's using the libpq library to escape special characters? I guess I\ndon't\n> > see a down side though, if it's implemented correctly to check and see\nif\n> > characters are already escaped before escaping them (else major breakage\nof\n> > existing application would occur).. I didn't see the patch but I assume\nthat\n> > someone took a look to make sure before applying it.\n> >\n> >\n> > -Mitch\n> >\n> > ----- Original Message -----\n> > From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> > To: \"Florian Weimer\" <Florian.Weimer@rus.uni-stuttgart.de>\n> > Cc: <pgsql-hackers@postgresql.org>\n> > Sent: Thursday, August 30, 2001 6:43 PM\n> > Subject: Re: [HACKERS] Escaping strings for inclusion into SQL queries\n> >\n> >\n> > > > Florian Weimer <Florian.Weimer@rus.uni-stuttgart.de> writes:\n> > > >\n> > > > > We therefore suggest that a string escaping function is included\nin a\n> > > > > future version of PostgreSQL and libpq. 
A sample implementation\nis\n> > > > > provided below, along with documentation.\n> > > >\n> > > > We have now released a description of the problems which occur when\na\n> > > > string escaping function is not used:\n> > > >\n> > > > http://cert.uni-stuttgart.de/advisories/apache_auth.php\n> > > >\n> > > > What further steps are required to make the suggested patch part of\n> > > > the official libpq library?\n> > >\n> > > Will be applied soon. I was waiting for comments before adding it to\n> > > the patch queue.\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> >\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Thu, 30 Aug 2001 21:27:28 -0400", "msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Your patch has been added to the PostgreSQL unapplied patches list at:\n> \n> http://candle.pha.pa.us/cgi-bin/pgpatches\n> \n> I will try to apply it within the next 48 hours.\n> \n> > It has come to our attention that many applications which use libpq\n> > are vulnerable to code insertion attacks in strings and identifiers\n> > passed to these applications. We have collected some evidence which\n> > suggests that this is related to the fact that libpq does not provide\n> > a function to escape strings and identifiers properly. 
(Both the\n> > Oracle and MySQL client libraries include such a function, and the\n> > vast majority of applications we examined are not vulnerable to code\n> > insertion attacks because they use this function.)\n\nI think the real difference is what I complained in another mail to this\nlist - \nin postgresql you can't do PREPARE / EXECUTE which could _automatically_\ndetect \nwhere string escaping is needed or just eliminate the need for escaping.\nIn postgreSQL you have to construct all queries yourself by inserting\nyour \nparameters inside your query strings in right places and escaping them\nwhen \nneeded. That is unless you use an interface like ODBC/JDBC that fakes\nthe \nPREPARE/EXECUTE on the client side and thus does the auto-escaping for\nyou .\n\n\nI think that this should be added to TODO\n\n* make portable BINARY representation for frontend-backend protocol by\nusing \n typsend/typreceive functions for binary and typinput typoutput for\nASCII\n (as currently typinput==typreceive and typoutput==typsend I suspect\nthe \n usage to be inconsistent). \n\n* make SQL changes to allow PREPARE/EXECUTE in main session, not only in\nSPI\n\n* make changes to client libraries to support marshalling arguments to\nEXECUTE\n using BINARY wire protocol or correctly escaped ASCII. The binary\nprotocol \n would be very helpful for BYTEA and other big binary types.\n\n\n> > We therefore suggest that a string escaping function is included in a\n> > future version of PostgreSQL and libpq. 
A sample implementation is\n> > provided below, along with documentation.\n\nWhile you are at it you could also supply a standard query delimiter\nfunction\nas this is also a thing that seems to vary from db to db.\n\n------------------\nHannu\n", "msg_date": "Fri, 31 Aug 2001 08:52:35 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Barry Lind wrote:\n> \n> I agree with Hannu, that:\n> \n> * make SQL changes to allow PREPARE/EXECUTE in main session, not only\n> in SPI\n\nA more ambitious project would be \n\n* develop an ANSI standard SQL/CLI compatible postgreSQL client library,\n change wire protocol and SQL language as needed ;)\n\n> is an important feature to expose out to the client. My primary reason\n> is a performance one. Allowing the client to parse a SQL statement once\n> and then supplying bind values for arguments and executing it multiple\n> times can save a significant amount of server CPU, since the parsing and\n> planning of the statement is only done once, even though multiple\n> executions occur. This functionality is available in the backend\n> (through SPI) and plpgsql uses it, but there isn't any way to take\n> advantage of this SPI functionality on the client (i.e. jdbc, odbc, etc.)\n> \n> I could see this implemented in different ways. One, by adding new SQL\n> commands to bind or execute an already open statement, or two, by\n> changing the FE/BE protocol to allow the client to open, parse,\n> describe, bind, execute and close a statement as separate actions that\n> can be sent to the server in one or more requests. (The latter is how\n> Oracle does it).\n\nThe latter is also the ODBC and JDBC view of how it is done. The current \nPG drivers have to fake it all on client side. 
\n\n> \n> I also would like to see this added to the todo list.\n> \n\n------------\nHannu\n", "msg_date": "Fri, 31 Aug 2001 21:03:57 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "I agree with Hannu, that:\n\n * make SQL changes to allow PREPARE/EXECUTE in main session, not only \nin SPI\n\nis an important feature to expose out to the client. My primary reason \nis a performance one. Allowing the client to parse a SQL statement once \nand then supplying bind values for arguments and executing it multiple \ntimes can save a significant amount of server CPU, since the parsing and \nplanning of the statement is only done once, even though multiple \nexecutions occur. This functionality is available in the backend \n(through SPI) and plpgsql uses it, but there isn't any way to take \nadvantage of this SPI functionality on the client (i.e. jdbc, odbc, etc.)\n\nI could see this implemented in different ways. One, by adding new SQL \ncommands to bind or execute an already open statement, or two, by \nchanging the FE/BE protocol to allow the client to open, parse, \ndescribe, bind, execute and close a statement as separate actions that \ncan be sent to the server in one or more requests. (The latter is how \nOracle does it).\n\nI also would like to see this added to the todo list.\n\nthanks,\n--Barry\n\n\nHannu Krosing wrote:\n> Bruce Momjian wrote:\n> \n>>Your patch has been added to the PostgreSQL unapplied patches list at:\n>>\n>> http://candle.pha.pa.us/cgi-bin/pgpatches\n>>\n>>I will try to apply it within the next 48 hours.\n>>\n>>\n>>>It has come to our attention that many applications which use libpq\n>>>are vulnerable to code insertion attacks in strings and identifiers\n>>>passed to these applications. 
We have collected some evidence which\n>>>suggests that this is related to the fact that libpq does not provide\n>>>a function to escape strings and identifiers properly. (Both the\n>>>Oracle and MySQL client libraries include such a function, and the\n>>>vast majority of applications we examined are not vulnerable to code\n>>>insertion attacks because they use this function.)\n>>>\n> \n> I think the real difference is what I complained in another mail to this\n> list - \n> in postgresql you can't do PREPARE / EXECUTE which could _automatically_\n> detect \n> where string escaping is needed or just eliminate the need for escaping.\n> In postgreSQL you have to construct all queries yourself by inserting\n> your \n> parameters inside your query strings in right places and escaping them\n> when \n> needed. That is unless you use an interface like ODBC/JDBC that fakes\n> the \n> PREPARE/EXECUTE on the client side and thus does the auto-escaping for\n> you .\n> \n> \n> I think that this should be added to TODO\n> \n> * make portable BINARY representation for frontend-backend protocol by\n> using \n> typsend/typreceive functions for binary and typinput typoutput for\n> ASCII\n> (as currently typinput==typreceive and typoutput==typsend I suspect\n> the \n> usage to be inconsistent). \n> \n> * make SQL changes to allow PREPARE/EXECUTE in main session, not only in\n> SPI\n> \n> * make changes to client libraries to support marshalling arguments to\n> EXECUTE\n> using BINARY wire protocol or correctly escaped ASCII. The binary\n> protocol \n> would be very helpful for BYTEA and other big binary types.\n> \n> \n> \n>>>We therefore suggest that a string escaping function is included in a\n>>>future version of PostgreSQL and libpq. 
A sample implementation is\n>>>provided below, along with documentation.\n>>>\n> \n> While you are at it you could also supply a standard query delimiter\n> function\n> as this is also a thing that seems to vary from db to db.\n> \n> ------------------\n> Hannu\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n\n", "msg_date": "Fri, 31 Aug 2001 10:03:04 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "For consistency with the rest of the libpq API, the function should be\ncalled PQescapeString, not PGescapeString.\n\nBruce Momjian writes:\n\n>\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n>\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n>\n> I will try to apply it within the next 48 hours.\n>\n> > It has come to our attention that many applications which use libpq\n> > are vulnerable to code insertion attacks in strings and identifiers\n> > passed to these applications. We have collected some evidence which\n> > suggests that this is related to the fact that libpq does not provide\n> > a function to escape strings and identifiers properly. (Both the\n> > Oracle and MySQL client libraries include such a function, and the\n> > vast majority of applications we examined are not vulnerable to code\n> > insertion attacks because they use this function.)\n> >\n> > We therefore suggest that a string escaping function is included in a\n> > future version of PostgreSQL and libpq. A sample implementation is\n> > provided below, along with documentation.\n> >\n> > --\n> > Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> > University of Stuttgart http://cert.uni-stuttgart.de/\n> > RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n>\n> [ Attachment, skipping... ]\n>\n> [ Attachment, skipping... 
]\n>\n> [ Attachment, skipping... ]\n>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 1 Sep 2001 09:53:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Florian Weimer writes:\n\n> The first version escaped ' with ''. I changed it when I noticed that\n> if \\' is used instead, the same function can be used for strings\n> ('...') and identifiers (\"...\").\n\nLast time I checked (15 seconds ago), you could not escape \" with \\ in\nPostgreSQL. The identifier parsing rules are a bit different from strings.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 1 Sep 2001 09:57:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Florian Weimer writes:\n> \n> > The first version escaped ' with ''. I changed it when I noticed that\n> > if \\' is used instead, the same function can be used for strings\n> > ('...') and identifiers (\"...\").\n> \n> Last time I checked (15 seconds ago), you could not escape \" with \\ in\n> PostgreSQL. The identifier parsing rules are a bit different from strings.\n\nYes, we misread the lexer description. I'm sorry about that.\n\nIn addition, there seems to be a bug in the treatment of \"\" escapes in\nidentifiers. 'SELECT \"\"\"\";' yields the error message 'Attribute '\"\"'\nnot found ' (not '\"'!) 
or even 'Attribute '\"\"\\' not found', depending\non the queries executed before.\n\nFor identifiers, comparing the characters to a white list is probably\na more reasonable approach.\n\n-- \nFlorian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\nUniversity of Stuttgart http://cert.uni-stuttgart.de/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898\n", "msg_date": "03 Sep 2001 18:03:37 +0200", "msg_from": "Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE>", "msg_from_op": true, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\nOK, can you supply an updated patch?\n\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n> > Florian Weimer writes:\n> > \n> > > The first version escaped ' with ''. I changed it when I noticed that\n> > > if \\' is used instead, the same function can be used for strings\n> > > ('...') and identifiers (\"...\").\n> > \n> > Last time I checked (15 seconds ago), you could not escape \" with \\ in\n> > PostgreSQL. The identifier parsing rules are a bit different from strings.\n> \n> Yes, we misread the lexer description. I'm sorry about that.\n> \n> In addition, there seems to be a bug in the treatment of \"\" escapes in\n> identifiers. 'SELECT \"\"\"\";' yields the error message 'Attribute '\"\"'\n> not found ' (not '\"'!) 
or even 'Attribute '\"\"\\' not found', depending\n> on the queries executed before.\n> \n> For identifiers, comparing the characters to a white list is probably\n> a more reasonable approach.\n> \n> -- \n> Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> University of Stuttgart http://cert.uni-stuttgart.de/\n> RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Sep 2001 16:28:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> OK, can you supply an updated patch?\n\nYes, I'm going to update it. Shall I post it here?\n\nCould anybody have a look at the parser issue?\n\n-- \nFlorian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\nUniversity of Stuttgart http://cert.uni-stuttgart.de/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898\n", "msg_date": "03 Sep 2001 22:59:54 +0200", "msg_from": "Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE>", "msg_from_op": true, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > OK, can you supply an updated patch?\n> \n> Yes, I'm going to update it. Shall I post it here?\n\nSure, or patches list.\n\n> Could anybody have a look at the parser issue?\n\nI am unsure how it is supposed to behave. Comments? 
Does the standard\nsay anything?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Sep 2001 17:19:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Florian Weimer writes:\n\n> In addition, there seems to be a bug in the treatment of \"\" escapes in\n> identifiers. 'SELECT \"\"\"\";' yields the error message 'Attribute '\"\"'\n> not found ' (not '\"'!) or even 'Attribute '\"\"\\' not found', depending\n> on the queries executed before.\n\nA bug indeed.\n\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/scan.l,v\nretrieving revision 1.88\ndiff -u -r1.88 scan.l\n--- scan.l 2001/03/22 17:41:47 1.88\n+++ scan.l 2001/09/03 22:11:46\n@@ -375,7 +375,7 @@\n return IDENT;\n }\n <xd>{xddouble} {\n- addlit(yytext, yyleng-1);\n+ addlit(yytext+1, yyleng-1);\n }\n <xd>{xdinside} {\n addlit(yytext, yyleng);\n===end\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 4 Sep 2001 00:17:51 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> A bug indeed.\n\n> <xd>{xddouble} {\n> - addlit(yytext, yyleng-1);\n> + addlit(yytext+1, yyleng-1);\n> }\n\nI don't follow. xddouble can only expand to two quote marks, so how\ndoes it matter which one we use as the result? This seems unlikely\nto change the behavior. 
If it does, I think the real bug is elsewhere.\n\nI do see a bug here --- I get\n\nregression=# select \"\"\"\";\nNOTICE: identifier \"\"\" [ lots o' rubouts ] @;�\" will be truncated to \"\"\"\"\nERROR: Attribute '\"\"' not found\nregression=#\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 20:10:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries " }, { "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > A bug indeed.\n>\n> > <xd>{xddouble} {\n> > - addlit(yytext, yyleng-1);\n> > + addlit(yytext+1, yyleng-1);\n> > }\n>\n> I don't follow. xddouble can only expand to two quote marks, so how\n> does it matter which one we use as the result?\n\naddlit() expects the first argument to be null-terminated and implicitly\nuses that null byte at the end of the supplied argument to terminate its\nown buffer. It expects to copy <doublequote><null> (new version), whereas\nit got (old version) <doublequote><doublequote> and left the buffer\nunterminated, which leads to random behavior, as you saw.\n\nSince there are only a few calls to addlit(), I didn't feel like\nre-engineering the whole interface to be prettier. It does look like a\nperformance-beneficial implementation.\n\nA concern related to the matter is that if you actually put such an\nidentifier into your database you basically make it undumpable (well,\nunrestorable) because no place is prepared to handle such a thing.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 4 Sep 2001 02:40:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I don't follow. 
xddouble can only expand to two quote marks, so how\n>> does it matter which one we use as the result?\n\n> addlit() expects the first argument to be null-terminated and implicitly\n> uses that null byte at the end of the supplied argument to terminate its\n> own buffer.\n\nHmm, so I see:\n\n\t/* append data --- note we assume ytext is null-terminated */\n\tmemcpy(literalbuf+literallen, ytext, yleng+1);\n\tliterallen += yleng;\n\nGiven that we are passing the length of the desired string, it seems\nbug-prone for addlit to *also* expect null termination. I'd suggest\n\n\tmemcpy(literalbuf+literallen, ytext, yleng);\n\tliterallen += yleng;\n\tliteralbuf[literallen] = '\\0';\n\ninstead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 20:44:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries " }, { "msg_contents": "I am going to apply this patch with the change that the function name\nwill be PQ* not PG*.\n\n> It has come to our attention that many applications which use libpq\n> are vulnerable to code insertion attacks in strings and identifiers\n> passed to these applications. We have collected some evidence which\n> suggests that this is related to the fact that libpq does not provide\n> a function to escape strings and identifiers properly. (Both the\n> Oracle and MySQL client libraries include such a function, and the\n> vast majority of applications we examined are not vulnerable to code\n> insertion attacks because they use this function.)\n> \n> We therefore suggest that a string escaping function is included in a\n> future version of PostgreSQL and libpq. A sample implementation is\n> provided below, along with documentation.\n> \n> -- \n> Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> University of Stuttgart http://cert.uni-stuttgart.de/\n> RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n\n[ Attachment, skipping... 
]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Sep 2001 20:59:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\nPatch removed at the request of the author. Author will resubmit.\n\n> It has come to our attention that many applications which use libpq\n> are vulnerable to code insertion attacks in strings and identifiers\n> passed to these applications. We have collected some evidence which\n> suggests that this is related to the fact that libpq does not provide\n> a function to escape strings and identifiers properly. (Both the\n> Oracle and MySQL client libraries include such a function, and the\n> vast majority of applications we examined are not vulnerable to code\n> insertion attacks because they use this function.)\n> \n> We therefore suggest that a string escaping function is included in a\n> future version of PostgreSQL and libpq. A sample implementation is\n> provided below, along with documentation.\n> \n> -- \n> Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> University of Stuttgart http://cert.uni-stuttgart.de/\n> RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 13:30:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Patch removed at the request of the author. Author will resubmit.\n\nI've attached the fixed version of the patch below. After the\ndiscussion on pgsql-hackers (especially the frightening memory dump in\n<12273.999562219@sss.pgh.pa.us>), we decided that it is best not to\nuse identifiers from an untrusted source at all. 
Therefore, all\nclaims of the suitability of PQescapeString() for identifiers have\nbeen removed.\n\n-- \nFlorian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\nUniversity of Stuttgart http://cert.uni-stuttgart.de/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898\n\n\nIndex: doc/src/sgml/libpq.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/libpq.sgml,v\nretrieving revision 1.68\ndiff -u -r1.68 libpq.sgml\n--- doc/src/sgml/libpq.sgml\t2001/09/04 00:18:18\t1.68\n+++ doc/src/sgml/libpq.sgml\t2001/09/04 18:32:05\n@@ -827,6 +827,42 @@\n </itemizedlist>\n </sect2>\n \n+<sect2 id=\"libpq-exec-escape-string\">\n+ <title>Escaping strings for inclusion in SQL queries</title>\n+<para>\n+<function>PQescapeString</function>\n+ Escapes a string for use within an SQL query.\n+<synopsis>\n+size_t PQescapeString (char *to, const char *from, size_t length);\n+</synopsis>\n+If you want to include strings which have been received\n+from a source which is not trustworthy (for example, because they were\n+transmitted across a network), you cannot directly include them in SQL\n+queries for security reasons. Instead, you have to quote special\n+characters which are otherwise interpreted by the SQL parser.\n+</para>\n+<para>\n+<function>PQescapeString</> performs this operation. The\n+<parameter>from</> points to the first character of the string which\n+is to be escaped, and the <parameter>length</> parameter counts the\n+number of characters in this string (a terminating NUL character is\n+neither necessary nor counted). <parameter>to</> shall point to a\n+buffer which is able to hold at least one more character than twice\n+the value of <parameter>length</>, otherwise the behavior is\n+undefined. 
A call to <function>PQescapeString</> writes an escaped\n+version of the <parameter>from</> string to the <parameter>to</>\n+buffer, replacing special characters so that they cannot cause any\n+harm, and adding a terminating NUL character. The single quotes which\n+must surround PostgreSQL string literals are not part of the result\n+string.\n+</para>\n+<para>\n+<function>PQescapeString</> returns the number of characters written\n+to <parameter>to</>, not including the terminating NUL character.\n+Behavior is undefined when the <parameter>to</> and <parameter>from</>\n+strings overlap.\n+</para>\n+\n <sect2 id=\"libpq-exec-select-info\">\n <title>Retrieving SELECT Result Information</title>\n \nIndex: src/interfaces/libpq/fe-exec.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.108\ndiff -u -r1.108 fe-exec.c\n--- src/interfaces/libpq/fe-exec.c\t2001/08/21 20:39:53\t1.108\n+++ src/interfaces/libpq/fe-exec.c\t2001/09/04 18:32:09\n@@ -56,6 +56,62 @@\n static int\tgetNotify(PGconn *conn);\n static int\tgetNotice(PGconn *conn);\n \n+/* ---------------\n+ * Escaping arbitrary strings to get valid SQL strings/identifiers.\n+ *\n+ * Replaces \"\\\\\" with \"\\\\\\\\\", \"\\0\" with \"\\\\0\", and \"'\" with \"''\".\n+ * length is the length of the buffer pointed to by\n+ * from. The buffer at to must be at least 2*length + 1 characters\n+ * long. A terminating NUL character is written.\n+ * ---------------\n+ */\n+\n+size_t\n+PQescapeString (char *to, const char *from, size_t length)\n+{\n+\tconst char *source = from;\n+\tchar *target = to;\n+\tunsigned int remaining = length;\n+\n+\twhile (remaining > 0) {\n+\t\tswitch (*source) {\n+\t\tcase '\\0':\n+\t\t\t*target = '\\\\';\n+\t\t\ttarget++;\n+\t\t\t*target = '0';\n+\t\t\t/* target and remaining are updated below. 
*/\n+\t\t\tbreak;\n+\t\t\t\n+\t\tcase '\\\\':\n+\t\t\t*target = '\\\\';\n+\t\t\ttarget++;\n+\t\t\t*target = '\\\\';\n+\t\t\t/* target and remaining are updated below. */\n+\t\t\tbreak;\n+\n+\t\tcase '\\'':\n+\t\t\t*target = '\\'';\n+\t\t\ttarget++;\n+\t\t\t*target = '\\'';\n+\t\t\t/* target and remaining are updated below. */\n+\t\t\tbreak;\n+\n+\t\tdefault:\n+\t\t\t*target = *source;\n+\t\t\t/* target and remaining are updated below. */\n+\t\t}\n+\t\tsource++;\n+\t\ttarget++;\n+\t\tremaining--;\n+\t}\n+\n+\t/* Write the terminating NUL character. */\n+\t*target = '\\0';\n+\t\n+\treturn target - to;\n+}\n+\n+\n \n /* ----------------\n * Space management for PGresult.\nIndex: src/interfaces/libpq/libpq-fe.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.72\ndiff -u -r1.72 libpq-fe.h\n--- src/interfaces/libpq/libpq-fe.h\t2001/08/21 20:39:54\t1.72\n+++ src/interfaces/libpq/libpq-fe.h\t2001/09/04 18:32:09\n@@ -251,6 +251,9 @@\n \n /* === in fe-exec.c === */\n \n+\t/* Quoting strings before inclusion in queries. */\n+\textern size_t PQescapeString (char *to, const char *from, size_t length);\n+\n \t/* Simple synchronous query */\n \textern PGresult *PQexec(PGconn *conn, const char *query);\n \textern PGnotify *PQnotifies(PGconn *conn);", "msg_date": "04 Sep 2001 20:42:47 +0200", "msg_from": "Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE>", "msg_from_op": true, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\nHas this been resolved?\n\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> I don't follow. 
xddouble can only expand to two quote marks, so how\n> >> does it matter which one we use as the result?\n> \n> > addlit() expects the first argument to be null-terminated and implicitly\n> > uses that null byte at the end of the supplied argument to terminate its\n> > own buffer.\n> \n> Hmm, so I see:\n> \n> \t/* append data --- note we assume ytext is null-terminated */\n> \tmemcpy(literalbuf+literallen, ytext, yleng+1);\n> \tliterallen += yleng;\n> \n> Given that we are passing the length of the desired string, it seems\n> bug-prone for addlit to *also* expect null termination. I'd suggest\n> \n> \tmemcpy(literalbuf+literallen, ytext, yleng);\n> \tliterallen += yleng;\n> \tliteralbuf[literallen] = '\\0';\n> \n> instead.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 16:16:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Has this been resolved?\n\nPeter applied his patch, but I am planning to also change addlit to not\nrequire null termination, because I believe we'll get bit again if we\ndon't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2001 16:26:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Escaping strings for inclusion into SQL queries " }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Patch removed at the request of the author. Author will resubmit.\n> \n> I've attached the fixed version of the patch below. After the\n> discussion on pgsql-hackers (especially the frightening memory dump in\n> <12273.999562219@sss.pgh.pa.us>), we decided that it is best not to\n> use identifiers from an untrusted source at all. 
Therefore, all\n> claims of the suitability of PQescapeString() for identifiers have\n> been removed.\n> \n> -- \n> Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> University of Stuttgart http://cert.uni-stuttgart.de/\n> RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n> \n\n> Index: doc/src/sgml/libpq.sgml\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/libpq.sgml,v\n> retrieving revision 1.68\n> diff -u -r1.68 libpq.sgml\n> --- doc/src/sgml/libpq.sgml\t2001/09/04 00:18:18\t1.68\n> +++ doc/src/sgml/libpq.sgml\t2001/09/04 18:32:05\n> @@ -827,6 +827,42 @@\n> </itemizedlist>\n> </sect2>\n> \n> +<sect2 id=\"libpq-exec-escape-string\">\n> + <title>Escaping strings for inclusion in SQL queries</title>\n> +<para>\n> +<function>PQescapeString</function>\n> + Escapes a string for use within an SQL query.\n> +<synopsis>\n> +size_t PQescapeString (char *to, const char *from, size_t length);\n> +</synopsis>\n> +If you want to include strings which have been received\n> +from a source which is not trustworthy (for example, because they were\n> +transmitted across a network), you cannot directly include them in SQL\n> +queries for security reasons. Instead, you have to quote special\n> +characters which are otherwise interpreted by the SQL parser.\n> +</para>\n> +<para>\n> +<function>PQescapeString</> performs this operation. The\n> +<parameter>from</> points to the first character of the string which\n> +is to be escaped, and the <parameter>length</> parameter counts the\n> +number of characters in this string (a terminating NUL character is\n> +neither necessary nor counted). <parameter>to</> shall point to a\n> +buffer which is able to hold at least one more character than twice\n> +the value of <parameter>length</>, otherwise the behavior is\n> +undefined. 
A call to <function>PQescapeString</> writes an escaped\n> +version of the <parameter>from</> string to the <parameter>to</>\n> +buffer, replacing special characters so that they cannot cause any\n> +harm, and adding a terminating NUL character. The single quotes which\n> +must surround PostgreSQL string literals are not part of the result\n> +string.\n> +</para>\n> +<para>\n> +<function>PQescapeString</> returns the number of characters written\n> +to <parameter>to</>, not including the terminating NUL character.\n> +Behavior is undefined when the <parameter>to</> and <parameter>from</>\n> +strings overlap.\n> +</para>\n> +\n> <sect2 id=\"libpq-exec-select-info\">\n> <title>Retrieving SELECT Result Information</title>\n> \n> Index: src/interfaces/libpq/fe-exec.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.108\n> diff -u -r1.108 fe-exec.c\n> --- src/interfaces/libpq/fe-exec.c\t2001/08/21 20:39:53\t1.108\n> +++ src/interfaces/libpq/fe-exec.c\t2001/09/04 18:32:09\n> @@ -56,6 +56,62 @@\n> static int\tgetNotify(PGconn *conn);\n> static int\tgetNotice(PGconn *conn);\n> \n> +/* ---------------\n> + * Escaping arbitrary strings to get valid SQL strings/identifiers.\n> + *\n> + * Replaces \"\\\\\" with \"\\\\\\\\\", \"\\0\" with \"\\\\0\", and \"'\" with \"''\".\n> + * length is the length of the buffer pointed to by\n> + * from. The buffer at to must be at least 2*length + 1 characters\n> + * long. 
A terminating NUL character is written.\n> + * ---------------\n> + */\n> +\n> +size_t\n> +PQescapeString (char *to, const char *from, size_t length)\n> +{\n> +\tconst char *source = from;\n> +\tchar *target = to;\n> +\tunsigned int remaining = length;\n> +\n> +\twhile (remaining > 0) {\n> +\t\tswitch (*source) {\n> +\t\tcase '\\0':\n> +\t\t\t*target = '\\\\';\n> +\t\t\ttarget++;\n> +\t\t\t*target = '0';\n> +\t\t\t/* target and remaining are updated below. */\n> +\t\t\tbreak;\n> +\t\t\t\n> +\t\tcase '\\\\':\n> +\t\t\t*target = '\\\\';\n> +\t\t\ttarget++;\n> +\t\t\t*target = '\\\\';\n> +\t\t\t/* target and remaining are updated below. */\n> +\t\t\tbreak;\n> +\n> +\t\tcase '\\'':\n> +\t\t\t*target = '\\'';\n> +\t\t\ttarget++;\n> +\t\t\t*target = '\\'';\n> +\t\t\t/* target and remaining are updated below. */\n> +\t\t\tbreak;\n> +\n> +\t\tdefault:\n> +\t\t\t*target = *source;\n> +\t\t\t/* target and remaining are updated below. */\n> +\t\t}\n> +\t\tsource++;\n> +\t\ttarget++;\n> +\t\tremaining--;\n> +\t}\n> +\n> +\t/* Write the terminating NUL character. */\n> +\t*target = '\\0';\n> +\t\n> +\treturn target - to;\n> +}\n> +\n> +\n> \n> /* ----------------\n> * Space management for PGresult.\n> Index: src/interfaces/libpq/libpq-fe.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.72\n> diff -u -r1.72 libpq-fe.h\n> --- src/interfaces/libpq/libpq-fe.h\t2001/08/21 20:39:54\t1.72\n> +++ src/interfaces/libpq/libpq-fe.h\t2001/09/04 18:32:09\n> @@ -251,6 +251,9 @@\n> \n> /* === in fe-exec.c === */\n> \n> +\t/* Quoting strings before inclusion in queries. 
*/\n> +\textern size_t PQescapeString (char *to, const char *from, size_t length);\n> +\n> \t/* Simple synchronous query */\n> \textern PGresult *PQexec(PGconn *conn, const char *query);\n> \textern PGnotify *PQnotifies(PGconn *conn);\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 17:26:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\nPatch applied. Thanks.\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Patch removed at the request of the author. Author will resubmit.\n> \n> I've attached the fixed version of the patch below. After the\n> discussion on pgsql-hackers (especially the frightening memory dump in\n> <12273.999562219@sss.pgh.pa.us>), we decided that it is best not to\n> use identifiers from an untrusted source at all. 
Therefore, all\n> claims of the suitability of PQescapeString() for identifiers have\n> been removed.\n> \n> -- \n> Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> University of Stuttgart http://cert.uni-stuttgart.de/\n> RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n> \n\n> Index: doc/src/sgml/libpq.sgml\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/libpq.sgml,v\n> retrieving revision 1.68\n> diff -u -r1.68 libpq.sgml\n> --- doc/src/sgml/libpq.sgml\t2001/09/04 00:18:18\t1.68\n> +++ doc/src/sgml/libpq.sgml\t2001/09/04 18:32:05\n> @@ -827,6 +827,42 @@\n> </itemizedlist>\n> </sect2>\n> \n> +<sect2 id=\"libpq-exec-escape-string\">\n> + <title>Escaping strings for inclusion in SQL queries</title>\n> +<para>\n> +<function>PQescapeString</function>\n> + Escapes a string for use within an SQL query.\n> +<synopsis>\n> +size_t PQescapeString (char *to, const char *from, size_t length);\n> +</synopsis>\n> +If you want to include strings which have been received\n> +from a source which is not trustworthy (for example, because they were\n> +transmitted across a network), you cannot directly include them in SQL\n> +queries for security reasons. Instead, you have to quote special\n> +characters which are otherwise interpreted by the SQL parser.\n> +</para>\n> +<para>\n> +<function>PQescapeString</> performs this operation. The\n> +<parameter>from</> points to the first character of the string which\n> +is to be escaped, and the <parameter>length</> parameter counts the\n> +number of characters in this string (a terminating NUL character is\n> +neither necessary nor counted). <parameter>to</> shall point to a\n> +buffer which is able to hold at least one more character than twice\n> +the value of <parameter>length</>, otherwise the behavior is\n> +undefined. 
A call to <function>PQescapeString</> writes an escaped\n> +version of the <parameter>from</> string to the <parameter>to</>\n> +buffer, replacing special characters so that they cannot cause any\n> +harm, and adding a terminating NUL character. The single quotes which\n> +must surround PostgreSQL string literals are not part of the result\n> +string.\n> +</para>\n> +<para>\n> +<function>PQescapeString</> returns the number of characters written\n> +to <parameter>to</>, not including the terminating NUL character.\n> +Behavior is undefined when the <parameter>to</> and <parameter>from</>\n> +strings overlap.\n> +</para>\n> +\n> <sect2 id=\"libpq-exec-select-info\">\n> <title>Retrieving SELECT Result Information</title>\n> \n> Index: src/interfaces/libpq/fe-exec.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.108\n> diff -u -r1.108 fe-exec.c\n> --- src/interfaces/libpq/fe-exec.c\t2001/08/21 20:39:53\t1.108\n> +++ src/interfaces/libpq/fe-exec.c\t2001/09/04 18:32:09\n> @@ -56,6 +56,62 @@\n> static int\tgetNotify(PGconn *conn);\n> static int\tgetNotice(PGconn *conn);\n> \n> +/* ---------------\n> + * Escaping arbitrary strings to get valid SQL strings/identifiers.\n> + *\n> + * Replaces \"\\\\\" with \"\\\\\\\\\", \"\\0\" with \"\\\\0\", and \"'\" with \"''\".\n> + * length is the length of the buffer pointed to by\n> + * from. The buffer at to must be at least 2*length + 1 characters\n> + * long. 
A terminating NUL character is written.\n> + * ---------------\n> + */\n> +\n> +size_t\n> +PQescapeString (char *to, const char *from, size_t length)\n> +{\n> +\tconst char *source = from;\n> +\tchar *target = to;\n> +\tunsigned int remaining = length;\n> +\n> +\twhile (remaining > 0) {\n> +\t\tswitch (*source) {\n> +\t\tcase '\\0':\n> +\t\t\t*target = '\\\\';\n> +\t\t\ttarget++;\n> +\t\t\t*target = '0';\n> +\t\t\t/* target and remaining are updated below. */\n> +\t\t\tbreak;\n> +\t\t\t\n> +\t\tcase '\\\\':\n> +\t\t\t*target = '\\\\';\n> +\t\t\ttarget++;\n> +\t\t\t*target = '\\\\';\n> +\t\t\t/* target and remaining are updated below. */\n> +\t\t\tbreak;\n> +\n> +\t\tcase '\\'':\n> +\t\t\t*target = '\\'';\n> +\t\t\ttarget++;\n> +\t\t\t*target = '\\'';\n> +\t\t\t/* target and remaining are updated below. */\n> +\t\t\tbreak;\n> +\n> +\t\tdefault:\n> +\t\t\t*target = *source;\n> +\t\t\t/* target and remaining are updated below. */\n> +\t\t}\n> +\t\tsource++;\n> +\t\ttarget++;\n> +\t\tremaining--;\n> +\t}\n> +\n> +\t/* Write the terminating NUL character. */\n> +\t*target = '\\0';\n> +\t\n> +\treturn target - to;\n> +}\n> +\n> +\n> \n> /* ----------------\n> * Space management for PGresult.\n> Index: src/interfaces/libpq/libpq-fe.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.72\n> diff -u -r1.72 libpq-fe.h\n> --- src/interfaces/libpq/libpq-fe.h\t2001/08/21 20:39:54\t1.72\n> +++ src/interfaces/libpq/libpq-fe.h\t2001/09/04 18:32:09\n> @@ -251,6 +251,9 @@\n> \n> /* === in fe-exec.c === */\n> \n> +\t/* Quoting strings before inclusion in queries. 
*/\n> +\textern size_t PQescapeString (char *to, const char *from, size_t length);\n> +\n> \t/* Simple synchronous query */\n> \textern PGresult *PQexec(PGconn *conn, const char *query);\n> \textern PGnotify *PQnotifies(PGconn *conn);\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 18:02:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "> Patch applied. Thanks.\n>\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >\n> > > Patch removed at the request of the author. Author will resubmit.\n> >\n> > I've attached the fixed version of the patch below. After the\n> > discussion on pgsql-hackers (especially the frightening memory dump in\n> > <12273.999562219@sss.pgh.pa.us>), we decided that it is best not to\n> > use identifiers from an untrusted source at all. Therefore, all\n> > claims of the suitability of PQescapeString() for identifiers have\n> > been removed.\n\nI found a problem with PQescapeString (I think). Since it escapes\nnull bytes to be literally '\\0', the following can happen:\n1. User inputs string value as \"<null byte>##\" where ## are digits in the\nrange of 0 to 7.\n2. PQescapeString converts this to \"\\0##\"\n3. Escaped string is used in a context that causes \"\\0##\" to be evaluated as\nan octal escape sequence.\n\nFor example, if the user enters a null byte followed by \"47\", and escapes\nit, it becomes \"\\071\" which gets translated into a single digit \"9\" by the\ngeneral parser. 
Along the same lines, if there is a null byte in a string,\nand it is not followed by digits, the resulting \"\\0\" gets converted back\ninto a null byte by the parser and the string gets truncated.\n\nIf the goal is to \"safely\" encode null bytes, and preserve the rest of the\nstring as it was entered, I think the null bytes should be escaped as \\\\000\n(note that if you simply use \\000 the same string truncation problem\noccurs).\n\n-- Joe\n\n\n\n", "msg_date": "Mon, 10 Sep 2001 23:26:23 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n\n> I found a problem with PQescapeString (I think). Since it escapes\n> null bytes to be literally '\\0', the following can happen:\n> 1. User inputs string value as \"<null byte>##\" where ## are digits in the\n> range of 0 to 7.\n> 2. PQescapeString converts this to \"\\0##\"\n> 3. Escaped string is used in a context that causes \"\\0##\" to be evaluated as\n> an octal escape sequence.\n\nI agree that this is a problem, though it is not possible to do\nanything harmful with it. 
In addition, it only occurs if there are\nany NUL characters in its input, which is very unlikely if you are\nusing C strings.\n\nThe patch below addresses the issue by removing escaping of \\0\ncharacters entirely.\n\n> If the goal is to \"safely\" encode null bytes, and preserve the rest of the\n> string as it was entered, I think the null bytes should be escaped as \\\\000\n> (note that if you simply use \\000 the same string truncation problem\n> occurs).\n\nWe can't do that, this would require 4n + 1 bytes of storage for the\nresult, breaking the interface.\n\n-- \nFlorian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\nUniversity of Stuttgart http://cert.uni-stuttgart.de/\nRUS-CERT +49-711-685-5973/fax +49-711-685-5898", "msg_date": "11 Sep 2001 16:22:55 +0200", "msg_from": "Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE>", "msg_from_op": true, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\nI think we need this patch. Bytea encoding will be changed to accept\n\\000 rather than \\0 for the same reason. I also agree that the libpq\nenescaping of a C string doesn't need to deal with NULL like bytea does.\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> \"Joe Conway\" <joseph.conway@home.com> writes:\n> \n> > I found a problem with PQescapeString (I think). Since it escapes\n> > null bytes to be literally '\\0', the following can happen:\n> > 1. User inputs string value as \"<null byte>##\" where ## are digits in the\n> > range of 0 to 7.\n> > 2. PQescapeString converts this to \"\\0##\"\n> > 3. Escaped string is used in a context that causes \"\\0##\" to be evaluated as\n> > an octal escape sequence.\n> \n> I agree that this is a problem, though it is not possible to do\n> anything harmful with it. 
In addition, it only occurs if there are\n> any NUL characters in its input, which is very unlikely if you are\n> using C strings.\n> \n> The patch below addresses the issue by removing escaping of \\0\n> characters entirely.\n> \n> > If the goal is to \"safely\" encode null bytes, and preserve the rest of the\n> > string as it was entered, I think the null bytes should be escaped as \\\\000\n> > (note that if you simply use \\000 the same string truncation problem\n> > occurs).\n> \n> We can't do that, this would require 4n + 1 bytes of storage for the\n> result, breaking the interface.\n> \n> -- \n> Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> University of Stuttgart http://cert.uni-stuttgart.de/\n> RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 00:20:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queries" }, { "msg_contents": "\nPatch applied. Thanks.\n\n> \"Joe Conway\" <joseph.conway@home.com> writes:\n> \n> > I found a problem with PQescapeString (I think). Since it escapes\n> > null bytes to be literally '\\0', the following can happen:\n> > 1. User inputs string value as \"<null byte>##\" where ## are digits in the\n> > range of 0 to 7.\n> > 2. PQescapeString converts this to \"\\0##\"\n> > 3. 
Escaped string is used in a context that causes \"\\0##\" to be evaluated as\n> > an octal escape sequence.\n> \n> I agree that this is a problem, though it is not possible to do\n> anything harmful with it. In addition, it only occurs if there are\n> any NUL characters in its input, which is very unlikely if you are\n> using C strings.\n> \n> The patch below addresses the issue by removing escaping of \\0\n> characters entirely.\n> \n> > If the goal is to \"safely\" encode null bytes, and preserve the rest of the\n> > string as it was entered, I think the null bytes should be escaped as \\\\000\n> > (note that if you simply use \\000 the same string truncation problem\n> > occurs).\n> \n> We can't do that, this would require 4n + 1 bytes of storage for the\n> result, breaking the interface.\n> \n> -- \n> Florian Weimer \t Florian.Weimer@RUS.Uni-Stuttgart.DE\n> University of Stuttgart http://cert.uni-stuttgart.de/\n> RUS-CERT +49-711-685-5973/fax +49-711-685-5898\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Sep 2001 13:00:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Escaping strings for inclusion into SQL queriesh" } ]
[ { "msg_contents": "Hi,\n\nThere are two problems when compiling libpq.dll and psql.exe\non Windows. I'm not sure it is the best way to fix them\n(see patch below.) Comments?\n\nRegards,\nMikhail Terekhov\n\nIndex: bin/psql/prompt.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/prompt.c,v\nretrieving revision 1.19\ndiff -C3 -r1.19 prompt.c\n*** bin/psql/prompt.c\t2001/05/06 17:21:11\t1.19\n--- bin/psql/prompt.c\t2001/08/22 18:27:26\n***************\n*** 129,134 ****\n--- 129,135 ----\n \t\t\t\t\t\t\tif (*p == 'm')\n \t\t\t\t\t\t\t\tbuf[strcspn(buf, \".\")] = '\\0';\n \t\t\t\t\t\t}\n+ #ifndef WIN32\n \t\t\t\t\t\t/* UNIX socket */\n \t\t\t\t\t\telse\n \t\t\t\t\t\t{\n***************\n*** 139,144 ****\n--- 140,146 ----\n \t\t\t\t\t\t\telse\n \t\t\t\t\t\t\t\tsnprintf(buf, MAX_PROMPT_SIZE, \"[local:%s]\", host);\n \t\t\t\t\t\t}\n+ #endif\n \t\t\t\t\t}\n \t\t\t\t\tbreak;\n \t\t\t\t\t/* DB server port number */\nIndex: include/libpq/hba.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/libpq/hba.h,v\nretrieving revision 1.24\ndiff -C3 -r1.24 hba.h\n*** include/libpq/hba.h\t2001/08/16 16:24:16\t1.24\n--- include/libpq/hba.h\t2001/08/22 18:27:26\n***************\n*** 11,17 ****\n--- 11,19 ----\n #ifndef HBA_H\n #define HBA_H\n \n+ #ifndef WIN32\n #include <netinet/in.h>\n+ #endif\n \n #define CONF_FILE \"pg_hba.conf\"\n /* Name of the config file */\n", "msg_date": "Wed, 22 Aug 2001 14:44:23 -0400", "msg_from": "Mikhail Terekhov <terekhov@emc.com>", "msg_from_op": true, "msg_subject": "libpq.dll & psql.exe on Win32" }, { "msg_contents": "Mikhail Terekhov <terekhov@emc.com> writes:\n> There are two problems when compiling libpq.dll and psql.exe\n> on Windows. I'm not sure it is the best way to fix them\n> (see patch below.) Comments?\n\nThe first should probably be conditional on HAVE_UNIX_SOCKETS,\nnot on WIN32. 
The second change looks okay to me.\n\n\t\t\tregards, tom lane\n\n> Index: bin/psql/prompt.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/prompt.c,v\n> retrieving revision 1.19\n> diff -C3 -r1.19 prompt.c\n> *** bin/psql/prompt.c\t2001/05/06 17:21:11\t1.19\n> --- bin/psql/prompt.c\t2001/08/22 18:27:26\n> ***************\n> *** 129,134 ****\n> --- 129,135 ----\n> \t\t\t\t\t\t\tif (*p == 'm')\n> \t\t\t\t\t\t\t\tbuf[strcspn(buf, \".\")] = '\\0';\n> \t\t\t\t\t\t}\n> + #ifndef WIN32\n> \t\t\t\t\t\t/* UNIX socket */\n> \t\t\t\t\t\telse\n> \t\t\t\t\t\t{\n> ***************\n> *** 139,144 ****\n> --- 140,146 ----\n> \t\t\t\t\t\t\telse\n> \t\t\t\t\t\t\t\tsnprintf(buf, MAX_PROMPT_SIZE, \"[local:%s]\", host);\n> \t\t\t\t\t\t}\n> + #endif\n> \t\t\t\t\t}\n> \t\t\t\t\tbreak;\n> \t\t\t\t\t/* DB server port number */\n> Index: include/libpq/hba.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/libpq/hba.h,v\n> retrieving revision 1.24\n> diff -C3 -r1.24 hba.h\n> *** include/libpq/hba.h\t2001/08/16 16:24:16\t1.24\n> --- include/libpq/hba.h\t2001/08/22 18:27:26\n> ***************\n> *** 11,17 ****\n> --- 11,19 ----\n> #ifndef HBA_H\n> #define HBA_H\n \n> + #ifndef WIN32\n> #include <netinet/in.h>\n> + #endif\n \n> #define CONF_FILE \"pg_hba.conf\"\n> /* Name of the config file */\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n\n> http://www.postgresql.org/search.mpl\n", "msg_date": "Wed, 22 Aug 2001 20:55:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq.dll & psql.exe on Win32 " }, { "msg_contents": "Approved with Tom's suggested changes.\n\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it 
within the next 48 hours.\n\n> Hi,\n> \n> There are two problems when compiling libpq.dll and psql.exe\n> on Windows. I'm not sure it is the best way to fix them\n> (see patch below.) Comments?\n> \n> Regards,\n> Mikhail Terekhov\n> \n> Index: bin/psql/prompt.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/prompt.c,v\n> retrieving revision 1.19\n> diff -C3 -r1.19 prompt.c\n> *** bin/psql/prompt.c\t2001/05/06 17:21:11\t1.19\n> --- bin/psql/prompt.c\t2001/08/22 18:27:26\n> ***************\n> *** 129,134 ****\n> --- 129,135 ----\n> \t\t\t\t\t\t\tif (*p == 'm')\n> \t\t\t\t\t\t\t\tbuf[strcspn(buf, \".\")] = '\\0';\n> \t\t\t\t\t\t}\n> + #ifndef WIN32\n> \t\t\t\t\t\t/* UNIX socket */\n> \t\t\t\t\t\telse\n> \t\t\t\t\t\t{\n> ***************\n> *** 139,144 ****\n> --- 140,146 ----\n> \t\t\t\t\t\t\telse\n> \t\t\t\t\t\t\t\tsnprintf(buf, MAX_PROMPT_SIZE, \"[local:%s]\", host);\n> \t\t\t\t\t\t}\n> + #endif\n> \t\t\t\t\t}\n> \t\t\t\t\tbreak;\n> \t\t\t\t\t/* DB server port number */\n> Index: include/libpq/hba.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/libpq/hba.h,v\n> retrieving revision 1.24\n> diff -C3 -r1.24 hba.h\n> *** include/libpq/hba.h\t2001/08/16 16:24:16\t1.24\n> --- include/libpq/hba.h\t2001/08/22 18:27:26\n> ***************\n> *** 11,17 ****\n> --- 11,19 ----\n> #ifndef HBA_H\n> #define HBA_H\n> \n> + #ifndef WIN32\n> #include <netinet/in.h>\n> + #endif\n> \n> #define CONF_FILE \"pg_hba.conf\"\n> /* Name of the config file */\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 22 Aug 2001 21:22:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq.dll & psql.exe on Win32" }, { "msg_contents": "I have applied the attached patch that is basically your patch with\nTom's suggestion to use HAVE_UNIX_SOCKETS.\n \n\n> Hi,\n> \n> There are two problems when compiling libpq.dll and psql.exe\n> on Windows. I'm not sure it is the best way to fix them\n> (see patch below.) Comments?\n> \n> Regards,\n> Mikhail Terekhov\n> \n> Index: bin/psql/prompt.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/prompt.c,v\n> retrieving revision 1.19\n> diff -C3 -r1.19 prompt.c\n> *** bin/psql/prompt.c\t2001/05/06 17:21:11\t1.19\n> --- bin/psql/prompt.c\t2001/08/22 18:27:26\n> ***************\n> *** 129,134 ****\n> --- 129,135 ----\n> \t\t\t\t\t\t\tif (*p == 'm')\n> \t\t\t\t\t\t\t\tbuf[strcspn(buf, \".\")] = '\\0';\n> \t\t\t\t\t\t}\n> + #ifndef WIN32\n> \t\t\t\t\t\t/* UNIX socket */\n> \t\t\t\t\t\telse\n> \t\t\t\t\t\t{\n> ***************\n> *** 139,144 ****\n> --- 140,146 ----\n> \t\t\t\t\t\t\telse\n> \t\t\t\t\t\t\t\tsnprintf(buf, MAX_PROMPT_SIZE, \"[local:%s]\", host);\n> \t\t\t\t\t\t}\n> + #endif\n> \t\t\t\t\t}\n> \t\t\t\t\tbreak;\n> \t\t\t\t\t/* DB server port number */\n> Index: include/libpq/hba.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/libpq/hba.h,v\n> retrieving revision 1.24\n> diff -C3 -r1.24 hba.h\n> *** include/libpq/hba.h\t2001/08/16 16:24:16\t1.24\n> --- include/libpq/hba.h\t2001/08/22 18:27:26\n> ***************\n> *** 11,17 ****\n> --- 11,19 ----\n> #ifndef HBA_H\n> #define HBA_H\n> \n> + #ifndef WIN32\n> #include <netinet/in.h>\n> + #endif\n> \n> #define CONF_FILE \"pg_hba.conf\"\n> /* Name of the config file */\n> \n> ---------------------------(end of 
broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/bin/psql/prompt.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/prompt.c,v\nretrieving revision 1.19\ndiff -c -r1.19 prompt.c\n*** src/bin/psql/prompt.c\t2001/05/06 17:21:11\t1.19\n--- src/bin/psql/prompt.c\t2001/08/24 16:55:13\n***************\n*** 129,134 ****\n--- 129,135 ----\n \t\t\t\t\t\t\tif (*p == 'm')\n \t\t\t\t\t\t\t\tbuf[strcspn(buf, \".\")] = '\\0';\n \t\t\t\t\t\t}\n+ #ifndef HAVE_UNIX_SOCKETS\n \t\t\t\t\t\t/* UNIX socket */\n \t\t\t\t\t\telse\n \t\t\t\t\t\t{\n***************\n*** 139,144 ****\n--- 140,146 ----\n \t\t\t\t\t\t\telse\n \t\t\t\t\t\t\t\tsnprintf(buf, MAX_PROMPT_SIZE, \"[local:%s]\", host);\n \t\t\t\t\t\t}\n+ #endif\n \t\t\t\t\t}\n \t\t\t\t\tbreak;\n \t\t\t\t\t/* DB server port number */\nIndex: src/include/libpq/hba.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/libpq/hba.h,v\nretrieving revision 1.24\ndiff -c -r1.24 hba.h\n*** src/include/libpq/hba.h\t2001/08/16 16:24:16\t1.24\n--- src/include/libpq/hba.h\t2001/08/24 16:55:13\n***************\n*** 11,17 ****\n--- 11,19 ----\n #ifndef HBA_H\n #define HBA_H\n \n+ #ifndef WIN32\n #include <netinet/in.h>\n+ #endif\n \n #define CONF_FILE \"pg_hba.conf\"\n /* Name of the config file */", "msg_date": "Fri, 24 Aug 2001 12:58:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq.dll & psql.exe on Win32" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \t\t\t\t\t\t}\n> + #ifndef HAVE_UNIX_SOCKETS\n> 
\t\t\t\t\t\t/* UNIX socket */\n> \t\t\t\t\t\telse\n> \t\t\t\t\t\t{\n> ***************\n\nUh, that test is backwards.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Aug 2001 15:57:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq.dll & psql.exe on Win32 " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \t\t\t\t\t\t}\n> > + #ifndef HAVE_UNIX_SOCKETS\n> > \t\t\t\t\t\t/* UNIX socket */\n> > \t\t\t\t\t\telse\n> > \t\t\t\t\t\t{\n> > ***************\n> \n> Uh, that test is backwards.\n\nThanks. Fixed and committed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 15:59:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq.dll & psql.exe on Win32" } ]
[ { "msg_contents": "I am getting presistant requests from people asking why their patches\nhaven't appeared on the patches list yet. I have had it happen to me. \n\nThis is pretty serious because when you add another day of delay in\nemail delivery to the 1-2 days the patch sits in the patch queue, things\nare getting out of sync.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 22 Aug 2001 16:03:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Patch list delay" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am getting presistant requests from people asking why their patches\n> haven't appeared on the patches list yet. I have had it happen to me. \n\nI believe part of the problem is that pgsql-patches has the same size\nlimit (40K IIRC) as all the other lists for messages to go through\nwithout approval.\n\nISTM that for pgsql-patches a larger limit would be appropriate, at\nleast a couple hundred K.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Aug 2001 18:11:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch list delay " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am getting presistant requests from people asking why their patches\n> > haven't appeared on the patches list yet. I have had it happen to me. \n> \n> I believe part of the problem is that pgsql-patches has the same size\n> limit (40K IIRC) as all the other lists for messages to go through\n> without approval.\n> \n> ISTM that for pgsql-patches a larger limit would be appropriate, at\n> least a couple hundred K.\n\nYes. If I am on patches, I expect to receive big files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 22 Aug 2001 19:04:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Patch list delay" } ]
[ { "msg_contents": "Originally, I added --enable-syslog because it used to be an option in\nconfig.h only. However, I wonder why we don't always compile it in, it's\noff by default anyway. The only reason I could think of is a portability\nproblem. Is there any platform that does not supply the standard syslog\ninterface?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 22 Aug 2001 22:44:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Remove --enable-syslog?" }, { "msg_contents": "> Originally, I added --enable-syslog because it used to be an option in\n> config.h only. However, I wonder why we don't always compile it in, it's\n> off by default anyway. The only reason I could think of is a portability\n> problem. Is there any platform that does not supply the standard syslog\n> interface?\n\nIf we have configure to test for its existance, I can't see why we\nwouldn't want it enabled in the binary.\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 16:25:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove --enable-syslog?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Is there any platform that does not supply the standard syslog\n> interface?\n\nWhy worry? Do AC_CHECK_FUNC(syslog), or some such.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 18:01:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove --enable-syslog? " } ]
[ { "msg_contents": "Here is what we install by default and what we could do about it:\n\nc.h\t\t\t[1]\nconfig.h\t\trename to pg_config.h\necpgerrno.h\t\tok\necpglib.h\t\tok\necpgtype.h\t\tok\niodbc/\t\t\t[3]\n iodbc.h\n isql.h\n isqlext.h\nlib/\t\t\t[1]\n dllist.h\nlibpgeasy.h\t\tok\nlibpgtcl.h\t\tok\nlibpq/\t\t\t[1]\n libpq-fs.h\n pqcomm.h\nlibpq++/\t\tok\n pgconnection.h\n pgcursordb.h\n pgdatabase.h\n pglobject.h\n pgtransdb.h\nlibpq++.h\t\tok\nlibpq-fe.h\t\tok\nlibpq-int.h\t\t[1]\nos.h\t\t\trename to pg_config_os.h\npostgres_ext.h\t\tok\npostgres_fe.h\t\t[1]\npqexpbuffer.h\t\t[1]\nsql3types.h\t\t[2]\nsqlca.h\t\t\t[2]\n\n[1] -- The libpq-int.h draws in a lot of internal structure, true to the\nname. Something should be done about that, such as not installing it, or\nmoving it to a \"hidden\" place. Ideas?\n\n[2] -- The ecpg preprocessor will actually include copies of these into\nthe output file when seeing the commands 'exec sql include sql3types;'\netc., thus not really making these include files in the traditional sense.\nThe names are okay for the moment, but I will research this some more.\n\n[3] -- The names clash with the actual iodbc package. 
Not sure if this is\nintended, but I will evaluate with the odbc group.\n\nThe idea I currently have for the installation layout including the server\nincludes is this:\n\n--prefix=/usr/local/pgsql\n\nlibpq-fe.h\t=> /usr/local/pgsql/include/libpq-fe.h\naccess/xlog.h\t=> /usr/local/pgsql/include/server/access/xlog.h\n\n--prefix=/usr/local\n\nlibpq-fe.h\t=> /usr/local/include/libpq-fe.h\naccess/xlog.h\t=> /usr/local/include/postgresql/server/access/xlog.h\n\npg_config will get an option --server-includedir to point to the files.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 23 Aug 2001 00:18:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Assessment on namespace clean include file names" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> [1] -- The libpq-int.h draws in a lot of internal structure, true to the\n> name. Something should be done about that, such as not installing it, or\n> moving it to a \"hidden\" place. Ideas?\n\nlibpq-int.h was always intended to be strictly internal. 
I made it part\nof the export fileset when it was created because I feared that there were\nprobably applications out there that were using direct access to fields of\nPGresult, and I wanted to give them breathing room to update their code.\nBut at this point they've had several releases worth of breathing room.\nI see no reason why we can't drop the other shoe and stop exporting\nlibpq-int.h (or more accurately, move it out of the public namespace,\nsame as you propose for backend headers).\n\n> The idea I currently have for the installation layout including the server\n> includes is this:\n\n> --prefix=/usr/local/pgsql\n\n> libpq-fe.h\t=> /usr/local/pgsql/include/libpq-fe.h\n> access/xlog.h\t=> /usr/local/pgsql/include/server/access/xlog.h\n\n\"server\" may not be a great choice if we want to stick libpq-int.h into\nit too...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 19:27:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assessment on namespace clean include file names " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I see no reason why we can't drop the other shoe and stop exporting\n>> libpq-int.h (or more accurately, move it out of the public namespace,\n>> same as you propose for backend headers).\n\n>> \"server\" may not be a great choice if we want to stick libpq-int.h into\n>> it too...\n\n> The directory wasn't meant as a place for hiding things to avoid using\n> them, it was supposed to be a really official place for interfacing to the\n> server. If we want to hide things maybe we should have an \"obsolete\"\n> subdirectory or some such.\n\n\"obsolete\" is certainly not the right term either. 
Maybe we should have\nan \"interfaces\" directory, parallel to \"server\", for internal-ish\nincludes of our interface libraries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Aug 2001 15:31:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assessment on namespace clean include file names " }, { "msg_contents": "Tom Lane writes:\n\n> I see no reason why we can't drop the other shoe and stop exporting\n> libpq-int.h (or more accurately, move it out of the public namespace,\n> same as you propose for backend headers).\n\n> \"server\" may not be a great choice if we want to stick libpq-int.h into\n> it too...\n\nThe directory wasn't meant as a place for hiding things to avoid using\nthem, it was supposed to be a really official place for interfacing to the\nserver. If we want to hide things maybe we should have an \"obsolete\"\nsubdirectory or some such.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 24 Aug 2001 21:33:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Assessment on namespace clean include file names " } ]
[ { "msg_contents": "Great progress today on my Reverse Engineering efforts. However; I have\nsome comments.\n\n1. How can I switch databases (where I would normally use USE)?\n\n2. How do I determine the AccessMethod specified when an index was\ncreated?\n\n3. It would be cool if the catalog objects had comments on them in\npg_description. Very few do.\n\nPeter\n\n\n\n", "msg_date": "Wed, 22 Aug 2001 22:27:23 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "Reverse Engineering" }, { "msg_contents": "Peter Harvey wrote:\n> \n> Great progress today on my Reverse Engineering efforts. However; I have\n> some comments.\n> \n> 1. How can I switch databases (where I would normally use USE)?\n\nYou just open another connection .\n\nIf you mean psql jou do \n\\c otherdatabasename\n\n> 2. How do I determine the AccessMethod specified when an index was\n> created?\n\nyou can parse it from pg_indexes.indexdef \n\nA great source for reverse engineering is source of pg_dump as it has \nto do all the \"reverse engineering\" in order to dump everything.\n\n> 3. It would be cool if the catalog objects had comments on them in\n> pg_description. Very few do.\n\nYes it would :)\n\n-------------\nHannu\n", "msg_date": "Thu, 23 Aug 2001 08:59:14 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Reverse Engineering" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Peter Harvey wrote:\n>> 2. How do I determine the AccessMethod specified when an index was\n>> created?\n\n> you can parse it from pg_indexes.indexdef \n\n... which relies on pg_get_indexdef(index OID).\n\nOr, look at pg_class.relam, which is zero for regular tables and a pg_am\nOID for indexes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 09:17:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reverse Engineering " } ]
[ { "msg_contents": "Hiroshi wrote:\n> > > > > There could be DELETE operations for the tuple\n> > > > > from other backends also and the TID may disappear.\n> > > > > Because FULL VACUUM couldn't run while the cursor\n> > > > > is open, it could neither move nor remove the tuple\n> > > > > but I'm not sure if the new VACUUM could remove\n> > > > > the deleted tuple and other backends could re-use\n> > > > > the space under such a situation.\n> > > >\n> > > > If you also save the tuple transaction info (xmin ?) during the\n> > > > select in addition to xtid, you could see whether the tupleslot\nwas\n> > > > reused ?\n> > >\n> > > I think TID itself is available for the purpose as long as\n> > > PostgreSQL uses no overwrite storage manager. If the tuple\n> > > for a saved TID isn't found, the tuple may be update/deleted.\n> > \n> > > If the tuple is found but the OID is different from the saved\n> > > one, the space may be re-used.\n\nspace *was* reused (not \"may be\") \n\n> > \n> > But I meant in lack of an OID (per not mandatory oid), that xmin\n> > might be a valid replacement for detecting, no ?\n> \n> Does *current (ctid, xmin) == saved (ctid, xmin)* mean that\n> they are same ?\n\nYes? but better ask Vadim ? Wraparound issue would be solved by\nFrozenXID\nand frequent vacuum.\n\n> In addtion, xmin wouldn't be so reliable\n> in the near future because it would be updated to FrozenXID\n> (=2) by vacuum.\n\nI thought concurrent vacuum with an open cursor is not at all possible. \nIf it were, it would not be allowed to change ctid (location of row) \nand could be made to not change xmin. \n\n> If we switch to an overwriting smgr we have\n> no item to detect the change of tuples. 
It may be one of the\n> critical reasons why we shouldn't switch to an overwriting\n> smgr:-).\n\nIf we still want MVCC, we would still need something like xmin\nfor overwrite smgr (to mark visibility).\n\nAndreas\n", "msg_date": "Thu, 23 Aug 2001 09:56:18 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "RE: CURRENT OF cursor without OIDs" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Hiroshi wrote:\n>> In addition, xmin wouldn't be so reliable\n>> in the near future because it would be updated to FrozenXID\n>> (=2) by vacuum.\n\n> I thought concurrent vacuum with an open cursor is not at all possible. \n> If it were, it would not be allowed to change ctid (location of row) \n> and could be made to not change xmin. \n\nNew-style vacuum can certainly run concurrently with an open cursor\n(wouldn't be of much use if it couldn't). However, new-style vacuum\nnever changes ctid, period. It could change the xmin of a tuple though,\nunder my not-yet-implemented proposal for freezing tuples.\n\nAFAICS, if you are holding an open SQL cursor, it is sufficient to check\nthat ctid hasn't changed to know that you have the same, un-updated\ntuple. Under MVCC rules, VACUUM will be unable to delete any tuple that\nis visible to your open transaction, and so new-style VACUUM cannot\nrecycle the ctid. Old-style VACUUM might move the tuple and make the\nctid available for reuse, but your open cursor will prevent old-style\nVACUUM from running on that table. 
So, there's no need to look at xmin.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 11:01:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CURRENT OF cursor without OIDs " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Hiroshi wrote:\n> >> In addtion, xmin wouldn't be so reliable\n> >> in the near future because it would be updated to FrozenXID\n> >> (=2) by vacuum.\n> \n> > I thought concurrent vacuum with an open cursor is not at all possible.\n> > If it were, it would not be allowed to change ctid (location of row)\n> > and could be made to not change xmin.\n> \n> New-style vacuum can certainly run concurrently with an open cursor\n> (wouldn't be of much use if it couldn't). However, new-style vacuum\n> never changes ctid, period. It could change the xmin of a tuple though,\n> under my not-yet-implemented proposal for freezing tuples.\n> \n> AFAICS, if you are holding an open SQL cursor, it is sufficient to check\n> that ctid hasn't changed to know that you have the same, un-updated\n> tuple. Under MVCC rules, VACUUM will be unable to delete any tuple that\n> is visible to your open transaction, and so new-style VACUUM cannot\n> recycle the ctid. Old-style VACUUM might move the tuple and make the\n> ctid available for reuse, but your open cursor will prevent old-style\n> VACUUM from running on that table. So, there's no need to look at xmin.\n\nAs Tom mentiond once in this thread, I've referred to non-SQL\ncursors which could go across transaction boundaries. TIDs aren't\nthat reliable across transactions.\nOIDs and xmin have already lost a part of its nature. Probably\nI have to guard myself beforehand and so would have to mention\nrepeatedly from now on that if we switch to an overwriting smgr,\nthere's no system item to detect the change of tuples. 
\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 24 Aug 2001 09:22:57 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: CURRENT OF cursor without OIDs" } ]
[ { "msg_contents": "\n> > I don't understand why you object the idea giving PostgreSQL the\n> > ability to turn off the locale support in configuration/compile\n> > time. In that way, there's no inconveniences for \"many users\".\n> \n> I don't mind at all the ability to turn it off. My point is that the\n> compile time is the wrong time to do it. Many users use binary\n> packages these days, many more users would like to use binary\npackages.\n> But the creators of these packages have to make configuration choices\nto\n> satisfy all of their users. So they turn on the locale support,\nbecause\n> that way if you don't want it you can turn if off. The other way\naround\n> doesn't work.\n\nYup, imho we all understood that and the only (to be validated) concern\nis\nperformance.\n\n> \n> The more appropriate way to handle this situation is to make it a\nruntime\n> option. I agree that the LC_ALL/LC_COLLATE/LANG lattice is confusing\nand\n> fragile. But there can be other ways, e.g.,\n\nYes, that was the (or at least my) main concern.\n \n> initdb --locale=en_US\n> initdb --locale-collate=C --locale-ctype=en_US\n> initdb # defaults to --locale=C\n> \n> or in postgresql.conf\n> \n> locale=C\n> locale_numeric=en_US\n> etc.\n> \n> or\n> \n> SHOW locale;\n> SHOW locale_numeric;\n> \n> That way you always know exactly what situation you're in. I think\nthis\n> was Hiroshi's main concern, the reliance on export LC_ALL, and I agree\n> that this is bad.\n> \n> You say locale in Japan works, except for LC_COLLATE. This concern\nwould\n> be satisfied by the above approach.\n> \n> Comments?\n\nI think that's it :-)\n\nAndreas\n", "msg_date": "Thu, 23 Aug 2001 10:09:26 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "RE: Locale by default?" } ]
[ { "msg_contents": "As I was browsing TODO, I noticed a couple unassigned items that I may be \nable to help with (I haven't worked with the source before):\n\n*Add use of 'const' for variables in source tree\n*Convert remaining fprintf(stderr,...)/perror() to elog()\n\nNeither seemed to be active at all.\n\nI also noticed that this item has been there for a while:\n*Encrpyt passwords in pg_shadow table using MD5 (Bruce, Vince)\n\nAre the mentioned people actually working on the last item?\n\nAll of the items seem to be simple and seperated enough that I might be able \nto learn the source as I go. I can see how the last one might be more \ndifficult, but I don't know. As goes without saying, I really don't know much \nabout postgres, so any kind of schedule is completely unknown. For a cursory \nglance, I used the postgresql-7.1beta6 source, but I will grab the latest to \nactually work on.\n\nAny input about the difficulties or affected areas of code would be \nappreciated. I suppose if the items made it to TODO, they will take more than \na couple minutes to fix.\n\nRegards,\nJeff Davis\n", "msg_date": "Thu, 23 Aug 2001 01:56:36 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@dynworks.com>", "msg_from_op": true, "msg_subject": "A couple items on TODO" }, { "msg_contents": "> I also noticed that this item has been there for a while:\n> *Encrpyt passwords in pg_shadow table using MD5 (Bruce, Vince)\n\nWhile you are there do you think it's possible to make an mcrypt function?\n:)\n\n", "msg_date": "Thu, 23 Aug 2001 19:35:42 +1000 (EST)", "msg_from": "speedboy <speedboy@nomicrosoft.org>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO" }, { "msg_contents": "> > I also noticed that this item has been there for a while:\n> > *Encrpyt passwords in pg_shadow table using MD5 (Bruce, Vince)\n> \n> While you are there do you think it's possible to make an mcrypt function?\n> :)\n\nSee contrib/pgcrypto.\n\n-- \n Bruce Momjian | 
http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 10:42:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO" }, { "msg_contents": "> As I was browsing TODO, I noticed a couple unassigned items that I may be \n> able to help with (I haven't worked with the source before):\n> \n> *Add use of 'const' for variables in source tree\n\nI would discuss this item with the hackers list and see exactly what\npeople want done with it.\n\n> *Convert remaining fprintf(stderr,...)/perror() to elog()\n\nThe issue here is that some calls can't use elog() because the context\nis not properly set up yet so we need to identify the non-elog error\ncalls and figure out if they should be elog().\n\n> \n> Neither seemed to be active at all.\n> \n> I also noticed that this item has been there for a while:\n> *Encrpyt passwords in pg_shadow table using MD5 (Bruce, Vince)\n\nThis is done. I forgot to mark it. I just marked it now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 10:44:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO" }, { "msg_contents": "Jeff Davis writes:\n\n> *Convert remaining fprintf(stderr,...)/perror() to elog()\n\nThis isn't quite as easy as a mechanical conversion, mind you, because\nelog of course has rather complex side effects besides printing out a\nmessage. 
What we'd need is some sort of option to print a message of a\ngiven category without taking these side effects.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 23 Aug 2001 17:11:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Jeff Davis writes:\n>> *Convert remaining fprintf(stderr,...)/perror() to elog()\n\n> This isn't quite as easy as a mechanical conversion, mind you, because\n> elog of course has rather complex side effects besides printing out a\n> message.\n\nAFAIR, elog at NOTICE or DEBUG level isn't really supposed to have any\nside-effects. The bigger issue is that you have to be careful about\nusing it in certain places, mainly during startup or for reporting\ncommunication errors. (send failure -> elog -> tries to send message to\nclient -> send failure -> elog -> trouble)\n\nAlso, I believe most of the printf's in the backend are in debugging\nsupport code that's not even compiled by default. The return on\ninvestment from converting those routines to use elog is really nil.\nThere may be a few remaining printf calls that should be converted to\nelog, but I don't think this is a big issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 12:15:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO " }, { "msg_contents": "> AFAIR, elog at NOTICE or DEBUG level isn't really supposed to have any\n> side-effects. The bigger issue is that you have to be careful about\n> using it in certain places, mainly during startup or for reporting\n> communication errors. 
(send failure -> elog -> tries to send message to\n> client -> send failure -> elog -> trouble)\n> \n> Also, I believe most of the printf's in the backend are in debugging\n> support code that's not even compiled by default. The return on\n> investment from converting those routines to use elog is really nil.\n> There may be a few remaining printf calls that should be converted to\n> elog, but I don't think this is a big issue.\n\nClearly not a big issue, but something someone can poke around at to get\nstarted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 12:46:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO" }, { "msg_contents": "Tom Lane writes:\n\n> AFAIR, elog at NOTICE or DEBUG level isn't really supposed to have any\n> side-effects. The bigger issue is that you have to be careful about\n> using it in certain places, mainly during startup or for reporting\n> communication errors. (send failure -> elog -> tries to send message to\n> client -> send failure -> elog -> trouble)\n\nIt's especially postmaster.c and the related subroutines elsewhere that\nI'm not happy about. The postmaster would really need a way to report an\nerror to the log and continue in its merry ways. This could probably be\nencapsulated in elog() by setting a flag variable \"I'm the postmaster\" --\nmight even exist already. 
Note: In my experience, the previous suggestion\nto return to the postmaster main loop on error would not really be useful.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 23 Aug 2001 21:00:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO " }, { "msg_contents": "> > As I was browsing TODO, I noticed a couple unassigned items\n> that I may be\n> > able to help with (I haven't worked with the source before):\n> >\n> > *Add use of 'const' for variables in source tree\n>\n> I would discuss this item with the hackers list and see exactly what\n> people want done with it.\n\nI have noticed while working on command.c and heap.c that half the functions\npass 'const char *' and the other half pass just 'char *'. This is a pain\nwhen you have a little helper function like 'is_relation(char *)' and you\nwant to pass a 'const char *' to it and vice versa. ie. Compiler warnings -\nit's sort of annoying.\n\nChris\n\n", "msg_date": "Fri, 24 Aug 2001 09:52:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: A couple items on TODO" }, { "msg_contents": "> > > As I was browsing TODO, I noticed a couple unassigned items\n> > that I may be\n> > > able to help with (I haven't worked with the source before):\n> > >\n> > > *Add use of 'const' for variables in source tree\n> >\n> > I would discuss this item with the hackers list and see exactly what\n> > people want done with it.\n> \n> I have noticed while working on command.c and heap.c that half the functions\n> pass 'const char *' and the other half pass just 'char *'. This is a pain\n> when you have a little helper function like 'is_relation(char *)' and you\n> want to pass a 'const char *' to it and vice versa. ie. 
Compiler warnings -\n> it's sort of annoying.\n> \n\nYes, it can be very annoying.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 22:20:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I have noticed while working on command.c and heap.c that half the functions\n> pass 'const char *' and the other half pass just 'char *'. This is a pain\n\nYeah, people have started to use 'const' in new code, but the older\nstuff doesn't use it, which means that the net effect is probably\nmore annoyance than help. I'm afraid that if we attack this in an\nincremental way, we'll end up with code that may have a lot of const\nmarkers in the declarations, but the actual code is riddled with\nexplicit casts to remove const because at one time or another that\nwas necessary in a particular place.\n\nCan anyone think of a way to get from here to there without either\na lot of leftover cruft, or a \"big bang\" massive changeover?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 23:12:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO " }, { "msg_contents": "> > > *Add use of 'const' for variables in source tree\n> >\n> > I would discuss this item with the hackers list and see exactly what\n> > people want done with it.\n>\n> I have noticed while working on command.c and heap.c that half the\n> functions pass 'const char *' and the other half pass just 'char *'. This\n> is a pain when you have a little helper function like 'is_relation(char *)'\n> and you want to pass a 'const char *' to it and vice versa. ie. 
Compiler\n> warnings - it's sort of annoying.\n>\n\nThat's good information, now I have a better idea what I am looking for. I am \nusing Source Navigator (good recommendation I got reading this list). I am \nbasically just trying to find either variables that can be declared const, or \ninconsistencies (as Chris mentions).\n\nIf anyone else has a clearer idea of what the intention of that todo item is, \nlet me know.\n\nJeff\n", "msg_date": "Thu, 23 Aug 2001 21:15:15 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@dynworks.com>", "msg_from_op": true, "msg_subject": "Re: A couple items on TODO" }, { "msg_contents": "> That's good information, now I have a better idea what I am\n> looking for. I am\n> using Source Navigator (good recommendation I got reading this\n> list). I am\n> basically just trying to find either variables that can be\n> declared const, or\n> inconsistencies (as Chris mentions).\n>\n> If anyone else has a clearer idea of what the intention of that\n> todo item is,\n> let me know.\n\nI assume that since most of the times char * is passed to a function, it is\nsupposed to be unmodifiable. Putting the 'const' there enforces this -\nthereby making PostgreSQL more robust against poxy programming.\n\nChris\n\n", "msg_date": "Fri, 24 Aug 2001 12:39:44 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: A couple items on TODO" }, { "msg_contents": "Tom Lane writes:\n\n> Yeah, people have started to use 'const' in new code, but the older\n> stuff doesn't use it, which means that the net effect is probably\n> more annoyance than help. 
I'm afraid that if we attack this in an\n> incremental way, we'll end up with code that may have a lot of const\n> markers in the declarations, but the actual code is riddled with\n> explicit casts to remove const because at one time or another that\n> was necessary in a particular place.\n>\n> Can anyone think of a way to get from here to there without either\n> a lot of leftover cruft, or a \"big bang\" massive changeover?\n\nWhat I usually do if I feel a parameter could be made const is to\npropagate the change as far as necessary to the underlying functions.\nFrom time to time this turns out to be impossible at some layer. BUT:\nThis is an indicator that you really don't know whether the value is const\nso you shouldn't declare it thus.\n\nIMHO, a better project than putting const qualifiers all over interfaces\nthat you are not familiar with would be to clean up all the -Wcast-qual\nwarnings.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 24 Aug 2001 16:24:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO " }, { "msg_contents": "> That's good information, now I have a better idea what I am looking for. I am \n> using Source Navigator (good recommendation I got reading this list). I am \n> basically just trying to find either variables that can be declared const, or \n> inconsistencies (as Chris mentions).\n> \n> If anyone else has a clearer idea of what the intention of that todo item is, \n> let me know.\n\nThis is definitely the intent of the TODO item.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 10:57:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > I have noticed while working on command.c and heap.c that half the functions\n> > pass 'const char *' and the other half pass just 'char *'. This is a pain\n> \n> Yeah, people have started to use 'const' in new code, but the older\n> stuff doesn't use it, which means that the net effect is probably\n> more annoyance than help. I'm afraid that if we attack this in an\n> incremental way, we'll end up with code that may have a lot of const\n> markers in the declarations, but the actual code is riddled with\n> explicit casts to remove const because at one time or another that\n> was necessary in a particular place.\n> \n> Can anyone think of a way to get from here to there without either\n> a lot of leftover cruft, or a \"big bang\" massive changeover?\n\nYou don't need a flag day, if that's what you mean. You just start\nadding const at the lowest levels and grow them upwards. I've done it\nbefore in large old projects. It's easy to do a few functions as a\nfairly mindless task. You get there eventually. The main trick is to\nnever add casts merely to avoid warnings (they will never get removed\nand are a potential source of future bugs), and to make sure that new\nfunctions use const wherever possible (so you don't move backward over\ntime).\n\nIt's worth it in the long run because it gives another way to catch\nstupid bugs at compile time.\n\nIan\n", "msg_date": "24 Aug 2001 15:11:48 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: A couple items on TODO" } ]
[ { "msg_contents": "I need to do some OLAP stuff, and I asked previously if there were a way\nto pass multiple parameters to an aggrigate function. i.e.:\n\nselect mycube(value1, value2, value3) from table group by value1;\n\nI looked through the code and it is non-trivial to do, one would have to\nalter the grammar to include a number of parameters, I guess something\nlike this:\n\ncreate aggregate (sfunc = myfunct, sfuncnargs=3, stype = int4, basetype1\n= int4, basetype2 = int4, ....);\n\nThen change the catalog, and the execution, arrg!\n\n(God I wish I could spend the time I want on PostgreSQL! )\n\nAnyway, short of that....\n\nIf I do this:\n\nselect mycube(value1) as d1, dimention(value2) as d2, dimention(value3)\nas d3 group by value1;\n\nCan I safely assume the following:\n\n(1) mycube() will be called first\n(2) Assuming dimention() has no final func, that final func of mycube()\nwill be called last.\n\n\n\n", "msg_date": "Thu, 23 Aug 2001 11:26:42 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "OLAP, Aggregates, and order of operations" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I need to do some OLAP stuff, and I asked previously if there were a way\n> to pass multiple parameters to an aggrigate function. i.e.:\n> I looked through the code and it is non-trivial to do,\n\nOffhand I don't know of any fundamental reason why it couldn't be done,\nbut you're right that it'd take a fair amount of work.\n\n> If I do this:\n> select mycube(value1) as d1, dimention(value2) as d2, dimention(value3)\n> as d3 group by value1;\n> Can I safely assume the following:\n> (1) mycube() will be called first\n> (2) Assuming dimention() has no final func, that final func of mycube()\n> will be called last.\n\nThat might be true in the present code, but it strikes me as an awfully\nrisky set of assumptions. 
Also, it sounds like what you have in mind is\nto have some hidden state that all the aggregate functions will access;\nhow then will you work if there are more than one set of these\naggregates being used in a query?\n\nIf the needed parameters are all the same datatype, maybe you could put\nthem into an array and pass the array as a single argument to the\naggregate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 12:29:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OLAP, Aggregates, and order of operations " }, { "msg_contents": "Tom Lane wrote:\n\n> mlw <markw@mohawksoft.com> writes:\n> > I need to do some OLAP stuff, and I asked previously if there were a way\n> > to pass multiple parameters to an aggrigate function. i.e.:\n> > I looked through the code and it is non-trivial to do,\n>\n> Offhand I don't know of any fundamental reason why it couldn't be done,\n> but you're right that it'd take a fair amount of work.\n\nI understand the implications of the work, but it would be VERY cool to be\nable to do this for statistical stuff.\n\n>\n> > If I do this:\n> > select mycube(value1) as d1, dimention(value2) as d2, dimention(value3)\n> > as d3 group by value1;\n> > Can I safely assume the following:\n> > (1) mycube() will be called first\n> > (2) Assuming dimention() has no final func, that final func of mycube()\n> > will be called last.\n>\n> That might be true in the present code, but it strikes me as an awfully\n> risky set of assumptions. Also, it sounds like what you have in mind is\n> to have some hidden state that all the aggregate functions will access;\n> how then will you work if there are more than one set of these\n> aggregates being used in a query?\n\nWhat I was thinking is that I could use the state to hold a pointer returned\nby palloc. 
I don't think I can handle multiple mycube() calls, but short of\nreworking aggregates, I don't see any other way.\n\n>\n> If the needed parameters are all the same datatype, maybe you could put\n> them into an array and pass the array as a single argument to the\n> aggregate.\n\nHow would you do this without having to make multiple SQL calls?\n\n", "msg_date": "Thu, 23 Aug 2001 15:11:04 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: OLAP, Aggregates, and order of operations" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n>> If the needed parameters are all the same datatype, maybe you could put\n>> them into an array and pass the array as a single argument to the\n>> aggregate.\n\n> How would you do this without having to make multiple SQL calls?\n\nI was thinking something like\n\n select my_aggregate(my_array_constructor(foo, bar, baz)) from ...\n\nwhere my_array_constructor is a quick hack C routine to build a\n3-element array from 3 input arguments (s/3/whatever you need/).\nSomeday we ought to have SQL syntax to build an array value from\na list of scalars, but in the meantime an auxiliary function is the\nonly way to do it.\n\nThe overhead of constructing and then interpreting the temporary\narray value is slightly annoying, but I don't think it'll be horribly\nexpensive. 
See the existing aggregate-related routines in numeric.c\nif you need some help with the C coding.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 15:19:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OLAP, Aggregates, and order of operations " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> >> If the needed parameters are all the same datatype, maybe you could put\n> >> them into an array and pass the array as a single argument to the\n> >> aggregate.\n> \n> > How would you do this without having to make multiple SQL calls?\n> \n> I was thinking something like\n> \n> select my_aggregate(my_array_constructor(foo, bar, baz)) from ...\n> \n> where my_array_constructor is a quick hack C routine to build a\n> 3-element array from 3 input arguments (s/3/whatever you need/).\n> Someday we ought to have SQL syntax to build an array value from\n> a list of scalars, but in the meantime an auxiliary function is the\n> only way to do it.\n\nInteresting. Kind of ugly, but interesting. \n\nSo, what would the order of operation be?\n\nI assume \"my_array_constructor()\" would be called first, and the return value\nthen be passed to \"my_aggregate()\" along with the state value being set to the\ninitial state, then subsequent calls to \"my_array_constructor()\", followed by\n\"my_aggregate()\" for each additional row in the group?\n\nI need to think about that.\n\n> \n> The overhead of constructing and then interpreting the temporary\n> array value is slightly annoying, but I don't think it'll be horribly\n> expensive. See the existing aggregate-related routines in numeric.c\n> if you need some help with the C coding.\n\n<postgres use story>\nI can do the C stuff, I have tons of C and C++ functions written for Postgres\nalready, when I get the time to make them clean enough to contribute to the\nPostgres project, I will. 
(Text manipulation, search engine, date manipulation,\nxmcd, analysis functions, decode, and others) If you are interested in seeing a\nhalf Oracle, half Postgres site, take a look at http://www.dotclick.com. (You\nwill need a Windows box)\n\nIt is pretty evenly split between postgres and oracle. All \"member\" related\ndata is on Oracle. All music related data is in Postgres. It has saved us\nprobably $50K to $100 in Oracle database licenses and hardware to do it this\nway.\n\nWe have three postgres boxes. One master, and two slaves. The master gets\nupdated with new information from various sites. The program which does the\nupdating, on the master, creates a SQL log script of everything it does. The\nscript is then run against the slaves to maintain consistency. A web farm is\nsplit evenly between the two slaves.\n\nIt is pretty cool. \n\n(As a side note, we are using Oracle for session management across a bunch of\nservers. Sadly we can not use postgres for this (we would love too), sessions\nare mostly updates and deletes, maybe when 7.2 comes out, but I'm still not\nsure about that.)\n\n</postgres use story>\n", "msg_date": "Thu, 23 Aug 2001 23:58:46 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: OLAP, Aggregates, and order of operations" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> So, what would the order of operation be?\n\n> I assume \"my_array_constructor()\" would be called first, and the return value\n> then be passed to \"my_aggregate()\" along with the state value being set to the\n> initial state, then subsequent calls to \"my_array_constructor()\", followed by\n> \"my_aggregate()\" for each additional row in the group?\n\nCheck.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Aug 2001 00:22:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: OLAP, Aggregates, and order of operations " } ]
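The array-constructor workaround Tom Lane confirms in the thread above can be sketched in miniature outside the database. The following Python model is purely illustrative: `my_array_constructor` and `my_aggregate_sfunc` are placeholder names taken from the thread, not a real PostgreSQL API. It mirrors the agreed calling order (constructor first, then the state-transition call, once per row in the group) and shows how a one-argument aggregate can still see several columns per row once they are packed into a single array-valued argument.

```python
# Illustrative model of the "pack several columns into one array" trick.
# In PostgreSQL the constructor would be a small C function building an
# array Datum; here a tuple stands in for the array value.

def my_array_constructor(*cols):
    # Stands in for the quick-hack C routine that builds an array
    # from its scalar arguments.
    return tuple(cols)

def my_aggregate_sfunc(state, packed):
    # State-transition function: called once per row with the packed
    # argument.  This toy "aggregate" sums each dimension independently.
    if state is None:
        state = [0] * len(packed)
    for i, v in enumerate(packed):
        state[i] += v
    return state

# Simulate one GROUP's rows: (foo, bar, baz) per row.
rows = [(1, 10, 100), (2, 20, 200), (3, 30, 300)]
state = None
for foo, bar, baz in rows:
    state = my_aggregate_sfunc(state, my_array_constructor(foo, bar, baz))
# state is now [6, 60, 600]
```

In actual PostgreSQL C code the transition function would be registered with CREATE AGGREGATE and the array unpacked with the array-access routines; the model above only mirrors the order of operations Tom confirms, not the server-side plumbing.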
[ { "msg_contents": "I'm trying my best to convert from MySQL to PgSQL but I cant get a good\nanswer about \ncertian questions. It was an easy task in mysql but all this talk about\n, text , toast and bytea is just confusing me.\nI cant get a clear picture of any of this,from the book from Bruce, the\ne-mail archives. Ive looked all i can,\nwith every keyword i can think of from years past. Here is my situation.\n\n\nWHAT I WAS DOING IN MYSQL\nVia the web my clients are uploading basic text/data files, sometimes >\nthan 30MB. In the past ,via CGI I have been parsing the file\ninto one STL string, using mysql_escape_string to escape it and then using\nan INSERT to place the \n ,\\'+\"stlstring+\"\\' , into a BLOB column. \nI dont want to use a temp. file anywhere. The data will always be passed via\nthe database and buffers for certian reasons.\n\n\nTHIS IS WHAT I CANT SEEM TO FIGURE OUT IN POSTGRESQL\n1. I cant get a clear answer on what kind of data type to use for my large\ntext string? TEXT, ???, ??? or something about TOAST\nI have seen in the e-mail archive but cant find any documentaion?\n\n2. I've written my own escape method ,(\"cant find one for Pgsql\") , BUT i\ndon't know what \nto escape and not to escape. So it keeps failing. I cand find any docs. on\nwhat to escape either?\n\n\nSUMMARY\nWhat datatype do I use? How Im a supposed to escape it and get it though the\nparser correctly so i can\nretrieve it correctly?\n\n\nThnks for your time.\n\nPS: Using RedHat.\n\n\n\nJason H. Ory\nMedprint+\nSoftware Developer\n(205) 989-4617\njason.ory@ndchealth.com\n\n", "msg_date": "Thu, 23 Aug 2001 13:09:14 -0400", "msg_from": "jason.ory@ndchealth.com", "msg_from_op": true, "msg_subject": "Toast,bytea, Text -blob all confusing" }, { "msg_contents": "On Thu, Aug 23, 2001 at 01:09:14PM -0400, jason.ory@ndchealth.com wrote:\n> I'm trying my best to convert from MySQL to PgSQL but I cant get a good\n> answer about \n> certian questions. 
It was an easy task in mysql but all this talk about\n> , text , toast and bytea is just confusing me.\n> I cant get a clear picture of any of this,from the book from Bruce, the\n> e-mail archives. Ive looked all i can,\n> with every keyword i can think of from years past. Here is my situation.\n> \n> \n> WHAT I WAS DOING IN MYSQL\n> Via the web my clients are uploading basic text/data files, sometimes >\n> than 30MB. In the past ,via CGI I have been parsing the file\n> into one STL string, using mysql_escape_string to escape it and then using\n> an INSERT to place the \n> ,\\'+\"stlstring+\"\\' , into a BLOB column. \n> I dont want to use a temp. file anywhere. The data will always be passed via\n> the database and buffers for certian reasons.\n> \n> \n> THIS IS WHAT I CANT SEEM TO FIGURE OUT IN POSTGRESQL\n> 1. I cant get a clear answer on what kind of data type to use for my large\n> text string? TEXT, ???, ??? or something about TOAST\n> I have seen in the e-mail archive but cant find any documentaion?\n\nTOAST is just a name for the mechanism/feature that is used in Postgres\n>= 7.1 to overcome the (32 KB if I recall correctly) limit on the row\nsize in previous versions. It's completely transparent to the\nprogrammer, i.e. if you use TEXT, for instance, you can have a row up to\n1 GB in size (which is probably not practical) theoretically. The\nadvantage over using BLOBs is that you can search this field (with 30 MB\nfieldis, if you have a few of them, this is probably not practical\neither so you'd probably want to consider some full text indexing\nmechanism). I'd use TEXT for this reason.\n\n> \n> 2. I've written my own escape method ,(\"cant find one for Pgsql\") , BUT i\n> don't know what \n> to escape and not to escape. So it keeps failing. I cand find any docs. on\n> what to escape either?\n\nHm. 
I don't understand why the database (using MySQL or Postgres) would\nmake any difference there.\n\nHope it helps,\n\nFrank\n", "msg_date": "Mon, 27 Aug 2001 20:08:25 +0200", "msg_from": "Frank Joerdens <frank@joerdens.de>", "msg_from_op": false, "msg_subject": "Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "On Thu, 23 Aug 2001 jason.ory@ndchealth.com wrote:\n\n> THIS IS WHAT I CANT SEEM TO FIGURE OUT IN POSTGRESQL\n> 1. I cant get a clear answer on what kind of data type to use for my large\n> text string? TEXT, ???, ??? or something about TOAST\n> I have seen in the e-mail archive but cant find any documentaion?\nI would suggest bytea or blob. Blobs are well-documented in normal\ndocumentation and in documentation of your favorite interface, so I'll\njust talk about bytea.\n\n> 2. I've written my own escape method ,(\"cant find one for Pgsql\") , BUT i\n> don't know what \n> to escape and not to escape. So it keeps failing. I cand find any docs. on\n> what to escape either?\nFor bytea, follow this rule: to escape a null character, use this:\n'\\\\0'. To escape a backslash, use this: '\\\\\\\\'.\n\nSame idea to unescape data.\n\n\n", "msg_date": "Mon, 27 Aug 2001 15:05:11 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "At 03:05 PM 27-08-2001 -0400, Alex Pilosov wrote:\n>On Thu, 23 Aug 2001 jason.ory@ndchealth.com wrote:\n>\n>> THIS IS WHAT I CANT SEEM TO FIGURE OUT IN POSTGRESQL\n>> 1. I cant get a clear answer on what kind of data type to use for my large\n>> text string? TEXT, ???, ??? or something about TOAST\n>> I have seen in the e-mail archive but cant find any documentaion?\n>I would suggest bytea or blob. Blobs are well-documented in normal\n>documentation and in documentation of your favorite interface, so I'll\n>just talk about bytea.\n>\n>> 2. 
I've written my own escape method ,(\"cant find one for Pgsql\") , BUT i\n>> don't know what \n>> to escape and not to escape. So it keeps failing. I cand find any docs. on\n>> what to escape either?\n>For bytea, follow this rule: to escape a null character, use this:\n>'\\\\0'. To escape a backslash, use this: '\\\\\\\\'.\n>\n>Same idea to unescape data.\n\nAre there other characters that need to be escaped? I suspect there are\nmore characters that need to be escaped - ctrl chars? single quotes?. Why\nfour backslashes for one? Is there a definitive documentation anywhere for\nwhat bytea is _supposed_ (not what it might actually be) to be and how it\nis to be handled?\n\nAlso why wouldn't escaping stuff like this work with TEXT then? If a null\nis going to be backslash backslash zero, and come out the same way, it sure\nlooks like TEXT to me :). OK so there's this thing about storage. So maybe\nI could save a byte by just converting nulls to backslash zero and real\nbackslashes to backslash backslash. Tada. \n\nOK it's probably not the same, but having to put four backslashes when two\nshould be enough to quote one makes me rather puzzled and uneasy. \n\nCheerio,\nLink.\n\n", "msg_date": "Tue, 28 Aug 2001 14:28:24 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "On Tue, 28 Aug 2001, Lincoln Yeoh wrote:\n\n> >For bytea, follow this rule: to escape a null character, use this:\n> >'\\\\0'. To escape a backslash, use this: '\\\\\\\\'.\n> >\n> >Same idea to unescape data.\n> \n> Are there other characters that need to be escaped? I suspect there are\n> more characters that need to be escaped - ctrl chars? single quotes?. Why\n> four backslashes for one? Is there a definitive documentation anywhere for\n> what bytea is _supposed_ (not what it might actually be) to be and how it\n> is to be handled?\n\nYes, sorry for being unclear on this one. 
Here's a more detailed\nexplanation: Bytea is just a stream of data. On input, it must follow C\nescaping conventions, on output, it will be escaped using C escaping\nconventions. \n\nHowever, there's a trap: before things get to bytea input handler, they\nare first processed by postgresql general parser. Hence, the string \\\\0\ngiven from psql will become \\0 when it gets to bytea input handler. String\n\\\\\\\\ will become \\\\. All non-printable characters must be escaped like\nthis: \\\\(octal of character), for ex, chr(255) must be presented as \\\\377.\n(If you want, you can also use this as an alternative and more generic way\nto escape a backslash, \\\\134). Single quote must be escaped either as \\\\47 \nor as \\'. Note the single backslash. Why only one? Because bytea parser\ndoesn't care about single quotes and you only need to escape it for the\npostgresql parser...\n\nSo, just keep in mind the double-parsing of input and you'll be safe.\n\n> Also why wouldn't escaping stuff like this work with TEXT then? If a null\n> is going to be backslash backslash zero, and come out the same way, it sure\n> looks like TEXT to me :). OK so there's this thing about storage. So maybe\nBecause text is null-terminated, can't have a null inside.\n\n> I could save a byte by just converting nulls to backslash zero and real\n> backslashes to backslash backslash. Tada. \nIf you do that, you'll break ordering/comparison. Bytea in memory is\nstored EXACTLY the way input string was, without any escaping, hence, all\ncomparisons will be correct ( '\\\\0'::bytea is less than '\\\\1'::bytea). \n\nWith your representation, comparisons will fail, because in memory, data\nis escaped using some escaping convention that you made up.\n\n> OK it's probably not the same, but having to put four backslashes when two\n> should be enough to quote one makes me rather puzzled and uneasy. 
\nDouble parsing, hence double escaping.\n\n--\nAlex Pilosov | http://www.acedsl.com/home.html\nCTO - Acecape, Inc. | AceDSL:The best ADSL in the world\n325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)\nNew York, NY 10018 |\n\n", "msg_date": "Tue, 28 Aug 2001 09:08:01 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@acecape.com>", "msg_from_op": false, "msg_subject": "Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "> > >For bytea, follow this rule: to escape a null character, use this:\n> > >'\\\\0'. To escape a backslash, use this: '\\\\\\\\'.\n> > >\n> > >Same idea to unescape data.\n> >\n> > Are there other characters that need to be escaped? I suspect there are\n> > more characters that need to be escaped - ctrl chars? single quotes?.\nWhy\n> > four backslashes for one? Is there a definitive documentation anywhere\nfor\n> > what bytea is _supposed_ (not what it might actually be) to be and how\nit\n> > is to be handled?\n>\n> Yes, sorry for being unclear on this one. Here's a more detailed\n> explanation: Bytea is just a stream of data. On input, it must follow C\n> escaping conventions, on output, it will be escaped using C escaping\n> conventions.\n>\n> However, there's a trap: before things get to bytea input handler, they\n> are first processed by postgresql general parser. Hence, the string \\\\0\n> given from psql will become \\0 when it gets to bytea input handler. String\n> \\\\\\\\ will become \\\\. All non-printable characters must be escaped like\n> this: \\\\(octal of character), for ex, chr(255) must be presented as \\\\377.\n> (If you want, you can also use this as an alternative and more generic way\n> to escape a backslash, \\\\134). Single quote must be escaped either as \\\\47\n> or as \\'. Note the single backslash. Why only one? 
Because bytea parser\n> doesn't care about single quotes and you only need to escape it for the\n> postgresql parser...\n>\n> So, just keep in mind the double-parsing of input and you'll be safe.\n>\n> > Also why wouldn't escaping stuff like this work with TEXT then? If a\nnull\n> > is going to be backslash backslash zero, and come out the same way, it\nsure\n> > looks like TEXT to me :). OK so there's this thing about storage. So\nmaybe\n> Because text is null-terminated, can't have a null inside.\n>\n> > I could save a byte by just converting nulls to backslash zero and real\n> > backslashes to backslash backslash. Tada.\n> If you do that, you'll break ordering/comparison. Bytea in memory is\n> stored EXACTLY the way input string was, without any escaping, hence, all\n> comparisons will be correct ( '\\\\0'::bytea is less than '\\\\1'::bytea).\n>\n> With your representation, comparisons will fail, because in memory, data\n> is escaped using some escaping convention that you made up.\n>\n> > OK it's probably not the same, but having to put four backslashes when\ntwo\n> > should be enough to quote one makes me rather puzzled and uneasy.\n> Double parsing, hence double escaping.\n\nGreat explanation Alex --thanks! I'll add a bit:\n\nI've done about 400,000 inserts and subsequent queries to verify that, from\nPHP at least, only four charaters need to be escaped. The inserts were 20\nbyte strings gotten by concatenating some known text with a counter in a\nloop, and then producing a SHA-1 hash. This produces very uniformly\ndistributed binary data. Then I ran the same loop, except I queried for the\ninserted strings. I'm quite confident from this experiment that binary can\nreliably be inserted via standard SQL when these four characters are\nescaped. Here's the run down:\n\n\\\\000 First slash is consumed by the general parser, leaving \\000 for the\nbyteain function. 
If you only use one '\\', the general parser converts the\ncharacter into a true '\\0' byte, and the byteain function sees this byte as\nthe string terminator and stops. This causes the input string to be\ntruncated (which seems to confuse many people).\n\n\\\\012 In my early tests 0x0a (LF) was getting converted to 0x20 (space).\nI think this was happening during PHP's parsing, but I'm still not sure.\nI'll dig into this some more later.\n\n\\\\047 As Alex mentioned, the byteain function doesn't treat this as a\nspecial character, but of course the general parser does as this is a single\nquote. It also works fine to escape it as \\', I just prefer to use all\noctals.\n\n\\\\134 Both the general parser and the byteain function see this (a single\n\\) as the special escape character. Therefore the general parser turns \\\\\\\\\ninto \\\\, and the byteain function turns \\\\ into \\ for actual storage. Again,\nI prefer to use the octal representation instead.\n\nI hope this helps reduce the concerns and confusion over bytea. If anyone\ncan help explain why my linefeeds were getting converted to spaces, all the\nmysteries would be explained ;-)\n\n-- Joe\n\n\n", "msg_date": "Wed, 29 Aug 2001 10:36:25 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "Thanks you your description, I have added a bytea section to the docs.\n\nPatch attached.\n\n\n> > > >For bytea, follow this rule: to escape a null character, use this:\n> > > >'\\\\0'. To escape a backslash, use this: '\\\\\\\\'.\n> > > >\n> > > >Same idea to unescape data.\n> > >\n> > > Are there other characters that need to be escaped? I suspect there are\n> > > more characters that need to be escaped - ctrl chars? single quotes?.\n> Why\n> > > four backslashes for one? 
Is there a definitive documentation anywhere\n> for\n> > > what bytea is _supposed_ (not what it might actually be) to be and how\n> it\n> > > is to be handled?\n> >\n> > Yes, sorry for being unclear on this one. Here's a more detailed\n> > explanation: Bytea is just a stream of data. On input, it must follow C\n> > escaping conventions, on output, it will be escaped using C escaping\n> > conventions.\n> >\n> > However, there's a trap: before things get to bytea input handler, they\n> > are first processed by postgresql general parser. Hence, the string \\\\0\n> > given from psql will become \\0 when it gets to bytea input handler. String\n> > \\\\\\\\ will become \\\\. All non-printable characters must be escaped like\n> > this: \\\\(octal of character), for ex, chr(255) must be presented as \\\\377.\n> > (If you want, you can also use this as an alternative and more generic way\n> > to escape a backslash, \\\\134). Single quote must be escaped either as \\\\47\n> > or as \\'. Note the single backslash. Why only one? Because bytea parser\n> > doesn't care about single quotes and you only need to escape it for the\n> > postgresql parser...\n> >\n> > So, just keep in mind the double-parsing of input and you'll be safe.\n> >\n> > > Also why wouldn't escaping stuff like this work with TEXT then? If a\n> null\n> > > is going to be backslash backslash zero, and come out the same way, it\n> sure\n> > > looks like TEXT to me :). OK so there's this thing about storage. So\n> maybe\n> > Because text is null-terminated, can't have a null inside.\n> >\n> > > I could save a byte by just converting nulls to backslash zero and real\n> > > backslashes to backslash backslash. Tada.\n> > If you do that, you'll break ordering/comparison. 
Bytea in memory is\n> > stored EXACTLY the way input string was, without any escaping, hence, all\n> > comparisons will be correct ( '\\\\0'::bytea is less than '\\\\1'::bytea).\n> >\n> > With your representation, comparisons will fail, because in memory, data\n> > is escaped using some escaping convention that you made up.\n> >\n> > > OK it's probably not the same, but having to put four backslashes when\n> two\n> > > should be enough to quote one makes me rather puzzled and uneasy.\n> > Double parsing, hence double escaping.\n> \n> Great explanation Alex --thanks! I'll add a bit:\n> \n> I've done about 400,000 inserts and subsequent queries to verify that, from\n> PHP at least, only four charaters need to be escaped. The inserts were 20\n> byte strings gotten by concatenating some known text with a counter in a\n> loop, and then producing a SHA-1 hash. This produces very uniformly\n> distributed binary data. Then I ran the same loop, except I queried for the\n> inserted strings. I'm quite confident from this experiment that binary can\n> reliably be inserted via standard SQL when these four characters are\n> escaped. Here's the run down:\n> \n> \\\\000 First slash is consumed by the general parser, leaving \\000 for the\n> byteain function. If you only use one '\\', the general parser converts the\n> character into a true '\\0' byte, and the byteain function sees this byte as\n> the string terminator and stops. This causes the input string to be\n> truncated (which seems to confuse many people).\n> \n> \\\\012 In my early tests 0x0a (LF) was getting converted to 0x20 (space).\n> I think this was happening during PHP's parsing, but I'm still not sure.\n> I'll dig into this some more later.\n> \n> \\\\047 As Alex mentioned, the byteain function doesn't treat this as a\n> special character, but of course the general parser does as this is a single\n> quote. 
It also works fine to escape it as \\', I just prefer to use all\n> octals.\n> \n> \\\\134 Both the general parser and the byteain function see this (a single\n> \\) as the special escape character. Therefore the general parser turns \\\\\\\\\n> into \\\\, and the byteain function turns \\\\ into \\ for actual storage. Again,\n> I prefer to use the octal representation instead.\n> \n> I hope this helps reduce the concerns and confusion over bytea. If anyone\n> can help explain why my linefeeds were getting converted to spaces, all the\n> mysteries would be explained ;-)\n> \n> -- Joe\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/datatype.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/datatype.sgml,v\nretrieving revision 1.60\ndiff -c -r1.60 datatype.sgml\n*** doc/src/sgml/datatype.sgml\t2001/08/31 01:55:25\t1.60\n--- doc/src/sgml/datatype.sgml\t2001/09/04 03:15:49\n***************\n*** 84,89 ****\n--- 84,95 ----\n </row>\n \n <row>\n+ <entry><type>bytea</type></entry>\n+ <entry></entry>\n+ <entry>binary data</entry>\n+ </row>\n+ \n+ <row>\n <entry><type>character(<replaceable>n</replaceable>)</type></entry>\n <entry><type>char(<replaceable>n</replaceable>)</type></entry>\n <entry>fixed-length character string</entry>\n***************\n*** 782,788 ****\n \t<entry>text</entry>\n \t<entry>Variable unlimited length</entry>\n </row>\n! </tbody>\n </tgroup>\n </table>\n \n--- 788,798 ----\n \t<entry>text</entry>\n \t<entry>Variable unlimited length</entry>\n </row>\n! <row>\n! \t<entry>bytea</entry>\n! 
\t<entry>binary data</entry>\n! </row>\n! </tbody>\n </tgroup>\n </table>\n \n***************\n*** 827,832 ****\n--- 837,855 ----\n does not require an explicit declared upper limit on the size of\n the string. Although the type <type>text</type> is not in the SQL\n standard, many other RDBMS packages have it as well.\n+ </para>\n+ \n+ <para>\n+ The <type>bytea</type> data type allows storage of binary data,\n+ specifically allowing storage of NULLs which are entered as\n+ <literal>'\\\\000'</>. The first backslash is interpreted by the\n+ single quotes, and the second is recognized by <type>bytea</> and\n+ preceeds a three digit octal value. For a similar reason, a\n+ backslash must be entered into a field as <literal>'\\\\\\\\'</> or\n+ <literal>'\\\\134'</>. You may also have to escape line feeds and\n+ carriage return if your interface automatically translates these. It\n+ can store values of any length. <type>Bytea</> is a non-standard\n+ data type.\n </para>\n \n <para>", "msg_date": "Mon, 3 Sep 2001 23:19:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "> >\n> > \\\\012 In my early tests 0x0a (LF) was getting converted to 0x20\n(space).\n> > I think this was happening during PHP's parsing, but I'm still not sure.\n> > I'll dig into this some more later.\n> >\n\n<redfaced>\n The script I was using in PHP *explicitly* converted all linefeeds to\nspaces. 
Mystery solved.\n</redfaced>\n\nI think Bruce's text still works though.\n\n-- Joe\n\n\n", "msg_date": "Mon, 3 Sep 2001 21:14:32 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "> > >\n> > > \\\\012 In my early tests 0x0a (LF) was getting converted to 0x20\n> (space).\n> > > I think this was happening during PHP's parsing, but I'm still not sure.\n> > > I'll dig into this some more later.\n> > >\n> \n> <redfaced>\n> The script I was using in PHP *explicitly* converted all linefeeds to\n> spaces. Mystery solved.\n> </redfaced>\n> \n> I think Bruce's text still works though.\n\nI can see other interfaces doing fancy things with newlines and carriage\nreturns so I added it to the docs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 00:16:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> However, there's a trap: before things get to bytea input handler, they\n> are first processed by postgresql general parser.\n\nThis description fails to make clear that the two levels of parsing only\napply for datums that are written as string literals in SQL commands.\nAn example where this doesn't apply is COPY input data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 00:52:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Toast,bytea, Text -blob all confusing " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > However, there's a trap: before things get to bytea input handler, they\n> > are first processed 
by postgresql general parser.\n> \n> This description fails to make clear that the two levels of parsing only\n> apply for datums that are written as string literals in SQL commands.\n> An example where this doesn't apply is COPY input data.\n\nAre you talking about my SGML changes? I clearly mention quote-handling\nand bytea handling, which pretty clearly do not apply in COPY.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 00:54:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Toast,bytea, Text -blob all confusing" } ]
[ { "msg_contents": "I'm trying my best to convert from MySQL to PgSQL but I can't get a good\nclear answer about \ncertain issues. Mainly TEXT, TOAST, BLOB, BYTEA etc.\nIt was an easy task in mysql but everything in the archives about , text ,\ntoast and bytea is just\nconfusing me with postgresql. I have Bruce's book and I've searched the\narchives years back with all the right keywords with no luck. Here is my\nsituation-->\n\n\nWHAT I WAS DOING IN MYSQL\nVia the web my clients are uploading basic text/data files, sometimes >\nthan 30MB. In the past ,via CGI I have been parsing the file\ninto one STL string, using mysql_escape_string to escape it and then using\nan INSERT to place the \n ,\\'+\"stlstring+\"\\' , into a BLOB column. \n\"don't want to use a temp. file or files in general anywhere. The data will\nalways be passed via the database and buffers for certain reasons.\" Thus no\nOID's\n\n\nTHIS IS WHAT I CAN'T SEEM TO FIGURE OUT IN POSTGRESQL\n1. I can't get a clear answer on what kind of data type to use for my large\ntext string? TEXT, ???, ??? or something about TOAST\nI have seen in the e-mail archive but can't find any documentation?\n\n2. I've written my own escape method ,(\"can't find one for Pgsql\") , BUT I\ndon't know what \nto escape and not to escape. So it keeps failing. I can't find any docs. on\nwhat to escape either?\n\n\nSUMMARY\nWhat is the best datatype to use, for large raw text and/or binary if I\nchoose? \nOnce I know this, how I'm supposed to escape my string and get it through\nthe parser correctly so I can retrieve it correctly?\n\nThanks for your time.\n\nPS: Using RedHat.\n\n\n\nJason H. Ory\nMedprint+\nSoftware Developer\njason.ory@ndchealth.com\n\n", "msg_date": "Thu, 23 Aug 2001 13:47:01 -0400", "msg_from": "jason.ory@ndchealth.com", "msg_from_op": true, "msg_subject": "Toast, Text, blob bytea Huh?" }, { "msg_contents": "On Thu, 23 Aug 2001 13:47:01 -0400, you wrote:\n>1. 
I can't get a clear answer on what kind of data type to use for my large\n>text string? TEXT, ???, ??? or something about TOAST\n>I have seen in the e-mail archive but can't find any documentation?\n\nTOAST is not a data type, but a project that extended the\ncapacity of PostgreSQL from version 7.1 onwards to support\nfields up to 1 GB in length and rows of (practically) unlimited\nlength. This makes TEXT and BYTEA good data types for storing\nlarge character and binary data. In the hands of a marketing\ndepartment toasted data would be a great feature that justified\nthe release of version 8 all by itself :-)\n\nHowever, unlike SQL3 Lobs, values of type TEXT and BYTEA are\nalways transferred as one unit to the client. As a Java\nprogrammer I like that feature. A string is a string and a byte\narray is a byte array, no matter what its length may be.\n\nLarge Objects in PostgreSQL\n(http://www.ca.postgresql.org/users-lounge/docs/7.1/programmer/largeobjects.html)\nsupport file oriented access to large data. This may be a more\nconvenient programming model, depending on the client interface\nthat you use and the requirements of your application.\n\nHowever, Large Objects in PostgreSQL are objects that exist\nindependently from the rows in which you hold a reference to\nthem. If you delete or update a row, your application may need\nto delete certain Large Objects as well. In this respect,\nPostgreSQL Large Objects differ from SQL3 Lobs.\n\n>2. I've written my own escape method ,(\"can't find one for Pgsql\") , BUT I\ndon't know what \nto escape and not to escape. So it keeps failing. I can't find any docs. on\nwhat to escape either?\n\nI'm not sure what you mean by that. 
What characters would you\nwant to escape in what way and why?\n\nAnd by the way, what client interface are you using?\n\nRegards,\nRené Pijlman\n", "msg_date": "Thu, 23 Aug 2001 21:33:35 +0200", "msg_from": "Rene Pijlman <rpijlman@wanadoo.nl>", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh?" }, { "msg_contents": "> I'm trying my best to convert from MySQL to PgSQL but I can't get a good\n> clear answer about\n> certain issues. Mainly TEXT, TOAST, BLOB, BYTEA etc.\n> It was an easy task in mysql but everything in the archives about , text ,\n> toast and bytea is just\n> confusing me with postgresql. I have Bruce's book and I've searched the\n\nQuick glossary:\n\nTEXT is a datatype which stores character data of unspecified length (up to\nthe max value of a 4 byte integer in length, although I've seen comments\nindicating that the practical limit is closer to 1 GB -- not sure why). TEXT\nis not intended to hold arbitrary binary data. If you want to store binary\nin a text column, encode it to hex or base64 or something first.\n\nTOAST is an internal database concept. If I understand it correctly, it\nrefers to a combination of compression and out-of-line storage for large\nlength values of a character datatype. This happens transparently to you.\n\nBLOB is otherwise known as LO or Large Object datatype in PostgreSQL. These\nare always stored out-of-line, I don't believe they are compressed, and they\nhave their own special access methods (for dealing with data a \"chunk\" at a\ntime).\n\nBYTEA is very similar to TEXT, except that it is intended for binary data. I\nrecently posted a PHP function which escapes binary in order to allow\ninserting it into a bytea column (on the pgsql-general list). At a minimum\nthere are 4 characters which must be escaped. They are ASCII 0, 10, 39, and\n92. 
They must reach PostgreSQL looking like \\\\000, \\\\012, \\\\047, and \\\\134\nrespectively (actually 39 could be \\' and 92 could be \\\\\\\\, but it is\nsimpler to be consistent).\n\n> THIS IS WHAT I CAN'T SEEM TO FIGURE OUT IN POSTGRESQL\n> 1. I can't get a clear answer on what kind of data type to use for my large\n> text string? TEXT, ???, ??? or something about TOAST\n> I have seen in the e-mail archive but can't find any documentation?\n\nSo, you can use TEXT if you encode to hex or base64 in your app first, or\nuse bytea if you escape as I described above in your app. Or you can use the\nLO functions to manipulate large objects (see\nhttp://www.postgresql.org/idocs/index.php?lo-interfaces.html).\n\n>\n> 2. I've written my own escape method ,(\"can't find one for Pgsql\") , BUT I\n> don't know what\n> to escape and not to escape. So it keeps failing. I can't find any docs. on\n> what to escape either?\n\nSee above.\n\n>\n>\n> SUMMARY\n> What is the best datatype to use, for large raw text and/or binary if I\n> choose?\n> Once I know this, how I'm supposed to escape my string and get it through\n> the parser correctly so I can retrieve it correctly?\n\nIf you use TEXT, you will have to decode the hex/base64 back into binary in\nyour app. Similarly, if you use bytea, although stored as binary, the data\nis returned with \"unprintable\" values escaped as octals*. Your app will have\nto decode the octals back into binary.\n\n*NOTE to hackers: is there a good reason for this? ISTM that the client\nshould be responsible for any encoding needed when bytea is returned. The\nserver should return bytea as straight varlena.\n\nIf you use LO, you have to use the interface functions instead of standard\nSQL.\n\nHope this helps,\n\n-- Joe\n\n", "msg_date": "Thu, 23 Aug 2001 13:00:10 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh?" 
}, { "msg_contents": "Quoting Joe Conway <joseph.conway@home.com>:\n\n> TEXT is a datatype which stores character data of unspecified length (up\n> to\n> the max value of a 4 byte integer in length, although I've seen\n> comments\n> indicating that the practical limit is closer to 1 GB -- not sure why).\n\nIt may be something to do with the 1Gb splitting of the physical files \nrepresenting a table... Unless it changed recently, a table was split over \nmultiple files at the 1Gb mark.\n\nPeter\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Fri, 24 Aug 2001 10:27:48 -0400 (EDT)", "msg_from": "Peter T Mount <peter@retep.org.uk>", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh?" }, { "msg_contents": "Peter T Mount <peter@retep.org.uk> writes:\n> Quoting Joe Conway <joseph.conway@home.com>:\n>> indicating that the practical limit is closer to 1 GB -- not sure why).\n\n> It may be something to do with the 1Gb splitting of the physical files \n> representing a table...\n\nNo, that's just a coincidence. The reason that TOAST limits fields to\n1Gb is that the high-order two bits of the varlena length word were\ncommandeered as TOAST state indicators. There are now only thirty bits\navailable to represent the length of a variable-length Datum; hence the\nhard limit on field width is 1Gb.\n\nI'd think that the \"practical\" limit is quite a bit less than that, at\nleast until we devise an API that lets you read and write toasted values\nin sections.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Aug 2001 11:15:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh? " }, { "msg_contents": "> No, that's just a coincidence. 
The reason that TOAST limits fields to\n> 1Gb is that the high-order two bits of the varlena length word were\n> commandeered as TOAST state indicators. There are now only thirty bits\n> available to represent the length of a variable-length Datum; hence the\n> hard limit on field width is 1Gb.\n> \n> I'd think that the \"practical\" limit is quite a bit less than that, at\n> least until we devise an API that lets you read and write toasted values\n> in sections.\n\nYes, passing around multi-gigabyte memory chunks in a process is pretty\nslow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 12:07:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh?" }, { "msg_contents": "Peter T Mount wrote:\n> Quoting Joe Conway <joseph.conway@home.com>:\n>\n> > TEXT is a datatype which stores character data of unspecified length (up\n> > to\n> > the max value of a 4 byte integer in length, although I've seen\n> > comments\n> > indicating that the practical limit is closer to 1 GB -- not sure why).\n>\n> It may be something to do with the 1Gb splitting of the physical files\n> representing a table... Unless it changed recently, a table was split over\n> multiple files at the 1Gb mark.\n\n No, it's because the upper two bits of the variable size\n field are used as flags.\n\n But in practice there are other limits that force you to keep\n the objects you throw into text or bytea fields a lot smaller.\n When your INSERT query is received, parsed, planned and a\n heap tuple created, there are at least four copies of that\n object in the backend's memory. How much virtual memory does\n your OS support for one single process?\n\n And by the way, TOAST is not only used for character data\n types. 
All variable size data types in the base system are\n toastable. Well, arrays might be considered sort of pop-tarts\n here, but anyway, they get toasted.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 24 Aug 2001 15:00:19 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh?" }, { "msg_contents": "joseph.conway@home.com (\"Joe Conway\") wrote in message news:<02a101c12c0e$37856d60$48d210ac@jecw2k1>...\n... snip\n> BYTEA is very similar to TEXT, except that it is intended for binary data. I\n> recently posted a PHP function which escapes binary in order to allow\n> inserting it into a bytea column (on the pgsql-general list). At a minimum\n> there are 4 characters which must be escaped. They are ACSII 0, 10, 39, and\n> 92. They must reach PostgreSQL looking like \\\\000, \\\\012, \\\\047, and \\\\134\n> respectively (actually 39 could be \\' and 92 could be \\\\\\\\, but it is\n> simpler to be consistent).\n... snip\n\nIs it actually necessary to escape \\012 (linefeed) in a query? My\nbrief testing using psql or python pygresql seemed to work ok with\nonly \\000, \\', and \\\\ escaped. Gosh, maybe all my data is corrupted\n(!!)\n\nRyan\n", "msg_date": "4 Sep 2001 09:32:58 -0700", "msg_from": "ryan_rs@c4.com (Ryan)", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh?" }, { "msg_contents": "> joseph.conway@home.com (\"Joe Conway\") wrote in message\nnews:<02a101c12c0e$37856d60$48d210ac@jecw2k1>...\n> ... 
snip\n> > BYTEA is very similar to TEXT, except that it is intended for binary\ndata. I\n> > recently posted a PHP function which escapes binary in order to allow\n> > inserting it into a bytea column (on the pgsql-general list). At a\nminimum\n> > there are 4 characters which must be escaped. They are ACSII 0, 10, 39,\nand\n> > 92. They must reach PostgreSQL looking like \\\\000, \\\\012, \\\\047, and\n\\\\134\n> > respectively (actually 39 could be \\' and 92 could be \\\\\\\\, but it is\n> > simpler to be consistent).\n> ... snip\n>\n> Is it actually necessary to escape \\012 (linefeed) in a query? My\n> brief testing using psql or python pygresql seemed to work ok with\n> only \\000, \\', and \\\\ escaped. Gosh, maybe all my data is corrupted\n> (!!)\n>\n\nSorry the response is a week late, but your post just hit the list (at least\nI just got it). I found after sending this, that the problem with linefeeds\nwas in my PHP code, so you should be OK :-)\n\nSorry for the confusion I may have caused!\n\nJoe\n\n\n", "msg_date": "Mon, 10 Sep 2001 08:01:19 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh?" }, { "msg_contents": "> joseph.conway@home.com (\"Joe Conway\") wrote in message news:<02a101c12c0e$37856d60$48d210ac@jecw2k1>...\n> ... snip\n> > BYTEA is very similar to TEXT, except that it is intended for binary data. I\n> > recently posted a PHP function which escapes binary in order to allow\n> > inserting it into a bytea column (on the pgsql-general list). At a minimum\n> > there are 4 characters which must be escaped. They are ACSII 0, 10, 39, and\n> > 92. They must reach PostgreSQL looking like \\\\000, \\\\012, \\\\047, and \\\\134\n> > respectively (actually 39 could be \\' and 92 could be \\\\\\\\, but it is\n> > simpler to be consistent).\n> ... snip\n> \n> Is it actually necessary to escape \\012 (linefeed) in a query? 
My\n> brief testing using psql or python pygresql seemed to work ok with\n> only \\000, \\', and \\\\ escaped. Gosh, maybe all my data is corrupted\n> (!!)\n\nThe linefeed escape was reported by a PHP user and perhaps there is an\nissue with PHP only. Not sure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Sep 2001 11:22:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Toast, Text, blob bytea Huh?" } ]
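The escaping rules hashed out in the thread above — ASCII 0, 10, 39 and 92 must arrive at the server as \\000, \\012, \\047 and \\134 inside a quoted literal, and bytea values come back with single-backslash octal escapes the client must decode — can be sketched in Python. This is only an illustration of the manual client-side quoting discussed in the thread, not an official driver function; the function names are made up:

```python
def escape_bytea(data: bytes) -> str:
    """Escape raw bytes for use inside a single-quoted SQL literal.

    Per the thread: NUL, LF, single quote and backslash must reach the
    server as \\000, \\012, \\047 and \\134 -- the backslash is doubled
    because the SQL string-literal parser strips one level first.
    """
    out = []
    for b in data:  # iterating bytes yields ints in Python 3
        if b in (0, 10, 39, 92):
            out.append("\\\\%03o" % b)  # e.g. byte 0 -> the five chars \\000
        else:
            out.append(chr(b))
    return "".join(out)


def unescape_bytea(text: str) -> bytes:
    """Decode the octal-escaped form the server returns for bytea output."""
    out = bytearray()
    i = 0
    while i < len(text):
        if text[i] == "\\":
            if text[i + 1] == "\\":            # \\ -> one literal backslash
                out.append(92)
                i += 2
            else:                              # \nnn -> one byte from octal
                out.append(int(text[i + 1:i + 4], 8))
                i += 4
        else:
            out.append(ord(text[i]))
            i += 1
    return bytes(out)
```

The escaped string would then be spliced into a literal, e.g. `INSERT INTO t (b) VALUES ('...')`; note that escape and unescape are deliberately not inverses of each other, because one backslash level of the input form is consumed by the SQL string-literal parser before the bytea input routine ever sees it.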
[ { "msg_contents": "There seem to be several ways to get at just about anything in the\nCatalog Tables. The ODBC driver, psql, and pg_dump typically use\nslightly different SQL and you guys have suggested even better ways. Forgive\nme as I ask for more.\n\nHow do I determine the foreign keys in a table?\n\nI see pg_class.relfkeys and pg_class.relrefs. I am not sure what the\ndifference is between the two. In any case, where can I go to find the\ntable/column(s) for each fk?\n\nHaving this info will allow me to accurately connect the tables in the\nreverse engineered ERD. Very cool.\n\nPeter\n\n\n\n", "msg_date": "Thu, 23 Aug 2001 14:02:16 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "Reverse Engineering" } ]
[ { "msg_contents": "Hello, i just reviewed the win32 errno patch and i saw that maybe i didn't\nreally played it totally safe in my last suggestion, the system table might\npick up the msg but not the netmsg.dll, so better try both.\nI also added a hex printout of the \"errno\" appended to all messages, that's\nnicer.\n\nIf anyone hate my coding style, or that i'm using goto constructs, just tell\nme, and i'll rework it into a nested if () thing.\n\nPatch attached.\n\nMagnus\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-", "msg_date": "Thu, 23 Aug 2001 23:22:14 +0200", "msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "[PATCH] Win32 errno a little bit safer" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Hello, i just reviewed the win32 errno patch and i saw that maybe i didn't\n> really played it totally safe in my last suggestion, the system table might\n> pick up the msg but not the netmsg.dll, so better try both.\n> I also added a hex printout of the \"errno\" appended to all messages, that's\n> nicer.\n> \n> If anyone hate my coding style, or that i'm using goto constructs, just tell\n> me, and i'll rework it into a nested if () thing.\n> \n> Patch attached.\n> \n> Magnus\n> \n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> Programmer/Networker [|] Magnus Naeslund\n> PGP Key: http://www.genline.nu/mag_pgp.txt\n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Sep 2001 16:31:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Win32 errno a little bit safer" }, { "msg_contents": "\nThanks. Patch applied.\n\n> Hello, i just reviewed the win32 errno patch and i saw that maybe i didn't\n> really played it totally safe in my last suggestion, the system table might\n> pick up the msg but not the netmsg.dll, so better try both.\n> I also added a hex printout of the \"errno\" appended to all messages, that's\n> nicer.\n> \n> If anyone hate my coding style, or that i'm using goto constructs, just tell\n> me, and i'll rework it into a nested if () thing.\n> \n> Patch attached.\n> \n> Magnus\n> \n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> Programmer/Networker [|] Magnus Naeslund\n> PGP Key: http://www.genline.nu/mag_pgp.txt\n> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Sep 2001 22:51:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Win32 errno a little bit safer" } ]
[ { "msg_contents": "> > If the licence becomes a problem I can easily change it,\n> > but I prefer the GPL if possible.\n> \n> We just wanted to make sure the backend changes were not\n> under the GPL.\n\nNo, Bruce - backend part of code is useless without interface\nfunctions and I wonder doesn't GPL-ed interface implementation\nprevent using of user-locks in *commercial* applications.\nFor example, one could use user-locks for processing incoming\norders by multiple operators:\nselect * from orders where user_lock(orders.oid) = 1 LIMIT 1\n- so each operator would lock one order for processing and\noperators wouldn't block each other. So, could such\napplication be commercial with current licence of\nuser_lock()? (Sorry, I'm not licence guru.)\n\nDISCLAIMER (to avoid ungrounded rumors -:))\nI have no plans to use user-locks in applications\nof *any* kind (free/commercial). It's just matter of\nprinciple - anything in/from backend code maybe used\nfor any purposes, - that's nature of our licence.\n\nDISCLAIMER II (to avoid ungrounded rumors in future -:))\nI would be interested in using proposed \"key-locking\"\nin some particular commercial application but this\nfeature is not \"must have\" for that application -\nfor my purposes it's enough:\n\n----------------------------------------------------------\nLOCKTAG tag;\ntag.relId = XactLockTableId;\ntag.dbId = _tableId_;\n// tag.dbId = InvalidOid is used in XactLockTableInsert\n// and no way to use valid OID for XactLock purposes,\n// so no problem\ntag.objId.xid = _user_key_;\n----------------------------------------------------------\n\n- but I like standard solutions more -:)\n(BTW, key-locking was requested by others a long ago.)\n\nVadim\n", "msg_date": "Thu, 23 Aug 2001 15:30:11 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: User locks code" }, { "msg_contents": "> > > If the licence becomes a problem I can easily change it,\n> > > but I prefer the 
GPL if possible.\n> > \n> > We just wanted to make sure the backend changes were not\n> > under the GPL.\n> \n> No, Bruce - backend part of code is useless without interface\n> functions and I wonder doesn't GPL-ed interface implementation\n> prevent using of user-locks in *commercial* applications.\n> For example, one could use user-locks for processing incoming\n> orders by multiple operators:\n> select * from orders where user_lock(orders.oid) = 1 LIMIT 1\n> - so each operator would lock one order for processing and\n> operators wouldn't block each other. So, could such\n> application be commercial with current licence of\n> user_lock()? (Sorry, I'm not licence guru.)\n\nI assume any code that uses contrib/userlock has to be GPL'ed, meaning\nit can be used for commercial purposes but can't be sold as binary-only,\nand actually can't be sold for much because you have to make the code\navailable for near-zero cost.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 18:55:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: User locks code" } ]
[ { "msg_contents": "> > For example, one could use user-locks for processing incoming\n> > orders by multiple operators:\n> > select * from orders where user_lock(orders.oid) = 1 LIMIT 1\n> > - so each operator would lock one order for processing and\n> > operators wouldn't block each other. So, could such\n> > application be commercial with current licence of\n> > user_lock()? (Sorry, I'm not licence guru.)\n> \n> I assume any code that uses contrib/userlock has to be GPL'ed,\n> meaning it can be used for commercial purposes but can't be sold\n> as binary-only, and actually can't be sold for much because you\n> have to make the code available for near-zero cost.\n\nI'm not talking about selling contrib/userlock separately, but\nabout the ability to sell applications which use contrib/userlock.\nSorry, if it was not clear.\n\nVadim\n", "msg_date": "Thu, 23 Aug 2001 16:01:16 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: User locks code" }, { "msg_contents": "> > > For example, one could use user-locks for processing incoming\n> > > orders by multiple operators:\n> > > select * from orders where user_lock(orders.oid) = 1 LIMIT 1\n> > > - so each operator would lock one order for processing and\n> > > operators wouldn't block each other. So, could such\n> > > application be commercial with current licence of\n> > > user_lock()? (Sorry, I'm not licence guru.)\n> > \n> > I assume any code that uses contrib/userlock has to be GPL'ed,\n> > meaning it can be used for commercial purposes but can't be sold\n> > as binary-only, and actually can't be sold for much because you\n> > have to make the code available for near-zero cost.\n> \n> I'm not talking about selling contrib/userlock separately, but\n> about the ability to sell applications which use contrib/userlock.\n> Sorry, if it was not clear.\n\nNo, you were clear. 
My assumption is that once you link that code into\nthe backend, the entire backend is GPL'ed and any other application code\nyou link into it is also (stored procedures, triggers, etc.) I don't\nthink your client application will be GPL'ed, assuming you didn't link\nin libreadline.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 19:04:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: User locks code" } ]
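Setting the licensing question aside, the non-blocking claim pattern from Vadim's example in the thread above — `select * from orders where user_lock(orders.oid) = 1 LIMIT 1` — can be illustrated outside the database with an ordinary try-lock. This is only an analogy for the semantics; the `WorkQueue` class is made up for illustration, not PostgreSQL code:

```python
import threading

class WorkQueue:
    """Toy model of per-row advisory locking.

    One lock per row stands in for a per-OID user lock; claim() takes
    the first row whose lock it can get without blocking, the analogue
    of SELECT ... WHERE user_lock(oid) = 1 LIMIT 1.
    """
    def __init__(self, items):
        self.items = [(item, threading.Lock()) for item in items]

    def claim(self):
        for item, lock in self.items:
            if lock.acquire(blocking=False):  # like user_lock() returning 1
                return item, lock
        return None, None  # every row is already claimed

queue = WorkQueue(["order-1", "order-2"])
first, lock1 = queue.claim()    # one operator claims "order-1"
second, lock2 = queue.claim()   # a second operator skips it, claims "order-2"
lock1.release()                 # like releasing the user lock when done
```

Each operator claims a different row without blocking on rows already taken, which is what the per-row user lock buys in the SQL version of the pattern.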
[ { "msg_contents": "> > > I assume any code that uses contrib/userlock has to be GPL'ed,\n> > > meaning it can be used for commercial purposes but can't be sold\n> > > as binary-only, and actually can't be sold for much because you\n> > > have to make the code available for near-zero cost.\n> > \n> > I'm talking not about solding contrib/userlock separately, but\n> > about ability to sold applications which use contrib/userlock.\n> > Sorry, if it was not clear.\n> \n> No, you were clear.\n\nSo I missed your \"near-zero cost\" sentence.\n\n> My assumption is that once you link that code into\n> the backend, the entire backend is GPL'ed and any other\n> application code you link into it is also (stored procedures,\n> triggers, etc.) I don't think your client application will\n> be GPL'ed, assuming you didn't link in libreadline.\n\nApplication would explicitly call user_lock() functions in\nqueries, so issue is still not clear for me. And once again -\ncompare complexities of contrib/userlock and backend' userlock\ncode: what's reason to cover contrib/userlock by GPL?\n\nVadim\n", "msg_date": "Thu, 23 Aug 2001 16:24:39 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: User locks code" }, { "msg_contents": "> > No, you were clear.\n> \n> So I missed your \"near-zero cost\" sentence.\n\nOK.\n\n> > My assumption is that once you link that code into\n> > the backend, the entire backend is GPL'ed and any other\n> > application code you link into it is also (stored procedures,\n> > triggers, etc.) I don't think your client application will\n> > be GPL'ed, assuming you didn't link in libreadline.\n> \n> Application would explicitly call user_lock() functions in\n> queries, so issue is still not clear for me. And once again -\n\nWell, yes, it calls user_lock(), but the communication is not OS-linked,\nit is linked over a network socket, so I don't think the GPL spreads\nover a socket. 
Just as telnet'ing somewhere and typing 'bash' doesn't\nmake your telnet GPL'ed, so I think the client code is safe. To the\nclient, the backend is just returning information. You don't really\nlink to the query results.\n\n> compare complexities of contrib/userlock and backend' userlock\n> code: what's reason to cover contrib/userlock by GPL?\n\nOnly because Massimo prefers it. I can think of no other reason. It\nclearly GPL-stamps any backend that links it in.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Aug 2001 19:34:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: User locks code" } ]
[ { "msg_contents": "Hi,\n\nI am rather new to Postgresql and am having trouble with\nsome aspects of large objects. I am coming from a MySQL\nbackgroun where a longblob could just be another column so\nplease excuse my ignorance.\n\nI am creating objects using the DBI interface and inserting\ndata which works well. I am however having trouble\ndeleting blob data from the db.\ntest=> \\lo_list\n Large objects\n ID | Description\n-------+-------------\n 89803 |\n 90068 |\n(2 rows)\n\ntest=> \\lo_unlink 89803\nERROR: pg_description: Permission denied.\ntest=>\n\nDo I have to grant any user who needs blobs access to a\nsystem table (pg_description)? If so, how much damage can\nthey do?\n\nAlso, I have been reading about tforeign keys which looks\nvery neat. Can these be extended to large objects. For\nexample:\ntable files\nfilename varchar(120) not null,\ndata oid\ndelete from files where filename='badfile.txt';\nCan I have the delete statement above knock out the\nassociated large object if one exists?\n\nThanks in advance,\nShane\n\n-- \nShane Wegner: shane@cm.nu\n http://www.cm.nu/~shane/\nPGP: 1024D/FFE3035D\n A0ED DAC4 77EC D674 5487\n 5B5C 4F89 9A4E FFE3 035D\n", "msg_date": "Thu, 23 Aug 2001 17:19:14 -0700", "msg_from": "Shane Wegner <shane@cm.nu>", "msg_from_op": true, "msg_subject": "Problems with large objects" }, { "msg_contents": "Shane Wegner <shane@cm.nu> writes:\n> test=> \\lo_unlink 89803\n> ERROR: pg_description: Permission denied.\n\nHmm. Maybe those client-side comment manipulations in psql aren't\nsuch a hot idea. I know I never tested them as non-superuser :-(\n\nShane, try that from a superuser Postgres userid. 
Meanwhile,\nit's back to the drawing board for us.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Aug 2001 22:57:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Permissions for large-object comments" }, { "msg_contents": "On Thu, Aug 23, 2001 at 10:57:29PM -0400, Tom Lane wrote:\n> Shane Wegner <shane@cm.nu> writes:\n> > test=> \\lo_unlink 89803\n> > ERROR: pg_description: Permission denied.\n> \n> Hmm. Maybe those client-side comment manipulations in psql aren't\n> such a hot idea. I know I never tested them as non-superuser :-(\n> \n> Shane, try that from a superuser Postgres userid. Meanwhile,\n> it's back to the drawing board for us.\n\nYes an unlink works fine as the superuser. I can't unlink\nusing the DBI interface either though which suggests it not\nbeing psql's fault.\n\nShane\n\n> \n> \t\t\tregards, tom lane\n\n-- \nShane Wegner: shane@cm.nu\n http://www.cm.nu/~shane/\nPGP: 1024D/FFE3035D\n A0ED DAC4 77EC D674 5487\n 5B5C 4F89 9A4E FFE3 035D\n", "msg_date": "Thu, 23 Aug 2001 20:00:30 -0700", "msg_from": "Shane Wegner <shane@cm.nu>", "msg_from_op": true, "msg_subject": "Re: Permissions for large-object comments" }, { "msg_contents": "Tom Lane writes:\n\n> Shane Wegner <shane@cm.nu> writes:\n> > test=> \\lo_unlink 89803\n> > ERROR: pg_description: Permission denied.\n>\n> Hmm. Maybe those client-side comment manipulations in psql aren't\n> such a hot idea. I know I never tested them as non-superuser :-(\n\n:-(\n\n> Shane, try that from a superuser Postgres userid. Meanwhile,\n> it's back to the drawing board for us.\n\nI'm not sure about the future of the large objects, so I'm less eager to\ninvent a new mechanism. 
I'm open to ideas, however.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 25 Aug 2001 02:45:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Permissions for large-object comments" }, { "msg_contents": "On Sat, Aug 25, 2001 at 02:45:40AM +0200, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Shane Wegner <shane@cm.nu> writes:\n> > > test=> \\lo_unlink 89803\n> > > ERROR: pg_description: Permission denied.\n> >\n> > Hmm. Maybe those client-side comment manipulations in psql aren't\n> > such a hot idea. I know I never tested them as non-superuser :-(\n> \n> :-(\n> \n> > Shane, try that from a superuser Postgres userid. Meanwhile,\n> > it's back to the drawing board for us.\n> \n> I'm not sure about the future of the large objects, so I'm less eager to\n> invent a new mechanism. I'm open to ideas, however.\n\nWell, as I'm not a developer, this is out of my league. \nHowever, if the future of large objects is in question, is\nthere a better way I should be storing large chunks of\nbinary data in the database? The text column doesn't seem\nto support it.\nRegards,\nShane\n\n-- \nShane Wegner: shane@cm.nu\n http://www.cm.nu/~shane/\nPGP: 1024D/FFE3035D\n A0ED DAC4 77EC D674 5487\n 5B5C 4F89 9A4E FFE3 035D\n", "msg_date": "Fri, 24 Aug 2001 17:48:18 -0700", "msg_from": "Shane Wegner <shane@cm.nu>", "msg_from_op": true, "msg_subject": "Re: Permissions for large-object comments" } ]
[ { "msg_contents": "Hi Everyone,\nJust wanted to let you all know that I have been\nworking on development of financial applications\nusing java, javascript, javabeans and of course\nthe PostgreSQL database for the past one year. I was out of\ntouch with the community for this time and it kinda\nfeels as if I am coming out of the trenches. I heard and\nread interviews by Geoff Davidson and Bruce Momjian.\nIn Geoff Davidson's interview there is talk about the need\nfor Applications like Oracle or many other commercial\nvendors have. \nWell I cannot say that my Application, which I fondly\ncall ERPTool, can fill the need, but it definitely can\nprovide a very good starting point.\n\nRight now the modules it has are,\nOrder Entry,\nPurchasing,\nReceivables,\nPayables,\nGL (Basic)\nInventory (Very Basic)\nThere are three main points that set it apart from the\ncommercial applications.\n1. It is built using mostly free/open source software.\n2. It is highly and very rapidly customizable.\n3. It needs nothing more than a browser on client side\nto function. \nWhile it serves all the basic needs for a small\ncompany,\nit can be very rapidly expanded by adding new forms and\nfunctionality to suit the needs of an enterprise of\nany size. I would like to collaborate with the PostgreSQL\nand other open source communities to make one of the\nmost versatile and dependable products using open\nsource code.\nPlease let me know if this can be done.\n\nRegards\nAmandeep Singh\n\n\n\n__________________________________________________\nDo You Yahoo!?\nMake international calls for as low as $.04/minute with Yahoo! 
Messenger\nhttp://phonecard.yahoo.com/\n", "msg_date": "Thu, 23 Aug 2001 17:33:14 -0700 (PDT)", "msg_from": "Amandeep Singh <aman_boparai@yahoo.com>", "msg_from_op": true, "msg_subject": "ERP Applications on Postgresql -- ERPTool" }, { "msg_contents": "Hi Amandeep,\n\nDo you have an URL for your application?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nAmandeep Singh wrote:\n> \n> Hi Everyone,\n> Just wanted to let you all know that I have been\n> working on development of financial applications\n> using,java, javascript, javabeans and of course\n> PostgreSQL database for past one year. I was out of\n> touch with the community for this time and it kinda\n> feels like as if I am coming out trenches. I heard and\n> read interviews by Geoff Davidson and Bruce Momijam.\n> In Geoff Davidson's interview there is talk about need\n> of Applications like Oracle or many other commercial\n> vendors have.\n> Well I cannot say that my Application , which I fondly\n> call ERPTool , can fill the need but ,it definitly can\n> provide a very good starting point.\n> \n> Right now the modules it has are,\n> Order Entry,\n> Purchasing,\n> Receivables,\n> Payables,\n> GL (Basic)\n> Inventory (Very Basic)\n> There are three main points that set it apart from the\n> commercial applications.\n> 1.It is built using mostly free/open source software.\n> 2.It is highly and very rapidly customizable.\n> 3.It needs nothing more than a browser on client side\n> to function.\n> While it serves all the basic needs for a small\n> company\n> it can very rapidly expanded by adding new forms and\n> functionality to suit the needs of an enterprise of\n> any size. 
I would like to colloborate with postgresql\n> and other open source communities to make one of the\n> most versetile and dependable product using open\n> source code.\n> Please let me know if this can be done.\n> \n> Regards\n> Amandeep Singh\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Make international calls for as low as $.04/minute with Yahoo! Messenger\n> http://phonecard.yahoo.com/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 24 Aug 2001 18:30:46 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ERP Applications on Postgresql -- ERPTool" }, { "msg_contents": "Hi Justin, Andrew,\nI am making doing some essential face lift on it right\nnow, Once I am done I will send you the URL.\n\nAmandeep\n--- Justin Clift <justin@postgresql.org> wrote:\n> Hi Amandeep,\n> \n> Do you have an URL for your application?\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> \n> Amandeep Singh wrote:\n> > \n> > Hi Everyone,\n> > Just wanted to let you all know that I have been\n> > working on development of financial applications\n> > using,java, javascript, javabeans and of course\n> > PostgreSQL database for past one year. I was out\n> of\n> > touch with the community for this time and it\n> kinda\n> > feels like as if I am coming out trenches. 
I heard\n> and\n> > read interviews by Geoff Davidson and Bruce\n> Momijam.\n> > In Geoff Davidson's interview there is talk about\n> need\n> > of Applications like Oracle or many other\n> commercial\n> > vendors have.\n> > Well I cannot say that my Application , which I\n> fondly\n> > call ERPTool , can fill the need but ,it definitly\n> can\n> > provide a very good starting point.\n> > \n> > Right now the modules it has are,\n> > Order Entry,\n> > Purchasing,\n> > Receivables,\n> > Payables,\n> > GL (Basic)\n> > Inventory (Very Basic)\n> > There are three main points that set it apart from\n> the\n> > commercial applications.\n> > 1.It is built using mostly free/open source\n> software.\n> > 2.It is highly and very rapidly customizable.\n> > 3.It needs nothing more than a browser on client\n> side\n> > to function.\n> > While it serves all the basic needs for a small\n> > company\n> > it can very rapidly expanded by adding new forms\n> and\n> > functionality to suit the needs of an enterprise\n> of\n> > any size. I would like to colloborate with\n> postgresql\n> > and other open source communities to make one of\n> the\n> > most versetile and dependable product using open\n> > source code.\n> > Please let me know if this can be done.\n> > \n> > Regards\n> > Amandeep Singh\n> > \n> > __________________________________________________\n> > Do You Yahoo!?\n> > Make international calls for as low as $.04/minute\n> with Yahoo! Messenger\n> > http://phonecard.yahoo.com/\n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> >\n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> -- \n> \"My grandfather once told me that there are two\n> kinds of people: those\n> who work and those who take the credit. 
He told me\n> to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n\n\n__________________________________________________\nDo You Yahoo!?\nMake international calls for as low as $.04/minute with Yahoo! Messenger\nhttp://phonecard.yahoo.com/\n", "msg_date": "Fri, 24 Aug 2001 10:06:13 -0700 (PDT)", "msg_from": "Amandeep Singh <aman_boparai@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: ERP Applications on Postgresql -- ERPTool" } ]
[ { "msg_contents": "> > AFAICS, if you are holding an open SQL cursor, it is sufficient\n> > to check that ctid hasn't changed to know that you have the\n> > same, un-updated tuple. Under MVCC rules, VACUUM will be unable\n> > to delete any tuple that is visible to your open transaction,\n> > and so new-style VACUUM cannot recycle the ctid.\n...\n> \n> As Tom mentiond once in this thread, I've referred to non-SQL\n> cursors which could go across transaction boundaries.\n> TIDs aren't that reliable across transactions.\n\nWe could avoid reassignment of MyProc->xmin having cursors\nopened across tx boundaries and so new-style vacuum wouldn't\nremove old tuple versions...\n\n> OIDs and xmin have already lost a part of its nature. Probably\n> I have to guard myself beforehand and so would have to mention\n> repeatedly from now on that if we switch to an overwriting smgr,\n> there's no system item to detect the change of tuples. \n\nSo, is tid ok to use for your purposes?\nI think we'll be able to restore old tid along with other tuple\ndata from rollback segments, so I don't see any problem from\nosmgr...\n\nVadim\n", "msg_date": "Thu, 23 Aug 2001 18:15:12 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: CURRENT OF cursor without OIDs" }, { "msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > > AFAICS, if you are holding an open SQL cursor, it is sufficient\n> > > to check that ctid hasn't changed to know that you have the\n> > > same, un-updated tuple. 
Under MVCC rules, VACUUM will be unable\n> > > to delete any tuple that is visible to your open transaction,\n> > > and so new-style VACUUM cannot recycle the ctid.\n> ...\n> >\n> > As Tom mentiond once in this thread, I've referred to non-SQL\n> > cursors which could go across transaction boundaries.\n> > TIDs aren't that reliable across transactions.\n> \n> We could avoid reassignment of MyProc->xmin having cursors\n> opened across tx boundaries and so new-style vacuum wouldn't\n> remove old tuple versions...\n\nOops I'm referring to client side cursors in our ODBC\ndriver. We have no cross-transaction cursors yet though\nI'd like to see a backend cross-transaction cursor also.\n\n> \n> > OIDs and xmin have already lost a part of its nature. Probably\n> > I have to guard myself beforehand and so would have to mention\n> > repeatedly from now on that if we switch to an overwriting smgr,\n> > there's no system item to detect the change of tuples.\n> \n> So, is tid ok to use for your purposes?\n\nNo. I need an OID-like column which is independent from\nthe physical position of tuples other than TID.\n \n> I think we'll be able to restore old tid along with other tuple\n> data from rollback segments, so I don't see any problem from\n> osmgr...\n\nHow do we detect the change of tuples from clients ?\nTIDs are invariant under osmgr. xmin is about to be\nunreliable for the purpose.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 24 Aug 2001 10:32:44 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: CURRENT OF cursor without OIDs" } ]
[ { "msg_contents": "\nJust a test ...\n\n", "msg_date": "Fri, 24 Aug 2001 08:54:57 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Ignore this ..." } ]
[ { "msg_contents": "I noticed while testing the preceeding patch for resultmap, that we\nuse /bin/ld -G to build the .so's. THIS DOESN'T WORK on UnixWare and\nOpenUNIX 8. \n\nWhere can I change this to use cc -G? \n\nLarry\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Fri, 24 Aug 2001 10:33:55 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "/bin/ld -G vs /usr/ccs/bin/cc -G" }, { "msg_contents": "Larry Rosenman writes:\n\n> I noticed while testing the preceeding patch for resultmap, that we\n> use /bin/ld -G to build the .so's. THIS DOESN'T WORK on UnixWare and\n> OpenUNIX 8.\n>\n> Where can I change this to use cc -G?\n\nsrc/makefiles/Makefile.unixware\n\nMake sure that gcc works as well. I recall that we had some problems with\ngcc -G in the past, though I don't recall the details.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 25 Aug 2001 02:37:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: /bin/ld -G vs /usr/ccs/bin/cc -G" }, { "msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010824 19:33]:\n> Larry Rosenman writes:\n> \n> > I noticed while testing the preceeding patch for resultmap, that we\n> > use /bin/ld -G to build the .so's. THIS DOESN'T WORK on UnixWare and\n> > OpenUNIX 8.\n> >\n> > Where can I change this to use cc -G?\n> \n> src/makefiles/Makefile.unixware\n> \n> Make sure that gcc works as well. I recall that we had some problems with\n> gcc -G in the past, though I don't recall the details.\nCan you check this patch? 
I believe it will fix the GCC issue as\nwell as the native CC...\n\n\nIndex: Makefile.unixware\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/makefiles/Makefile.unixware,v\nretrieving revision 1.9\ndiff -c -r1.9 Makefile.unixware\n*** Makefile.unixware\t2000/12/16 18:14:25\t1.9\n--- Makefile.unixware\t2001/08/25 18:22:36\n***************\n*** 16,21 ****\n else\n CXXFLAGS_SL = -K PIC\n endif\n \n %.so: %.o\n! \t$(LD) -G -Bdynamic -o $@ $<\n--- 16,26 ----\n else\n CXXFLAGS_SL = -K PIC\n endif\n+ ifeq ($(GCC), yes)\n+ SO_FLAGS = -shared\n+ else\n+ SO_FLAGS = -G\n+ endif\n \n %.so: %.o\n! \t$(CC) $(SO_FLAGS) -Bdynamic -o $@ $<\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 25 Aug 2001 13:23:38 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: /bin/ld -G vs /usr/ccs/bin/cc -G" } ]
[ { "msg_contents": "> Oops I'm referring to client side cursors in our ODBC\n> driver. We have no cross-transaction cursors yet though\n> I'd like to see a backend cross-transaction cursor also.\n\nOps, sorry.\nBTW, what are \"visibility\" rules for ODBC cross-tx cursor?\nNo Repeatable reads, no Serializability?\nDo you hold some locks over table while cursor opened\n(I noticed session locking in lmgr recently)?\nCould ODBC cross-tx cursors be implemented using server\ncross-tx cursors?\n\n> > I think we'll be able to restore old tid along with other tuple\n> > data from rollback segments, so I don't see any problem from\n> > osmgr...\n> \n> How do we detect the change of tuples from clients ?\n\nWhat version of tuple client must see? New one?\n\n> TIDs are invariant under osmgr. xmin is about to be\n> unreliable for the purpose.\n\nSeems I have to learn more about ODBC cross-tx cursors -:(\nAnyway, *MSQL*, Oracle, Informix - all have osmgr. Do they\nhave cross-tx cursors in their ODBC drivers?\n\nVadim\n", "msg_date": "Fri, 24 Aug 2001 10:06:56 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: CURRENT OF cursor without OIDs" } ]
[ { "msg_contents": "> > Besides, anyone who actually wanted to use the userlock\n> > code would need only to write their own wrapper functions\n> > to get around the GPL license.\n> \n> This is a part of copyright law that eludes me - can i write\n> a replacement function for something so simple that it can\n> essentially be done in one way only (like incrementing a\n> value by one) ?\n\nYes, this is what bothers me in the user-lock case.\nOn the other hand, the contrib/user-lock licence\ncannot cover usage of LOCKTAG and LockAcquire\n(because this code is from the backend), and this is\nall that is used in the user_lock funcs. So, that licence\nis unenforceable for everything... except the func names -:)\n\nVadim\n", "msg_date": "Fri, 24 Aug 2001 10:26:33 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: User locks code" } ]
[ { "msg_contents": "> So, rather than going over everone's IANAL opinons about mixing\n> licenses, let's just let Massimo know that it'd just be a lot\n> easier to PostgreSQL/BSD license the whole thing, if he doesn't\n> mind too much.\n\nYes, it would be better.\n\nVadim\n", "msg_date": "Fri, 24 Aug 2001 10:28:17 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: User locks code" } ]
[ { "msg_contents": "> Because the code we got from Berkeley was BSD licensed, we\n> can't change it, and because many of us like the BSD license\n> better because we don't want to require them to release the\n> source code, we just want them to use PostgreSQL. And we\n> think they will release the source code eventually anyway.\n\nAnd we think that no one will try to fork and commercialize\nserver code - today, when SAP & InterBase open their DB\ncode, it seems like a \"no-brainer\".\n\nVadim\n", "msg_date": "Fri, 24 Aug 2001 10:31:55 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: [OT] Re: User locks code" } ]
[ { "msg_contents": "I have compiled PostgreSQL 7.1.2 with gcc 3.0.1, and have the following\nproblem: if I include libpq++.h before iostream, i.e.:\n\n#include <libpq++.h>\n#include <iostream>\n\nthe compiler complains:\n\nIn file included from /usr/include/g++-v3/bits/locale_facets.tcc:38,\n from /usr/include/g++-v3/bits/std_locale.h:41,\n from /usr/include/g++-v3/bits/ostream.tcc:32,\n from /usr/include/g++-v3/bits/std_ostream.h:278,\n from /usr/include/g++-v3/bits/std_iostream.h:40,\n from /usr/include/g++-v3/iostream:31,\n from p.cc:3:\n/usr/include/g++-v3/bits/std_limits.h:286:5: missing binary operator\n/usr/include/g++-v3/bits/std_limits.h:483:5: missing binary operator\n\n\nThis is because somewhere in PostgreSQL you have the following code:\n\n#ifndef true\n#define true ((bool)1)\n#endif\n\nand it seems that in gcc 3.0.1 either \"true\" is not defined, or \"true\" being\na reserved word doesn't mean that it is defined, or the offending\nlines below didn't exist in std_limits.h; I don't know, as I no longer\nhave the old compiler. The workaround is to include iostream first, and\nthen libpq++. The offending lines in std_limits.h are the following:\n\n#ifdef __CHAR_UNSIGNED__\n#define __glibcpp_plain_char_is_signed false\n#else\n#define __glibcpp_plain_char_is_signed true\n#endif\n\nand somewhere below this one:\n\n#if __glibcpp_plain_char_is_signed\n#endif\n\nwhich is preprocessed to:\n\n#if true\n#endif\n\nwhen the #include<iostream> is before libpq++ and:\n\n#if ((bool)1)\n#endif\n\notherwise. 
This last statement is invalid for the compiler.\n\nLeandro Fanzone\n\n", "msg_date": "Fri, 24 Aug 2001 17:12:19 -0300", "msg_from": "Leandro Fanzone <leandro@hasar.com>", "msg_from_op": true, "msg_subject": "gcc 3.0.1" }, { "msg_contents": "Leandro Fanzone <leandro@hasar.com> writes:\n> I have compiled PostgreSQL 7.1.2 with gcc 3.0.1, and have the following\n> problem: if I include first libpq++.h before iostream, id est:\n> #include <libpq++.h>\n> #include <iostream>\n> the compiler complains:\n\n> This is because somewhere in PostgreSQL you have the following code:\n\n> #ifndef true\n> #define true ((bool)1)\n> #endif\n\nYeah. c.h has\n\n#ifndef __cplusplus\n#ifndef bool\ntypedef char bool;\n#endif\t /* ndef bool */\n#endif\t /* not C++ */\n\n#ifndef true\n#define true\t((bool) 1)\n#endif\n\n#ifndef false\n#define false\t((bool) 0)\n#endif\n\nIt's been like that for quite some time, but it's always struck me as\nbizarre: if we're willing to trust a C++ compiler to provide type\nbool, why would we not trust it to provide the literals true and false\nas well? 
ISTM the code should read\n\n#ifndef __cplusplus\n\n#ifndef bool\ntypedef char bool;\n#endif\n\n#ifndef true\n#define true\t((bool) 1)\n#endif\n\n#ifndef false\n#define false\t((bool) 0)\n#endif\n\n#endif\t /* not C++ */\n\nDoes anyone have an objection to this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Aug 2001 02:42:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "C++ and bool constants (was Re: [NOVICE] gcc 3.0.1)" }, { "msg_contents": "\nI like the change.\n\n> Leandro Fanzone <leandro@hasar.com> writes:\n> > I have compiled PostgreSQL 7.1.2 with gcc 3.0.1, and have the following\n> > problem: if I include first libpq++.h before iostream, id est:\n> > #include <libpq++.h>\n> > #include <iostream>\n> > the compiler complains:\n> \n> > This is because somewhere in PostgreSQL you have the following code:\n> \n> > #ifndef true\n> > #define true ((bool)1)\n> > #endif\n> \n> Yeah. c.h has\n> \n> #ifndef __cplusplus\n> #ifndef bool\n> typedef char bool;\n> #endif\t /* ndef bool */\n> #endif\t /* not C++ */\n> \n> #ifndef true\n> #define true\t((bool) 1)\n> #endif\n> \n> #ifndef false\n> #define false\t((bool) 0)\n> #endif\n> \n> It's been like that for quite some time, but it's always struck me as\n> bizarre: if we're willing to trust a C++ compiler to provide type\n> bool, why would we not trust it to provide the literals true and false\n> as well? 
ISTM the code should read\n> \n> #ifndef __cplusplus\n> \n> #ifndef bool\n> typedef char bool;\n> #endif\n> \n> #ifndef true\n> #define true\t((bool) 1)\n> #endif\n> \n> #ifndef false\n> #define false\t((bool) 0)\n> #endif\n> \n> #endif\t /* not C++ */\n> \n> Does anyone have an objection to this?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 26 Aug 2001 12:49:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: C++ and bool constants (was Re: [NOVICE] gcc 3.0.1)" }, { "msg_contents": "Fine for me also.\n\nLeandro.\n\nTom Lane wrote:\n\n> Leandro Fanzone <leandro@hasar.com> writes:\n> > I have compiled PostgreSQL 7.1.2 with gcc 3.0.1, and have the following\n> > problem: if I include first libpq++.h before iostream, id est:\n> > #include <libpq++.h>\n> > #include <iostream>\n> > the compiler complains:\n>\n> > This is because somewhere in PostgreSQL you have the following code:\n>\n> > #ifndef true\n> > #define true ((bool)1)\n> > #endif\n>\n> Yeah. c.h has\n>\n> #ifndef __cplusplus\n> #ifndef bool\n> typedef char bool;\n> #endif /* ndef bool */\n> #endif /* not C++ */\n>\n> #ifndef true\n> #define true ((bool) 1)\n> #endif\n>\n> #ifndef false\n> #define false ((bool) 0)\n> #endif\n>\n> It's been like that for quite some time, but it's always struck me as\n> bizarre: if we're willing to trust a C++ compiler to provide type\n> bool, why would we not trust it to provide the literals true and false\n> as well? 
ISTM the code should read\n>\n> #ifndef __cplusplus\n>\n> #ifndef bool\n> typedef char bool;\n> #endif\n>\n> #ifndef true\n> #define true ((bool) 1)\n> #endif\n>\n> #ifndef false\n> #define false ((bool) 0)\n> #endif\n>\n> #endif /* not C++ */\n>\n> Does anyone have an objection to this?\n>\n> regards, tom lane\n\n", "msg_date": "Mon, 27 Aug 2001 09:49:23 -0300", "msg_from": "Leandro Fanzone <leandro@hasar.com>", "msg_from_op": true, "msg_subject": "Re: C++ and bool constants (was Re: [NOVICE] gcc 3.0.1)" } ]
[ { "msg_contents": "I'm going to add an -o option to the bootstrap backend, like the\nstandalone backend has, so we can turn off the DEBUG: messages that come\nthrough during initdb. (Error messages still come through.) Those\nmessages are confusing, especially the \"database system is ready\", with a\n5 second pause after it. Just in case you wonder where they went.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Sat, 25 Aug 2001 01:24:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "DEBUG: output lines in initdb" }, { "msg_contents": "> I'm going to add an -o option to the bootstrap backend, like the\n> standalone backend has, so we can turn off the DEBUG: messages that come\n> through during initdb. (Error messages still come through.) Those\n> messages are confusing, especially the \"database system is ready\", with a\n> 5 second pause after it. Just in case you wonder where they went.\n\nYEA!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 24 Aug 2001 20:22:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DEBUG: output lines in initdb" } ]
[ { "msg_contents": "I have applied the following patch to remove MD5 usage of int64 types. \nI split it into two int32 values and did the job that way. I also\nchanged the code to use standard c.h data types, and changes 0xFF to\n0xff.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/libpq/md5.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/libpq/md5.c,v\nretrieving revision 1.4\ndiff -c -r1.4 md5.c\n*** src/backend/libpq/md5.c\t2001/08/17 02:59:19\t1.4\n--- src/backend/libpq/md5.c\t2001/08/25 01:00:23\n***************\n*** 24,33 ****\n *\tPRIVATE FUNCTIONS\n */\n \n- typedef unsigned char unsigned8;\n- typedef unsigned int unsigned32;\n- typedef unsigned long unsigned64;\n- \n #ifdef FRONTEND\n #undef palloc\n #define palloc malloc\n--- 24,29 ----\n***************\n*** 39,51 ****\n *\tThe returned array is allocated using malloc. the caller should free it\n * \twhen it is no longer needed.\n */\n! static unsigned8 *\n! createPaddedCopyWithLength(unsigned8 *b, unsigned32 *l)\n {\n! \tunsigned8 *ret;\n! \tunsigned32 q;\n! \tunsigned32 len, newLen448;\n! \tunsigned64 len64;\n \n \tlen = ((b == NULL) ? 0 : *l);\n \tnewLen448 = len + 64 - (len % 64) - 8;\n--- 35,47 ----\n *\tThe returned array is allocated using malloc. the caller should free it\n * \twhen it is no longer needed.\n */\n! static uint8 *\n! createPaddedCopyWithLength(uint8 *b, uint32 *l)\n {\n! \tuint8 *ret;\n! \tuint32 q;\n! \tuint32 len, newLen448;\n! \tuint32 len_high, len_low;\t/* 64-bit value split into 32-bit sections */\n \n \tlen = ((b == NULL) ? 0 : *l);\n \tnewLen448 = len + 64 - (len % 64) - 8;\n***************\n*** 53,63 ****\n \t\tnewLen448 += 64;\n \n \t*l = newLen448 + 8;\n! 
\tif ((ret = (unsigned8 *) malloc(sizeof(unsigned8) * *l)) == NULL)\n \t\treturn NULL;\n \n \tif (b != NULL)\n! \t\tmemcpy(ret, b, sizeof(unsigned8) * len);\n \n \t/* pad */\n \tret[len] = 0x80;\n--- 49,59 ----\n \t\tnewLen448 += 64;\n \n \t*l = newLen448 + 8;\n! \tif ((ret = (uint8 *) malloc(sizeof(uint8) * *l)) == NULL)\n \t\treturn NULL;\n \n \tif (b != NULL)\n! \t\tmemcpy(ret, b, sizeof(uint8) * len);\n \n \t/* pad */\n \tret[len] = 0x80;\n***************\n*** 65,88 ****\n \t\tret[q] = 0x00;\n \n \t/* append length as a 64 bit bitcount */\n! \tlen64 = len;\n! \tlen64 <<= 3;\n \tq = newLen448;\n! \tret[q++] = (len64 & 0xFF);\n! \tlen64 >>= 8;\n! \tret[q++] = (len64 & 0xFF);\n! \tlen64 >>= 8;\n! \tret[q++] = (len64 & 0xFF);\n! \tlen64 >>= 8;\n! \tret[q++] = (len64 & 0xFF);\n! \tlen64 >>= 8;\n! \tret[q++] = (len64 & 0xFF);\n! \tlen64 >>= 8;\n! \tret[q++] = (len64 & 0xFF);\n! \tlen64 >>= 8;\n! \tret[q++] = (len64 & 0xFF);\n! \tlen64 >>= 8;\n! \tret[q] = (len64 & 0xFF);\n \n \treturn ret;\n }\n--- 61,86 ----\n \t\tret[q] = 0x00;\n \n \t/* append length as a 64 bit bitcount */\n! \tlen_low = len;\n! \t/* split into two 32-bit values */\n! \t/* we only look at the bottom 32-bits */\n! \tlen_high = len >> 29;\n! \tlen_low <<= 3;\n \tq = newLen448;\n! \tret[q++] = (len_low & 0xff);\n! \tlen_low >>= 8;\n! \tret[q++] = (len_low & 0xff);\n! \tlen_low >>= 8;\n! \tret[q++] = (len_low & 0xff);\n! \tlen_low >>= 8;\n! \tret[q++] = (len_low & 0xff);\n! \tret[q++] = (len_high & 0xff);\n! \tlen_high >>= 8;\n! \tret[q++] = (len_high & 0xff);\n! \tlen_high >>= 8;\n! \tret[q++] = (len_high & 0xff);\n! \tlen_high >>= 8;\n! \tret[q] = (len_high & 0xff);\n \n \treturn ret;\n }\n***************\n*** 94,102 ****\n #define ROT_LEFT(x, n) (((x) << (n)) | ((x) >> (32 - (n))))\n \n static void\n! doTheRounds(unsigned32 X[16], unsigned32 state[4])\n {\n! 
\tunsigned32 a, b, c, d;\n \n \ta = state[0];\n \tb = state[1];\n--- 92,100 ----\n #define ROT_LEFT(x, n) (((x) << (n)) | ((x) >> (32 - (n))))\n \n static void\n! doTheRounds(uint32 X[16], uint32 state[4])\n {\n! \tuint32 a, b, c, d;\n \n \ta = state[0];\n \tb = state[1];\n***************\n*** 182,194 ****\n }\n \n static int\n! calculateDigestFromBuffer(unsigned8 *b, unsigned32 len, unsigned8 sum[16])\n {\n! \tregister unsigned32 i, j, k, newI;\n! \tunsigned32 l;\n! \tunsigned8 *input;\n! \tregister unsigned32 *wbp;\n! \tunsigned32 workBuff[16], state[4];\n \n \tl = len;\n \n--- 180,192 ----\n }\n \n static int\n! calculateDigestFromBuffer(uint8 *b, uint32 len, uint8 sum[16])\n {\n! \tregister uint32 i, j, k, newI;\n! \tuint32 l;\n! \tuint8 *input;\n! \tregister uint32 *wbp;\n! \tuint32 workBuff[16], state[4];\n \n \tl = len;\n \n***************\n*** 223,241 ****\n \tj = 0;\n \tfor (i = 0; i < 4; i++) {\n \t\tk = state[i];\n! \t\tsum[j++] = (k & 0xFF);\n \t\tk >>= 8;\n! \t\tsum[j++] = (k & 0xFF);\n \t\tk >>= 8;\n! \t\tsum[j++] = (k & 0xFF);\n \t\tk >>= 8;\n! \t\tsum[j++] = (k & 0xFF);\n \t}\n \treturn 1;\n }\n \n static void\n! bytesToHex(unsigned8 b[16], char *s)\n {\n \tstatic char *hex = \"0123456789abcdef\";\n \tint q, w;\n--- 221,239 ----\n \tj = 0;\n \tfor (i = 0; i < 4; i++) {\n \t\tk = state[i];\n! \t\tsum[j++] = (k & 0xff);\n \t\tk >>= 8;\n! \t\tsum[j++] = (k & 0xff);\n \t\tk >>= 8;\n! \t\tsum[j++] = (k & 0xff);\n \t\tk >>= 8;\n! \t\tsum[j++] = (k & 0xff);\n \t}\n \treturn 1;\n }\n \n static void\n! bytesToHex(uint8 b[16], char *s)\n {\n \tstatic char *hex = \"0123456789abcdef\";\n \tint q, w;\n***************\n*** 280,288 ****\n bool\n md5_hash(const void *buff, size_t len, char *hexsum)\n {\n! \tunsigned8 sum[16];\n \n! \tif (!calculateDigestFromBuffer((unsigned8 *) buff, len, sum))\n \t\treturn false;\n \n \tbytesToHex(sum, hexsum);\n--- 278,286 ----\n bool\n md5_hash(const void *buff, size_t len, char *hexsum)\n {\n! \tuint8 sum[16];\n \n! 
\tif (!calculateDigestFromBuffer((uint8 *) buff, len, sum))\n \t\treturn false;\n \n \tbytesToHex(sum, hexsum);", "msg_date": "Fri, 24 Aug 2001 21:36:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "MD5 removal of int64 code" } ]
[ { "msg_contents": "\nThis may sound like a stupid question, and i apologize if it is, but I\ncouldn't find the answer in any documentation.\n\nEvery table has a implicit column oid. Does this column have an index on it?\nI assume not, and I am putting an index on it anyway.\n\nThe real problem is that I have a table like the following:\n\ncreate table foo (\n time timestamp DEFAULT CURRENT_TIMESTAMP,\n ...\n)\n\nI insert an row, and I want to get the timestamp of that row. So i do a\nselect on oid. I want an index. Does one already exist?\n\n-rchit\n", "msg_date": "Fri, 24 Aug 2001 21:01:39 -0700", "msg_from": "Rachit Siamwalla <rachit@ensim.com>", "msg_from_op": true, "msg_subject": "Does the oid column have an implicit index on it?" }, { "msg_contents": "Rachit Siamwalla <rachit@ensim.com> writes:\n> Every table has a implicit column oid. Does this column have an index on it?\n\nNo, you must create an index if you want one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 Aug 2001 00:50:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Does the oid column have an implicit index on it? " } ]
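As the answer says, there is no index on oid until you create one; oid is a system column, but it can be indexed like any other. A sketch of the pattern from the question (the table, index name, and literal oid value are illustrative):

```sql
CREATE TABLE foo (
    time timestamp DEFAULT CURRENT_TIMESTAMP
    -- ...
);

-- No implicit index: build one explicitly on the system column.
CREATE INDEX foo_oid_idx ON foo (oid);

-- After an INSERT, look the row up by the oid the client reported:
SELECT time FROM foo WHERE oid = 123456;
```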
[ { "msg_contents": "I am going to add MD5 authentication to ODBC. What is a good way to get\nbackend/libpq/md5.c into odbc for compilation. I know people want the\nODBC to be stand-alone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 25 Aug 2001 00:33:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "MD5 for ODBC" } ]
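For context on what the driver side has to compute: the backend's MD5 password exchange double-hashes, first the password concatenated with the user name (that hex digest is what the server stores), then that digest concatenated with a random per-connection salt, with an "md5" prefix on the result. A Python sketch of the client response under that assumption (function names are illustrative):

```python
import hashlib

def _md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def pg_md5_response(password: str, user: str, salt: bytes) -> str:
    # Inner digest: md5(password || user), the form stored server-side,
    # so the cleartext password never has to be kept.
    stored = _md5_hex(password.encode() + user.encode())
    # Outer digest binds the stored hash to this connection's salt.
    return "md5" + _md5_hex(stored.encode() + salt)
```

The server performs the same outer hash over its stored value and compares; only the salt and the final response cross the wire.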
[ { "msg_contents": "Is include/config.h still supposed to be in CVS?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 25 Aug 2001 12:47:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "/include/config.h" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is include/config.h still supposed to be in CVS?\n\nWhat? config.h was never supposed to be in CVS.\n\nconfig.h.in has been renamed to pg_config.h.in, see Peter's recent\nactivity ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 Aug 2001 14:04:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: /include/config.h " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is include/config.h still supposed to be in CVS?\n> \n> What? config.h was never supposed to be in CVS.\n> \n> config.h.in has been renamed to pg_config.h.in, see Peter's recent\n> activity ...\n\nI see. It was reported to me on IRC, and I saw the file in there after\na distclean. Turns out it was left over from before pg_config.h was\ncreated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 25 Aug 2001 14:31:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: /include/config.h" } ]
[ { "msg_contents": "In trying to decide how to keep ODBC compilable standalone, I can't\nfigure out how to get md5.c in there from backend/libpq. Is it crazy to\nput md5.c in interfaces/odbc and symlink it from there to backend/libpq\nand interfaces/libpq? I don't want to have two copies of md5.c but that\nmay be another option. Opinions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 25 Aug 2001 15:06:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "linking hba.c" }, { "msg_contents": "> In trying to decide how to keep ODBC compilable standalone, I can't\n> figure out how to get md5.c in there from backend/libpq. Is it crazy to\n> put md5.c in interfaces/odbc and symlink it from there to backend/libpq\n> and interfaces/libpq? I don't want to have two copies of md5.c but that\n> may be another option. Opinions?\n>\n\nSeems to me that a stand-alone driver would have to have its own md5.c....\notherwise you may as well start linking in all kinds of code from the\ndistro. Of course that is what just about all other ODBC drivers do anyway.\n\nPeter\n\n\n", "msg_date": "Sat, 25 Aug 2001 12:26:11 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": false, "msg_subject": "Re: linking hba.c" }, { "msg_contents": "> > In trying to decide how to keep ODBC compilable standalone, I can't\n> > figure out how to get md5.c in there from backend/libpq. Is it crazy to\n> > put md5.c in interfaces/odbc and symlink it from there to backend/libpq\n> > and interfaces/libpq? I don't want to have two copies of md5.c but that\n> > may be another option. 
Opinions?\n> >\n> \n> Seems to me that a stand-alone driver would have to have its own md5.c....\n> otherwise you may as well start linking in all kinds of code from the\n> distro. Of course that is what just about all other ODBC drivers do anyway.\n\nIf we want to have only one copy of md5.c, and we want ODBC to be\nstandalone, the only way I can think to do it is to link the _other_\nuses of md5.c back to interfaces/odbc/md5.c\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 25 Aug 2001 16:15:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: linking hba.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In trying to decide how to keep ODBC compilable standalone, I can't\n> figure out how to get md5.c in there from backend/libpq.\n\nAFAIK ODBC hasn't been compilable standalone for awhile; doesn't it\ndepend on our toplevel configure script now?\n\n> Is it crazy to\n> put md5.c in interfaces/odbc and symlink it from there to backend/libpq\n> and interfaces/libpq?\n\nYES, that's crazy. If you think ODBC should build separately, then give\nit its own copy of md5.c. 
But please first check that your assumption\nis still valid.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 Aug 2001 19:18:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: linking hba.c " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > In trying to decide how to keep ODBC compilable standalone, I can't\n> > figure out how to get md5.c in there from backend/libpq.\n> \n> AFAIK ODBC hasn't been compilable standalone for awhile; doesn't it\n> depend on our toplevel configure script now?\n\nThe Unix version is not compilable on its own, but I see no code in\nwin32.mak that refers to upper directories.\n\n> \n> > Is it crazy to\n> > put md5.c in interfaces/odbc and symlink it from there to backend/libpq\n> > and interfaces/libpq?\n> \n> YES, that's crazy. If you think ODBC should build separately, then give\n> it its own copy of md5.c. But please first check that your assumption\n> is still valid.\n\nAnd I will just throw a note at the top of each mentioning they should\nbe identical.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 25 Aug 2001 19:26:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: linking hba.c" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > In trying to decide how to keep ODBC compilable standalone, I can't\n> > > figure out how to get md5.c in there from backend/libpq.\n> >\n> > AFAIK ODBC hasn't been compilable standalone for awhile; doesn't it\n> > depend on our toplevel configure script now?\n> \n> The Unix version is not compilable on its own, but I see no code in\n> win32.mak that refers to upper directories.\n\nYes. 
\n \n> >\n> > > Is it crazy to\n> > > put md5.c in interfaces/odbc and symlink it from there to backend/libpq\n> > > and interfaces/libpq?\n> >\n> > YES, that's crazy. If you think ODBC should build separately, then give\n> > it its own copy of md5.c. But please first check that your assumption\n> > is still valid.\n> \n> And I will just throw a note at the top of each mentioning they should\n> be identical.\n\nIs it that easy to copy md5.c ?\nms5.c includes postgres.h and libpq/crypt.h.\nI prefer to have e.g. md5.dll.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 27 Aug 2001 10:54:58 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: linking hba.c" }, { "msg_contents": "> > > > Is it crazy to\n> > > > put md5.c in interfaces/odbc and symlink it from there to backend/libpq\n> > > > and interfaces/libpq?\n> > >\n> > > YES, that's crazy. If you think ODBC should build separately, then give\n> > > it its own copy of md5.c. But please first check that your assumption\n> > > is still valid.\n> > \n> > And I will just throw a note at the top of each mentioning they should\n> > be identical.\n> \n> Is it that easy to copy md5.c ?\n> ms5.c includes postgres.h and libpq/crypt.h.\n> I prefer to have e.g. md5.dll.\n\nThere is not much there. I can use a -Ddefine in ODBC and just\nconditionally define what I need when I am compiling in ODBC. Isn't\nworth a DLL.\n\nCan you help me add MD5 to ODBC? I am getting lost in the way they do\npasswords, and I don't even have a MSWin machine to test it once I am\ndone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 26 Aug 2001 22:45:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: linking hba.c" } ]
[ { "msg_contents": "How can i add foreign keys after the CREATE TABLE? Is there some\ncombination of other SQL commands that will do the trick?\n\nMy problem is;\n\na. that I have a circular reference between 3 tables\nb. that I must be able to reverse engineer the resulting references in\nthe shema by querying for foreign keys\n\nRight now I am calculating a dependency hierarchy and generating tables\nin the resulting order but this does not work for a circular ref.\n\nAside from the circular ref issues... i think it would be easier and\nsafer to be able to generate all of the tables in one pass (with primary\nkey defs) and then add the foreign keys in a second pass.\n\nAny help appreciated.\n\nPeter\n\n\n\n", "msg_date": "Sat, 25 Aug 2001 15:37:50 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "FOREIGN KEY after CREATE TABLE?" }, { "msg_contents": "On Sunday, 26. August 2001 00:37, Peter Harvey wrote:\n> How can i add foreign keys after the CREATE TABLE? Is there some\n> combination of other SQL commands that will do the trick?\n\nPeter,\n\ntry ALTER TABLE a_table ADD CONSTRAINT ... FOREIGN KEY ...\n\nRegards,\n-- \nChristof Glaser\n", "msg_date": "Sun, 26 Aug 2001 01:36:58 +0200", "msg_from": "Christof Glaser <scivi@web.de>", "msg_from_op": false, "msg_subject": "Re: FOREIGN KEY after CREATE TABLE?" }, { "msg_contents": "> > How can i add foreign keys after the CREATE TABLE? Is there some\n> > combination of other SQL commands that will do the trick?\n>\n> Peter,\n>\n> try ALTER TABLE a_table ADD CONSTRAINT ... FOREIGN KEY ...\n>\n\n<Grr> I hate asking stupid questions. I scanned the ALTER TABLE syntax\nand somehow overlooked it.\n\nThanks!\n\nPeter\n\n\n", "msg_date": "Sat, 25 Aug 2001 16:59:37 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "Re: FOREIGN KEY after CREATE TABLE?" } ]
[ { "msg_contents": "Anyone know of a good reference or set of examples clearly showing the usage\nof V1? I have a problem that I've posted on pgsql-general, but this list\nmight be more appropriate for.\n\nGeoff\n", "msg_date": "Sat, 25 Aug 2001 21:02:27 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "version 1 C-Language Functions documentation and example" }, { "msg_contents": "> Anyone know of a good reference or set of examples clearly showing the\nusage\n> of V1? I have a problem that I've posted on pgsql-general, but this list\n> might be more appropriate for.\n\nStart with:\n http://www.postgresql.org/idocs/index.php?xfunc-c.html\nThen look at the contrib directory in your source tree for more examples\n(assuming your source tree is >=7.1).\n\nHTH,\n-- Joe\n\n", "msg_date": "Sat, 25 Aug 2001 19:56:21 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: version 1 C-Language Functions documentation and example" } ]
[ { "msg_contents": "I tried this, but it severely lacks in clarity for someone just getting\nstarted in extending pgsql with c. My main gripe really is that while it\nshows how to write the functions (and I've already read the .h file for all\nthe available v1 features) it lacks in explaining what type is needed for\nthe create function (which is what I'm running squarely into). For example\n(in my case): I've written a function using PG_GETARGS_CSTRING so now what\nshould the param type be for the create function? char? varchar? text? I'm\nat the begining phase where it's confusing. I have also tried the contrib\ntree, but all the programs in there use version 0 coding and I wish to start\noff on the right foot with pgsql by using the latest method. I have tried\nsearching the archives of the mailing list on finding some explanation about\nwriting code, but all available information seems to be my usual problem\nwhen looking for info: tons of technical ref, but too little on getting\nstarted to understand the tech ref stuff. Anyone agree? disagree? know of\nsomething that I am not seeing?\n\nGeoff\n\n-----Original Message-----\nFrom: Joe Conway [mailto:joseph.conway@home.com]\nSent: Saturday, August 25, 2001 10:56 PM\nTo: Gowey, Geoffrey; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] version 1 C-Language Functions documentation and\nexample\n\n\n> Anyone know of a good reference or set of examples clearly showing the\nusage\n> of V1? 
I have a problem that I've posted on pgsql-general, but this list\n> might be more appropriate for.\n\nStart with:\n http://www.postgresql.org/idocs/index.php?xfunc-c.html\nThen look at the contrib directory in your source tree for more examples\n(assuming your source tree is >=7.1).\n\nHTH,\n-- Joe\n", "msg_date": "Sat, 25 Aug 2001 23:05:56 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "RE: version 1 C-Language Functions documentation and ex\n\tample" }, { "msg_contents": "\"Gowey, Geoffrey\" <ggowey@rxhope.com> writes:\n> ... I have also tried the contrib\n> tree, but all the programs in there use version 0 coding and I wish to start\n> off on the right foot with pgsql by using the latest method.\n\nThe contrib tree is not as well maintained as the main source code.\nForget contrib, look in src/backend/utils/adt/ for a function that does\nsomething similar to what you need to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Aug 2001 01:49:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: version 1 C-Language Functions documentation and ex ample " } ]
[ { "msg_contents": "I have, but while there are numerous usages of V1 style in the main code it\ndoes not show how the function is defined via create function. This is\nproblematic to say the least.\n\nGeoff\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Sunday, August 26, 2001 1:50 AM\nTo: Gowey, Geoffrey\nCc: 'Joe Conway'; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] version 1 C-Language Functions documentation and\nex ample \n\n\n\"Gowey, Geoffrey\" <ggowey@rxhope.com> writes:\n> ... I have also tried the contrib\n> tree, but all the programs in there use version 0 coding and I wish to\nstart\n> off on the right foot with pgsql by using the latest method.\n\nThe contrib tree is not as well maintained as the main source code.\nForget contrib, look in src/backend/utils/adt/ for a function that does\nsomething similar to what you need to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Aug 2001 01:50:57 -0400", "msg_from": "\"Gowey, Geoffrey\" <ggowey@rxhope.com>", "msg_from_op": true, "msg_subject": "RE: version 1 C-Language Functions documentation and ex\n\t ample" } ]
[ { "msg_contents": "In current sources, lock.c's LockDisable() function is called only\nfor bootstrapping or if the (useless) -L backend switch is used.\nI have verified that it's not needed for bootstrapping: initdb\nsucceeds just fine without it. Accordingly, I'm strongly tempted\nto remove the function, the switch, and the tests for LockingDisabled()\nin all the lock-related functions. Does anyone see a reason to\nkeep 'em?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Aug 2001 19:08:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Anyone see any value in LockDisable/LockingDisabled mechanism?" } ]
[ { "msg_contents": "I had upgraded yesterday and *THOUGHT* all was fine. \n\nForgot to pg_dump, so I restored my $PGLIB and bin directory, now get\nthe following when I try to pg_dump:\n\n$ pg_dump ler >z\nPassword: \npg_dump: query to get function name of oid - failed: ERROR: oidin:\nerror in \"-\": can't parse \"-\"\n$ \n\nAnd the debug query log:\n\nAug 26 19:10:05 lerami pg-prod[3861]: [1] DEBUG: connection: host=[local] user=ler database=ler\nAug 26 19:10:05 lerami pg-prod[3861]: [2] DEBUG: InitPostgres\nAug 26 19:10:05 lerami pg-prod[3861]: [3] DEBUG: StartTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [4] DEBUG: query: select getdatabaseencoding()\nAug 26 19:10:05 lerami pg-prod[3861]: [5] DEBUG: ProcessQuery\nAug 26 19:10:05 lerami pg-prod[3861]: [6] DEBUG: CommitTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [7] DEBUG: StartTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [8] DEBUG: query: SELECT version();\nAug 26 19:10:05 lerami pg-prod[3861]: [9] DEBUG: ProcessQuery\nAug 26 19:10:05 lerami pg-prod[3861]: [10] DEBUG: CommitTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [11] DEBUG: StartTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [12] DEBUG: query: begin\nAug 26 19:10:05 lerami pg-prod[3861]: [13] DEBUG: ProcessUtility: begin\nAug 26 19:10:05 lerami pg-prod[3861]: [14] DEBUG: CommitTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [15] DEBUG: StartTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [16] DEBUG: query: set transaction isolation level serializable\nAug 26 19:10:05 lerami pg-prod[3861]: [17] DEBUG: ProcessUtility: set transaction isolation level serializable\nAug 26 19:10:05 lerami pg-prod[3861]: [18] DEBUG: CommitTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [19] DEBUG: StartTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [20] DEBUG: query: SELECT datlastsysoid from pg_database where datname = 'ler'\nAug 26 19:10:05 lerami pg-prod[3861]: [21] DEBUG: 
ProcessQuery\nAug 26 19:10:05 lerami pg-prod[3861]: [22] DEBUG: CommitTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [23] DEBUG: StartTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [24] DEBUG: query: select (select usename from pg_user where datdba = usesysid) as dba from pg_database where datname = 'ler'\nAug 26 19:10:05 lerami pg-prod[3861]: [25] DEBUG: ProcessQuery\nAug 26 19:10:05 lerami pg-prod[3861]: [26] DEBUG: CommitTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [27] DEBUG: StartTransactionCommand\nAug 26 19:10:05 lerami pg-prod[3861]: [28-1] DEBUG: query: SELECT pg_type.oid, typowner, typname, typlen, typprtlen, typinput, typoutput, typreceive, typsend, typelem,\nAug 26 19:10:05 lerami pg-prod[3861]: [28-2] typdelim, typdefault, typrelid, typalign, typstorage, typbyval, typisdefined, (select usename from pg_user where typowner =\nAug 26 19:10:05 lerami pg-prod[3861]: [28-3] usesysid) as usename, format_type(pg_type.oid, NULL) as typedefn from pg_type\nAug 26 19:10:05 lerami pg-prod[3861]: [29] DEBUG: ProcessQuery\nAug 26 19:10:06 lerami pg-prod[3861]: [30] DEBUG: CommitTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [31] DEBUG: StartTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [32-1] DEBUG: query: SELECT pg_proc.oid, proname, prolang, pronargs, prorettype, proretset, proargtypes, prosrc, probin, (select\nAug 26 19:10:06 lerami pg-prod[3861]: [32-2] usename from pg_user where proowner = usesysid) as usename, proiscachable, proisstrict from pg_proc where pg_proc.oid >\nAug 26 19:10:06 lerami pg-prod[3861]: [32-3] '16554'::oid\nAug 26 19:10:06 lerami pg-prod[3861]: [33] DEBUG: ProcessQuery\nAug 26 19:10:06 lerami pg-prod[3861]: [34] DEBUG: CommitTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [35] DEBUG: StartTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [36-1] DEBUG: query: SELECT pg_aggregate.oid, aggname, aggtransfn, aggfinalfn, aggtranstype, aggbasetype, agginitval, 't'::boolean 
as\nAug 26 19:10:06 lerami pg-prod[3861]: [36-2] convertok, (select usename from pg_user where aggowner = usesysid) as usename from pg_aggregate\nAug 26 19:10:06 lerami pg-prod[3861]: [37] DEBUG: ProcessQuery\nAug 26 19:10:06 lerami pg-prod[3861]: [38] DEBUG: CommitTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [39] DEBUG: StartTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [40-1] DEBUG: query: SELECT pg_operator.oid, oprname, oprkind, oprcode, oprleft, oprright, oprcom, oprnegate, oprrest, oprjoin,\nAug 26 19:10:06 lerami pg-prod[3861]: [40-2] oprcanhash, oprlsortop, oprrsortop, (select usename from pg_user where oprowner = usesysid) as usename from pg_operator\nAug 26 19:10:06 lerami pg-prod[3861]: [41] DEBUG: ProcessQuery\nAug 26 19:10:06 lerami pg-prod[3861]: [42] DEBUG: CommitTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [43] DEBUG: StartTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [44-1] DEBUG: query: SELECT pg_class.oid, relname, relacl, relkind, (select usename from pg_user where relowner = usesysid) as\nAug 26 19:10:06 lerami pg-prod[3861]: [44-2] usename, relchecks, reltriggers, relhasindex, relhasoids from pg_class where relname !~ '^pg_' and relkind in ('r', 'S', 'v')\nAug 26 19:10:06 lerami pg-prod[3861]: [44-3] order by oid\nAug 26 19:10:06 lerami pg-prod[3861]: [45] DEBUG: ProcessQuery\nAug 26 19:10:06 lerami pg-prod[3861]: [46] DEBUG: CommitTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [47] DEBUG: StartTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [48] DEBUG: query: SELECT indexrelid FROM pg_index i WHERE i.indisprimary AND i.indrelid = 20947 \nAug 26 19:10:06 lerami pg-prod[3861]: [49] DEBUG: ProcessQuery\nAug 26 19:10:06 lerami pg-prod[3861]: [50] DEBUG: CommitTransactionCommand\nAug 26 19:10:06 lerami pg-prod[3861]: [51] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [448-2] relname = 'pg_class') and objsubid = 9\nAug 26 19:10:07 lerami pg-prod[3861]: 
[449] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [450] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [451] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [452-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20974 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [452-2] relname = 'pg_class') and objsubid = 10\nAug 26 19:10:07 lerami pg-prod[3861]: [453] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [454] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [455] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [456-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20974 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [456-2] relname = 'pg_class') and objsubid = 0\nAug 26 19:10:07 lerami pg-prod[3861]: [457] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [458] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [459] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [460-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [460-2] relname = 'pg_class') and objsubid = 1\nAug 26 19:10:07 lerami pg-prod[3861]: [461] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [462] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [463] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [464-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [464-2] relname = 'pg_class') and objsubid = 2\nAug 26 19:10:07 lerami pg-prod[3861]: [465] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [466] DEBUG: CommitTransactionCommand\nAug 26 
19:10:07 lerami pg-prod[3861]: [467] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [468-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [468-2] relname = 'pg_class') and objsubid = 3\nAug 26 19:10:07 lerami pg-prod[3861]: [469] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [470] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [471] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [472-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [472-2] relname = 'pg_class') and objsubid = 4\nAug 26 19:10:07 lerami pg-prod[3861]: [473] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [474] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [475] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [476-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [476-2] relname = 'pg_class') and objsubid = 5\nAug 26 19:10:07 lerami pg-prod[3861]: [477] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [478] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [479] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [480-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [480-2] relname = 'pg_class') and objsubid = 6\nAug 26 19:10:07 lerami pg-prod[3861]: [481] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [482] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [483] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: 
[484-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [484-2] relname = 'pg_class') and objsubid = 7\nAug 26 19:10:07 lerami pg-prod[3861]: [485] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [486] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [487] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [488-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [488-2] relname = 'pg_class') and objsubid = 8\nAug 26 19:10:07 lerami pg-prod[3861]: [489] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [490] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [491] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [492-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [492-2] relname = 'pg_class') and objsubid = 9\nAug 26 19:10:07 lerami pg-prod[3861]: [493] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [494] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [495] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [496-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [496-2] relname = 'pg_class') and objsubid = 10\nAug 26 19:10:07 lerami pg-prod[3861]: [497] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [498] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [499] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [500-1] DEBUG: query: SELECT description FROM pg_description WHERE objoid = 20976 and classoid = (SELECT oid 
FROM pg_class where\nAug 26 19:10:07 lerami pg-prod[3861]: [500-2] relname = 'pg_class') and objsubid = 0\nAug 26 19:10:07 lerami pg-prod[3861]: [501] DEBUG: ProcessQuery\nAug 26 19:10:07 lerami pg-prod[3861]: [502] DEBUG: CommitTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [503] DEBUG: StartTransactionCommand\nAug 26 19:10:07 lerami pg-prod[3861]: [504] DEBUG: query: SELECT proname from pg_proc where pg_proc.oid = '-'::oid\nAug 26 19:10:07 lerami pg-prod[3861]: [505] ERROR: oidin: error in \"-\": can't parse \"-\"\nAug 26 19:10:07 lerami pg-prod[3861]: [506] DEBUG: AbortCurrentTransaction\nAug 26 19:10:07 lerami pg-prod[3861]: [507] DEBUG: proc_exit(0)\nAug 26 19:10:07 lerami pg-prod[3861]: [508] DEBUG: shmem_exit(0)\nAug 26 19:10:07 lerami pg-prod[3861]: [509] DEBUG: exit(0)\n\nAny ideas on how I can get my data out? \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 26 Aug 2001 19:13:54 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "pg_dump failure, can't get data out..." }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I had upgraded yesterday and *THOUGHT* all was fine. \n> Forgot to pg_dump, so I restored my $PGLIB and bin directory, now get\n> the following when I try to pg_dump:\n\nUh ... you did what exactly? Upgraded from what to what? And what's\nthis about \"restoring the bin directory\"? We've forced initdb often\nenough in the past weeks that I'd not think the data directory would\nbe interchangeable across development versions more than a few days\napart...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Aug 2001 20:24:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump failure, can't get data out... 
" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010826 19:25]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > I had upgraded yesterday and *THOUGHT* all was fine. \n> > Forgot to pg_dump, so I restored my $PGLIB and bin directory, now get\n> > the following when I try to pg_dump:\n> \n> Uh ... you did what exactly? Upgraded from what to what? And what's\n> this about \"restoring the bin directory\"? We've forced initdb often\n> enough in the past weeks that I'd not think the data directory would\n> be interchangeable across development versions more than a few days\n> apart...\nI had dumped and initdb'd to yesterdays sources. I then ran for a\nwhile. My backup ran last nite/this am. \n\nI then cvs update'd today, and compiled/installed. Got the initdb\nwarning (missing the initdb forced, my fault, and stupid). \n\nI then restored my /usr/local/pgsql/bin and /usr/local/pgsql/lib\ndirectory from the tape cut last nite. \n\nI then get the posted error's. \n\n:-( \n\nLER\n\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 26 Aug 2001 19:27:24 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: pg_dump failure, can't get data out..." }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> pg_dump: query to get function name of oid - failed: ERROR: oidin:\n> error in \"-\": can't parse \"-\"\n\nActually, I'm seeing it here too ... seems to have been a side-effect\nof the change that I recently made to declare pg_index.indproc as\nregproc instead of plain Oid. pg_dump is expecting \"0\" for empty\nindproc but it's now getting \"-\". 
Hmm, maybe that change wasn't a\ngood idea; is it likely to break anything besides pg_dump?\n\nAnyway, the quick-hack answer is to change line 4374 in pg_dump.c from\n\n\t\tif (strcmp(indinfo[i].indproc, \"0\") == 0)\n\nto\n\n\t\tif (strcmp(indinfo[i].indproc, \"-\") == 0 ||\n\t\t strcmp(indinfo[i].indproc, \"0\") == 0)\n\nYou shouldn't need to revert your sources for this --- apply the patch,\ncompile a new pg_dump, and use it against your yesterday's server.\n\nBTW, I have to congratulate you on your bravery. I sure wouldn't keep\ndata I cared about in CVS tip ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Aug 2001 20:39:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump failure, can't get data out... " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010826 19:39]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > pg_dump: query to get function name of oid - failed: ERROR: oidin:\n> > error in \"-\": can't parse \"-\"\n> \n> Actually, I'm seeing it here too ... seems to have been a side-effect\n> of the change that I recently made to declare pg_index.indproc as\n> regproc instead of plain Oid. pg_dump is expecting \"0\" for empty\n> indproc but it's now getting \"-\". Hmm, maybe that change wasn't a\n> good idea; is it likely to break anything besides pg_dump?\n> \n> Anyway, the quick-hack answer is to change line 4374 in pg_dump.c from\n> \n> \t\tif (strcmp(indinfo[i].indproc, \"0\") == 0)\n> \n> to\n> \n> \t\tif (strcmp(indinfo[i].indproc, \"-\") == 0 ||\n> \t\t strcmp(indinfo[i].indproc, \"0\") == 0)\n> \n> You shouldn't need to revert your sources for this --- apply the patch,\n> compile a new pg_dump, and use it against your yesterday's server.\n> \n> BTW, I have to congratulate you on your bravery. I sure wouldn't keep\n> data I cared about in CVS tip ;-)\nIt's not totally irreplaceable, but... \n\nHey, I find bugs this way. \n\nThis fix gets the data out. 
\n\nNOW, do we need a regression test for pg_dump? \n\nLER\n\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 26 Aug 2001 19:45:21 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: pg_dump failure, can't get data out..." }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> NOW, do we need a regression test for pg_dump? \n\nWe always have and still do (isn't it on the TODO list?). Some Great\nBridge people have been poking at the problem but haven't yet come up\nwith a clean answer AFAIK. The obvious approach of dumping and\nreloading the regression database does not currently work well...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Aug 2001 20:52:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump failure, can't get data out... " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010826 19:52]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > NOW, do we need a regression test for pg_dump? \n> \n> We always have and still do (isn't it on the TODO list?). Some Great\n> Bridge people have been poking at the problem but haven't yet come up\n> with a clean answer AFAIK. The obvious approach of dumping and\n> reloading the regression database does not currently work well...\nI understand. Thanks for the quick looksee and finding this one. :-) \n\nI'll be more careful with making sure pg_dump works before losing\nupdates and saving the pg_dump output. 
\n\n\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 26 Aug 2001 19:54:22 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: pg_dump failure, can't get data out..." }, { "msg_contents": "Larry Rosenman writes:\n\n> pg_dump: query to get function name of oid - failed: ERROR: oidin:\n> error in \"-\": can't parse \"-\"\n\nIt's trying to dump a functional index but the function appears to be\nabsent. (Therefore the name comes out as '-'.) It's hard to tell which\nindex this is because the query pg_dump uses does not ORDER BY, but if in\ndoubt you can try the query in pg_dump.c:getIndexes() yourself -- it will\nbe the first index. Offhand I don't see a related change in pg_dump in\nrecent times, so it probably isn't necessarily an upgrade related issue,\nit might be an inconsistent schema.\n\nIt might be easiest to remove the index in question before dumping.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 27 Aug 2001 03:27:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump failure, can't get data out..." }, { "msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010826 20:23]:\n> Larry Rosenman writes:\n> \n> > pg_dump: query to get function name of oid - failed: ERROR: oidin:\n> > error in \"-\": can't parse \"-\"\n> \n> It's trying to dump a functional index but the function appears to be\n> absent. (Therefore the name comes out as '-'.) It's hard to tell which\n> index this is because the query pg_dump uses does not ORDER BY, but if in\n> doubt you can try the query in pg_dump.c:getIndexes() yourself -- it will\n> be the first index. 
Offhand I don't see a related change in pg_dump in\n> recent times, so it probably isn't necessarily an upgrade related issue,\n> it might be an inconsistent schema.\n> \n> It might be easiest to remove the index in question before dumping.\nseems to happen on ANY table/index. \n\nI.E. I couldn't dump ANY user DB's. \n\nLER\n\n\n\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 26 Aug 2001 21:02:32 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: pg_dump failure, can't get data out..." }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Offhand I don't see a related change in pg_dump in\n> recent times, so it probably isn't necessarily an upgrade related issue,\n> it might be an inconsistent schema.\n\nThe proximate cause is that I changed pg_index.indproc from plain \"oid\"\nto \"regproc\" a week ago, so the output format is different. It seemed\nlike a good idea at the time ...\n\nI have hacked pg_dump to force the output format back to oid, but I'm\nsorta thinking that the schema change was ill-advised because it's\nlikely to break other applications that look at indproc. Should we\nchange the column back to oid?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2001 00:16:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump failure, can't get data out... " } ]
[ { "msg_contents": "Here are a few dates of interest:\n\n\tMonday-Friday: Tom, Jan, and I will be at LinuxWorld in San\nFrancisco. Vadim will be at LinuxWorld too, but of course he lives in\nSan Francisco.\n\n\tBeta may start as soon as Saturday, September 1. I know Tom has\nmentioned it and no one has said anything contrary. This of course\ncould change.\n\n\tI will be on vacation in Lebanon/Syria from September 16 to\nOctober 17. I will be on-line 1-2 days a week from that location, so I\nwill be around, but not as frequently available.\n\n\tOSDN Database Summit is September 23-25 in Providence, Rhode\nIsland. Though I was supposed to attend, I will now not be able to. \nTom Lane will give my presentation. It was a very enjoyable event when\nI attended last year.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Aug 2001 00:01:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Upcoming events" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> ... Beta may start as soon as Saturday, September 1.\n\nMy guess is that end of *next* week would be more appropriate.\n\nPersonally I've finished all the major items I wanted to do for 7.2,\nbut there are still some little things that it'd be nice to clean up.\nAnd we have a number of patches to review/apply. \"Saturday\" means\n\"now\" as far as I'm concerned, since I won't be back from San Francisco\nuntil Sunday ... 
it'd be nice to have a few more work days before beta.\n\nDoes anyone else have work that they're trying to finish up before\nwe go beta?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2001 20:56:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Upcoming events " }, { "msg_contents": "> Does anyone else have work that they're trying to finish up before\n> we go beta?\n>\n\nWell, I did have a question that got lost in the details of the recent bytea\ndiscussion. Specifically it was this: is there a good reason that byteaout\noctal escapes all non-printable characters?\n\nISTM that if you are using bytea, it's because you need to store and\nmanipulate binary data, and therefore it ought to be the client's\nresponsibility to do any required escaping of the returned results. In the\nwork I'm doing with bytea presently, I either have to unescape it on the\nclient side, or use a binary cursor (and some app environments, like PHP,\ndon't currently give me the option of a binary cursor). Not that big of a\ndeal, but it doesn't seem right, and as you've seen by recent discussions it\nconfuses people.\n\nThe only reason I can think of for this behavior is that it makes the\nresults displayable in psql -- but again, I'd expect psql to deal with that,\nnot the backend.\n\nThoughts?\n\n-- Joe\n\n\n", "msg_date": "Mon, 27 Aug 2001 19:03:59 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Upcoming events " }, { "msg_contents": "Do we want ADD PRIMARY KEY?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Tuesday, 28 August 2001 8:56 AM\n> To: Bruce Momjian\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Upcoming events \n> \n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... 
Beta may start as soon as Saturday, September 1.\n> \n> My guess is that end of *next* week would be more appropriate.\n> \n> Personally I've finished all the major items I wanted to do for 7.2,\n> but there are still some little things that it'd be nice to clean up.\n> And we have a number of patches to review/apply. \"Saturday\" means\n> \"now\" as far as I'm concerned, since I won't be back from San Francisco\n> until Sunday ... it'd be nice to have a few more work days before beta.\n> \n> Does anyone else have work that they're trying to finish up before\n> we go beta?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n", "msg_date": "Tue, 28 Aug 2001 10:09:45 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: Upcoming events " }, { "msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n> ... is there a good reason that byteaout\n> octal escapes all non-printable characters?\n\nWell, AFAICS it *has to* escape nulls (zero bytes). 
Whether it escapes\nmore stuff is a matter of taste once you accept that.\n\nWhat we really need to have to make bytea more useful is direct read and\nwrite functions that don't require any escaping (a la large object\nlo_read/lo_write).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2001 22:21:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "bytea escaping" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Do we want ADD PRIMARY KEY?\n\nIf you can get it done in the next week or so ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2001 22:32:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Upcoming events " }, { "msg_contents": "\nYa, lets go with the 10th of Sept, which is a Monday, start of the week\nand all that, everyone has had a chance to relax once \"the kids\" are back\nin school and all that :)\n\nOn Mon, 27 Aug 2001, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... Beta may start as soon as Saturday, September 1.\n>\n> My guess is that end of *next* week would be more appropriate.\n>\n> Personally I've finished all the major items I wanted to do for 7.2,\n> but there are still some little things that it'd be nice to clean up.\n> And we have a number of patches to review/apply. \"Saturday\" means\n> \"now\" as far as I'm concerned, since I won't be back from San Francisco\n> until Sunday ... 
it'd be nice to have a few more work days before beta.\n>\n> Does anyone else have work that they're trying to finish up before\n> we go beta?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Mon, 27 Aug 2001 23:58:15 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Upcoming events " }, { "msg_contents": "Aiiye. I'm sending a _large_ (60k) patch to add 'select * from cursor foo'\ntonight. I'm hoping that it could possibly get included...\n\n-alex\n\nOn Mon, 27 Aug 2001, Tom Lane wrote:\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Do we want ADD PRIMARY KEY?\n> \n> If you can get it done in the next week or so ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n", "msg_date": "Tue, 28 Aug 2001 11:35:06 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "Re: Upcoming events " }, { "msg_contents": "> \n> Ya, lets go with the 10th of Sept, which is a Monday, start of the week\n> and all that, everyone has had a chance to relax once \"the kids\" are back\n> in school and all that :)\n> \n\nYes. I need to resolve all outstanding patches before we go beta and I\ncan use the extra time too. Can I suggest a pgindent run as soon as we\ngo beta? That way, we have maximum pgindent testing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Aug 2001 12:38:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Upcoming events" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Joe Conway\" <joseph.conway@home.com> writes:\n> > ... is there a good reason that byteaout\n> > octal escapes all non-printable characters?\n> \n> Well, AFAICS it *has to* escape nulls (zero bytes). Whether it escapes\n> more stuff is a matter of taste once you accept that.\n\noutput function seems to escape all bytes <=\\027 and >=\\177\n\n> What we really need to have to make bytea more useful is direct read and\n> write functions that don't require any escaping (a la large object\n> lo_read/lo_write).\n\nTwo intertwined things we are currently missing:\n\n1) a portable BINARY protocol (i _think_ that's what the typreceive and\ntypsend \nfields in pg_type are meant to implement - currently they are allways\nthe same as\ntypinput, typoutput)\n\nhannu=# select count(*) from pg_type where typreceive != typinput or\ntypsend != typoutput;\n count \n-------\n 0\n(1 row)\n\n2) a FE-BE protocol that allows first PREPARING a statement and then\nEXECUTEing \nit with args. Most DB's have it, SPI has it and both ODBC and JDBC have\nit. 
\nThis should use the above-mentioned protocol to send the arguments to\nexecute.\n\nhaving LO access to things is also nice, but it is independent of being\nable to \neasily store binary data in a database, especially if we claim PG to be\nan ORDBMS.\n\n-------------\nHannu\n", "msg_date": "Wed, 29 Aug 2001 12:04:30 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: bytea escaping" }, { "msg_contents": "> Does anyone else have work that they're trying to finish up before\n> we go beta?\n\nI've got some (relatively small) patches to support some ISO variations\nin date/time representation.\n\nI'm considering diving in and separating timestamp into two types,\ntimestamp and timestamptz supporting \"with and without\" time zones as\nhas been discussed off and on for some time. I want to review the SQL99\nspec a bit more (and get additional feedback on whether this is\ndesirable) before doing this though.\n\n - Thomas\n", "msg_date": "Wed, 29 Aug 2001 15:34:48 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Upcoming events" }, { "msg_contents": "\n> > Does anyone else have work that they're trying to finish up before\n> > we go beta?\n\n I'm fixing and add some features to \"to_char\" (new to_char(interval) ).\nMaybe I will finish it on this Friday.\n\n Karel\n\n--\n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n\n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 29 Aug 2001 18:19:30 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Re: Upcoming events" } ]
[ { "msg_contents": "Some people have suggested we develop a roadmap of future pgsql\nreleases. As you can see in this Mozilla article, having a roadmap just\nmeans you get to change it often: :-)\n\n\thttp://mozillaquest.com/Mozilla_News_01/Mozilla_RoadMap_17aug01_Story01.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Aug 2001 00:09:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Pgsql roadmap" } ]
[ { "msg_contents": "\nCurrent sources don't compile on AIX with xlc compiler because of the \ncombined (and inconsistent ? or compiler bug) use of extern and static \nfor the datetktbl in datetime.c.\n\nheader unconditionally has:\nextern datetkn datetktbl[];\n\nsource has:\nstatic datetkn datetktbl[] = {\n \nThe usual approach would be to avoid that line in the header if included\n\nfrom datetime.c, but I think it would be better to use a const in this\ncase.\n\nI think this is a general compatibility problem, and thus needs to be\nsolved.\n\nAndreas\n", "msg_date": "Mon, 27 Aug 2001 10:42:40 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "extern + static issue in datetime.c" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Current sources don't compile on AIX with xlc compiler because of the \n> combined (and inconsistent ? or compiler bug) use of extern and static \n> for the datetktbl in datetime.c.\n\nFixed. A pass with HP's compiler also showed up a static-vs-not-static\nconflict in network.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2001 16:05:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: extern + static issue in datetime.c " } ]
[ { "msg_contents": "Hi,\n\nI'm going to vacation and will be totally offline and\nTeodor is already enjoy Cyprus. He will be back\nseptember 8-9. I'll be back september 19.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 27 Aug 2001 14:24:04 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "GiST vacation" } ]
[ { "msg_contents": "I wrote:\n> Current sources don't compile on AIX with xlc compiler because of the \n> combined (and inconsistent ? or compiler bug) use of extern \n> and static \n> for the datetktbl in datetime.c.\n> \n> header unconditionally has:\n> extern datetkn datetktbl[];\n> \n> source has:\n> static datetkn datetktbl[] = {\n> \n> The usual approach would be to avoid that line in the header \n> if included\n> \n> from datetime.c, but I think it would be better to use a const in this\n> case.\n> \n> I think this is a general compatibility problem, and thus needs to be\n> solved.\n\nAttached is a patch that might be considered (remove static for these\nglobals).\n\nThanks\nAndreas", "msg_date": "Mon, 27 Aug 2001 17:25:43 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "RE: extern + static issue in datetime.c" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Attached is a patch that might be considered (remove static for these\n> globals).\n\nActually, they are not globals AFAICS, so removing the header extern\nseemed the more appropriate fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2001 16:27:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: extern + static issue in datetime.c " } ]
[ { "msg_contents": "I would like to learn more about Multi-Version Concurrency Control.\nOther than http://pgsql.profnet.pl/osdn/transactions.pdf, where can I\nfind more resources?\n\n-- \n\n\n\nCarfield Yim, visit my homepage at http://www.carfield.com.hk\n\n", "msg_date": "Tue, 28 Aug 2001 01:31:57 +0800 (HKT)", "msg_from": "Carfield Yim <mailing@desktop.carfield.com.hk>", "msg_from_op": true, "msg_subject": "Where can I learn more about Multi-Version Concurrency Control?" } ]
[ { "msg_contents": "Hi, I would like to start in PostgreSQL. I have never used it before;\nwhere should I find something for starting, something as a tutorial.\n\nThank you very much for your help.\n\n--\n________________________\nGeorgina Rodríguez Prado\nISC ITESM\nTel. [52] (7) 3-13-22-84\nMEXICO\n\n\n", "msg_date": "Mon, 27 Aug 2001 14:46:35 -0500", "msg_from": "Georgina =?iso-8859-1?Q?Rodr=EDguez?= <gina@apollo.up-link.net>", "msg_from_op": true, "msg_subject": "Hi" }, { "msg_contents": "\nYou will find a Tutorial in the postgres documentation. \nThe file postgres.tar.gz located in the 'doc' subdirectory of\nyour source directory contains the postgres documentation in html\nformat.\n\nArne \n\nGeorgina Rodríguez wrote:\n> \n> Hi, I would like to start in PostgreSQL. I have never used it before;\n> where should I find something for starting, something as a tutorial.\n> \n> Thank you very much for your help.\n> \n> --\n> ________________________\n> Georgina Rodríguez Prado\n> ISC ITESM\n> Tel. [52] (7) 3-13-22-84\n> MEXICO\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Wed, 29 Aug 2001 19:23:59 +0200", "msg_from": "Arne Weiner <aswr@gmx.de>", "msg_from_op": false, "msg_subject": "Re: Hi" } ]
[ { "msg_contents": "Today I did something I usually do about once per release cycle: try to\nbuild the system with HP's vendor cc, rather than gcc which I prefer.\nThis usually turns up some portability issues, and indeed I found some.\nOne that I'm not entirely sure about how to fix is that libpq++ no \nlonger builds at all:\n\naCC +z -I../../../src/interfaces/libpq -I../../../src/include -I/usr/local/include -c -o pgconnection.o pgconnection.cc\nError 56: \"pgconnection.cc\", line 20 # Namespaces are not yet implemented.\n using namespace std;\n ^^^^^^^^^^^^^^^^^^^^\n\nGiven that we have a HAVE_NAMESPACE_STD configure symbol, I do not\nunderstand why unconditional \"using\"s have been inserted into the\nlibpq++ files. Shouldn't these be protected by #ifdef\nHAVE_NAMESPACE_STD? Or is there a different fix that's more\nappropriate?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2001 16:16:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "libpq++ current sources don't compile with older C++ compilers" }, { "msg_contents": "On Mon, Aug 27, 2001 at 04:16:57PM -0400, Tom Lane wrote:\n> Today I did something I usually do about once per release cycle: try to\n> build the system with HP's vendor cc, rather than gcc which I prefer.\n> This usually turns up some portability issues, and indeed I found some.\n> One that I'm not entirely sure about how to fix is that libpq++ no \n> longer builds at all:\n> \n> aCC +z -I../../../src/interfaces/libpq -I../../../src/include -I/usr/local/include -c -o pgconnection.o pgconnection.cc\n> Error 56: \"pgconnection.cc\", line 20 # Namespaces are not yet implemented.\n> using namespace std;\n> ^^^^^^^^^^^^^^^^^^^^\n> \n> Given that we have a HAVE_NAMESPACE_STD configure symbol, I do not\n> understand why unconditional \"using\"s have been inserted into the\n> libpq++ files. Shouldn't these be protected by #ifdef\n> HAVE_NAMESPACE_STD? 
Or is there a different fix that's more\n> appropriate?\n\nWhat version of aCC are you using? Newer releases support -AA which\nprovide the std namespace. This, of course, doesn't answer your\nquestion.\n\n-- \nalbert chin (china@thewrittenword.com)\n", "msg_date": "Mon, 27 Aug 2001 21:16:50 -0500", "msg_from": "pgsql-hackers@thewrittenword.com", "msg_from_op": false, "msg_subject": "Re: libpq++ current sources don't compile with older C++ compilers" }, { "msg_contents": "pgsql-hackers@thewrittenword.com writes:\n> What version of aCC are you using?\n\n$ what /opt/aCC/bin/aCC\n/opt/aCC/bin/aCC:\n HP aC++ B3910B A.01.00\n HP ANSI C++ B3910B A.00.03\n /usr/lib/libc: $Revision: 76.3 $\n\nIt's whatever shipped with HPUX 10.20, AFAIR. For my purposes, the\nfact that it's not the latest and greatest is exactly the point ...\n\n> Newer releases support -AA which provide the std namespace. This, of\n> course, doesn't answer your question.\n\nNo, but it does point up the fact that we do not select -AA anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2001 23:34:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: libpq++ current sources don't compile with older C++ compilers " }, { "msg_contents": "\n\nTom, I am sure it is an oversight. 
Can you fix it?\n\n> Today I did something I usually do about once per release cycle: try to\n> build the system with HP's vendor cc, rather than gcc which I prefer.\n> This usually turns up some portability issues, and indeed I found some.\n> One that I'm not entirely sure about how to fix is that libpq++ no \n> longer builds at all:\n> \n> aCC +z -I../../../src/interfaces/libpq -I../../../src/include -I/usr/local/include -c -o pgconnection.o pgconnection.cc\n> Error 56: \"pgconnection.cc\", line 20 # Namespaces are not yet implemented.\n> using namespace std;\n> ^^^^^^^^^^^^^^^^^^^^\n> \n> Given that we have a HAVE_NAMESPACE_STD configure symbol, I do not\n> understand why unconditional \"using\"s have been inserted into the\n> libpq++ files. Shouldn't these be protected by #ifdef\n> HAVE_NAMESPACE_STD? Or is there a different fix that's more\n> appropriate?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Aug 2001 12:34:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq++ current sources don't compile with older C++ compilers" } ]
[ { "msg_contents": "> > Does anyone else have work that they're trying to finish up before\n> > we go beta?\n> >\n>\n> Well, I did have a question that got lost in the details of the recent\nbytea\n> discussion. Specifically it was this: is there a good reason that byteaout\n> octal escapes all non-printable characters?\n>\n> ISTM that if you are using bytea, it's because you need to store and\n> manipulate binary data, and therefore it ought to be the client's\n> responsibility to do any required escaping of the returned results. In the\n> work I'm doing with bytea presently, I either have to unescape it on the\n> client side, or use a binary cursor (and some app environments, like PHP,\n> don't currently give me the option of a binary cursor). Not that big of a\n> deal, but it doesn't seem right, and as you've seen by recent discussions\nit\n> confuses people.\n>\n> The only reason I can think of for this behavior is that it makes the\n> results displayable in psql -- but again, I'd expect psql to deal with\nthat,\n> not the backend.\n>\nI guess add pg_dump to that list -- any others? Do you think there is a lot\nof existing code?\n\n-- Joe\n\n\n", "msg_date": "Mon, 27 Aug 2001 19:20:33 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Re: Upcoming events " } ]
[ { "msg_contents": "Hello\n\nI am trying to get 7.1.3 to compile on IRIX 6.5.13 using GCC 3.0.1 ... I\nalways end up having this error:\n\ngcc -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -U_NO_XOPEN4 -c -o s_lock.o s_lock.c\ns_lock.c: In function `s_lock':\ns_lock.c:134: warning: passing arg 1 of pointer to function discards qualifiers from pointer target type\ns_lock.c: At top level:\ns_lock.c:234: warning: `tas_dummy' defined but not used\nas: Error: /var/tmp/ccLTwXmB.s, line 403: undefined assembler operation: .global\n .global tas\ngmake[4]: *** [s_lock.o] Error 1\ngmake[4]: Leaving directory `/usr/local/src/postgresql-7.1.3/src/backend/storage/buffer'\ngmake[3]: *** [buffer-recursive] Error 2\ngmake[3]: Leaving directory `/usr/local/src/postgresql-7.1.3/src/backend/storage'\ngmake[2]: *** [storage-recursive] Error 2\n\nI read in the archives that GCC was not very good to compile on IRIX, at\nthis stage I have no choice, we don't have MIPSPro licensed.\n\nFirst question: what can I do to solve this error?\nSecond question: is there a binary for IRIX 6.5 that I can download from\nsomewhere?\n\nI am trying to contact SGI to see if we can get a version of the compiler,\nonce that's done and we can compile, I'm willing to provide a binary version\nof pgsql compiled under IRIX.\n\nCheers\n\n/B\n\n--- Bruno Mattarollo <bruno@web1.greenpeace.org> ---\n SysAdmin & TechLead - Greenpeace International\n http://www.greenpeace.org/\n----------------------------------------------------\n\n\n", "msg_date": "Tue, 28 Aug 2001 10:58:32 +0200 (CEST)", "msg_from": "Bruno Mattarollo <bruno@web1.greenpeace.org>", "msg_from_op": true, "msg_subject": "7.1.3, IRIX 6.5 and gcc" }, { "msg_contents": "Bruno Mattarollo writes:\n\n> I am trying to get 7.1.3 to compile on IRIX 6.5.13 using GCC 3.0.1 ... 
I\n> always end up having this error:\n>\n> gcc -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../../src/include -U_NO_XOPEN4 -c -o s_lock.o s_lock.c s_lock.c:\n> In function `s_lock': s_lock.c:134: warning: passing arg 1 of pointer to\n> function discards qualifiers from pointer target type s_lock.c: At top\n> level: s_lock.c:234: warning: `tas_dummy' defined but not used as: Error:\n> /var/tmp/ccLTwXmB.s, line 403: undefined assembler operation: .global\n> .global tas gmake[4]: *** [s_lock.o] Error 1 gmake[4]: Leaving\n> directory `/usr/local/src/postgresql-7.1.3/src/backend/storage/buffer'\n> gmake[3]: *** [buffer-recursive] Error 2 gmake[3]: Leaving directory\n> `/usr/local/src/postgresql-7.1.3/src/backend/storage'\n> gmake[2]: *** [storage-recursive] Error 2\n\nGo into the file src/backend/storage/s_lock.c and change the line that\nlooks like\n\n#if defined(__mips__)\n\nto\n\n#if defined(__mips__) && !defined(__sgi)\n\n(This \"mips\" assembly code is there for the likes of NetBSD and Linux.)\n\nThis should allow for the compilation to continue.\n\n> I read in the archives that GCC was not very good to compile on IRIX, at\n> this stage I have no choice, we don't have MIPSPro licensed.\n\n\"not very good\" can also be read as \"won't work\", unless GCC 3 now does\nbetter.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 28 Aug 2001 11:51:30 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.1.3, IRIX 6.5 and gcc" }, { "msg_contents": "Hello\n\nThanks Peter! That solved the problem and I could get postgresql 7.1.3 to\ncompile. Actually it works :) It seems that for my needs now, this GCC is\nenough... At least I could compile postgresql, python, postfix and xemacs\nand some other GNU software ... 
:)\n\nThanks again!\n\n/B\n\nOn Tue, 28 Aug 2001, Peter Eisentraut wrote:\n\n> Go into the file src/backend/storage/s_lock.c and change the line that\n> looks like\n>\n> #if defined(__mips__)\n>\n> to\n>\n> #if defined(__mips__) && !defined(__sgi)\n>\n> (This \"mips\" assembly code is there for the likes of NetBSD and Linux.)\n>\n> This should allow for the compilation to continue.\n>\n> > I read in the archives that GCC was not very good to compile on IRIX, at\n> > this stage I have no choice, we don't have MIPSPro licensed.\n>\n> \"not very good\" can also be read as \"won't work\", unless GCC 3 now does\n> better.\n>\n>\n\n-- \n\n--- Bruno Mattarollo <bruno@web1.greenpeace.org> ---\n SysAdmin & TechLead - Greenpeace International\n http://www.greenpeace.org/\n----------------------------------------------------\n\n\n", "msg_date": "Tue, 28 Aug 2001 16:17:00 +0200 (CEST)", "msg_from": "Bruno Mattarollo <bruno@web1.greenpeace.org>", "msg_from_op": true, "msg_subject": "Re: 7.1.3, IRIX 6.5 and gcc" }, { "msg_contents": "Bruno Mattarollo writes:\n\n> Thanks Peter! That solved the problem and I could get postgresql 7.1.3 to\n> compile.\n\nOkay, the fix has been checked in for the next release.\n\n> Actually it works :) It seems that for my needs now, this GCC is\n> enough...\n\nI'm referring to the note at http://freeware.sgi.com/howto.html:\n\n * gcc vs. cc\n\n Code that runs fine when compiled with SGI cc\n and doesn't run when compiled with gcc might be\n calling one of the following functions:\n\n inet_ntoa, inet_lnaof, inet_netof, inet_makeaddr, semctl\n\n (there may be others). These are functions\n that get passed or return structs that are smaller than 16\n bytes but not 8 bytes long. gcc and SGI cc\n are incompatible in the way they pass these structs so\n compiling with gcc and linking with the SGI\n libc.so (which was compiled with the SGI cc) is likely to\n cause these problems. 
Note that this problem\n is pretty rare since such functions are not widely used.\n This may be considered a bug in gcc but is too involved to fix I'm told.\n\nPostgreSQL calls at least semctl(), but not necessarily during mere\n\"seeing if it works\". So be careful.\n\nCheck out the above site for more resources about open source software on\nIrix.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 28 Aug 2001 17:20:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.1.3, IRIX 6.5 and gcc" }, { "msg_contents": "bruno@web1.greenpeace.org (Bruno Mattarollo) wrote in message news:<Pine.LNX.4.33.0108281615390.1774-100000@web1.greenpeace.org>...\n> Hello\n> \n> Thanks Peter! That solved the problem and I could get postgresql 7.1.3 to\n> compile. Actually it works :) It seems that for my needs now, this GCC is\n> enough... At least I could compile postgresql, python, postfix and xemacs\n> and some other GNU software ... :)\n> \n> Thanks again!\n> \n> /B\n> \n\nI have compiled binaries for IRIX 6.5.12 using MIPSPro. I'd be happy\nto send them to anyone that wants them or upload them to the website\nif someone tells me where to place them.\n\n-Tony\n", "msg_date": "28 Aug 2001 10:30:04 -0700", "msg_from": "reina@nsi.edu (Tony Reina)", "msg_from_op": false, "msg_subject": "Re: 7.1.3, IRIX 6.5 and gcc" } ]
[ { "msg_contents": "Hi!\n\nPlease find attached some very simple encoders/decoders for bytea and base64.\nBytea encoder is very picky about what it leaves unescaped - basically the \nbase64\nchar set ;-)\n\nSince this seems to be a very poorly documented but much asked-for thing, I \nthought\nyou would maybe like to add this code to libpq (so that everyone benefits).\n\nI'm aware that function renames might be necessary, though.\nIf you like, I could make the code fit into libpq, and send diffs.\n\nAny comments/interests?\n\nGreetings,\n\tJoerg", "msg_date": "Tue, 28 Aug 2001 11:07:32 +0200", "msg_from": "Joerg Hessdoerfer <Joerg.Hessdoerfer@sea-gmbh.com>", "msg_from_op": true, "msg_subject": "Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "On Tue, Aug 28, 2001 at 11:07:32AM +0200, Joerg Hessdoerfer wrote:\n> Hi!\n> \n> Please find attached some very simple encoders/decoders for bytea and base64.\n> Bytea encoder is very picky about what it leaves unescaped - basically the \n> base64\n> char set ;-)\n> \n> Since this seems to be a very poorly documented but much asked-for thing, I \n> thought\n> you would maybe like to add this code to libpq (so that everyone benefits).\n> \n> I'm aware that function renames might be necessary, though.\n> If you like, I could make the code fit into libpq, and send diffs.\n> \n> Any comments/interests?\n\n What about implementing a base64 PostgreSQL datatype that externally uses base64 and\ninternally the same representation as bytea. It prevents FE and parser problems with\n\"bad\" chars and internally needs less storage space than text\nwith base64. Of course it doesn't solve the problem of encoding/decoding \ndata in your application to/from base64. Maybe implement for this\ndatatype casts to/from bytea too.\n\n SELECT my_bytea::base64 FROM foo;\n\n INSERT INTO foo (my_bytea) VALUES ('some_base64_string'::bytea);\n\n And you can still fetch all data directly in bytea by binary cursor. 
\n\n Comments?\n\n\t\tKarel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 28 Aug 2001 11:55:28 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "At 11:55 AM 28-08-2001 +0200, Karel Zak wrote:\n>\n> What about implementing a base64 PostgreSQL datatype that externally uses base64 and\n>internally the same representation as bytea. It prevents FE and parser problems with\n>\"bad\" chars and internally needs less storage space than text\n>with base64. Of course it doesn't solve the problem of encoding/decoding \n>data in your application to/from base64. Maybe implement for this\n>datatype casts to/from bytea too.\n>\n> SELECT my_bytea::base64 FROM foo;\n>\n> INSERT INTO foo (my_bytea) VALUES ('some_base64_string'::bytea);\n>\n> And you can still fetch all data directly in bytea by binary cursor. \n>\n> Comments?\n\nSounds good to me. Even better if the base64 parser is bulletproof and\ntolerant of junk. That way base64 email attachments may not even need to be\nprocessed much - just filter a bit and shove it in :).\n\nBut shouldn't there be a ::base64 somewhere in the insert statement?\n\nCheerio,\nLink.\n\n", "msg_date": "Wed, 29 Aug 2001 09:58:12 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "At 11:55 AM 28-08-2001 +0200, Karel Zak wrote:\n> What about implementing a base64 PostgreSQL datatype that externally uses base64 and\n>internally the same representation as bytea. It prevents FE and parser problems with\n\nAnother point:\n\nI have no problems with base64[1]. However I was thinking that it might be\nfar easier for the C/C++/Java (and other low level languages) bunch to do\nhexadecimal. e.g. zero zero for null, zero A for line feed. 
\n\nIt expands things in the input/output stream, but it might be worth some\nconsideration. Simplicity, cpu usage etc.\n\nCheerio,\nLink.\n\n[1] OK, I can't convert base64 to ASCII mentally yet. But I don't think\nthat should really be a factor.\n\n\n", "msg_date": "Wed, 29 Aug 2001 10:22:22 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "\nWhere did we leave this?\n\n> On Tue, Aug 28, 2001 at 11:07:32AM +0200, Joerg Hessdoerfer wrote:\n> > Hi!\n> > \n> > Please find attached some very simple encoders/decoders for bytea and base64.\n> > Bytea encoder is very picky about what it leaves unescaped - basically the \n> > base64\n> > char set ;-)\n> > \n> > Since this seems to be a very poorly documented but much asked-for thing, I \n> > thought\n> > you would maybe like to add this code to libpq (so that everyone benefits).\n> > \n> > I'm aware that function renames might be necessary, though.\n> > If you like, I could make the code fit into libpq, and send diffs.\n> > \n> > Any comments/interests?\n> \n> What about implementing a base64 PostgreSQL datatype that externally uses base64 and\n> internally the same representation as bytea. It prevents FE and parser problems with\n> \"bad\" chars and internally needs less storage space than text\n> with base64. Of course it doesn't solve the problem of encoding/decoding \n> data in your application to/from base64. Maybe implement for this\n> datatype casts to/from bytea too.\n> \n> SELECT my_bytea::base64 FROM foo;\n> \n> INSERT INTO foo (my_bytea) VALUES ('some_base64_string'::bytea);\n> \n> And you can still fetch all data directly in bytea by binary cursor. 
\n> \n> Comments?\n> \n> \t\tKarel\n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Sep 2001 20:16:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Where did we leave this?\n\nI don't think adding a datatype just to provide base64 encoding is\na wise approach. The overhead of a new datatype (in the sense of\nproviding operators/functions for it) will be much more than the\nbenefit. I think providing encode/decode functions is sufficient...\nand we have those already, don't we?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 20:48:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Where did we leave this?\n> \n> I don't think adding a datatype just to provide base64 encoding is\n> a wise approach. The overhead of a new datatype (in the sense of\n> providing operators/functions for it) will be much more than the\n> benefit. I think providing encode/decode functions is sufficient...\n> and we have those already, don't we?\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Sep 2001 22:14:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "> I don't think adding a datatype just to provide base64 encoding is\n> a wise approach. The overhead of a new datatype (in the sense of\n> providing operators/functions for it) will be much more than the\n> benefit. I think providing encode/decode functions is sufficient...\n> and we have those already, don't we?\n>\n\nIt might be nice to have a PQbyteaEscape or some such function available in\nthe libpq client library so that arbitrary binary could be escaped on the\nclient side and used in a sql statement. I actually wrote this already as an\naddition to the PHP PostgreSQL extension, but it would make more sense, now\nthat I think about it, for it to be in libpq and called from PHP (or\nwhatever). Comments?\n\nOn a related note, are there any other bytea functions we should have in the\nbackend before freezing for 7.2? I was thinking it would be nice to have a\nway to cast bytea into text and vice-versa, so that the normal text\nfunctions could be used for things like LIKE and concatenation. Any interest\nin this? If so, any guidance WRT how it should be implemented?\n\n-- Joe\n\n\n", "msg_date": "Mon, 3 Sep 2001 20:25:29 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> > I don't think adding a datatype just to provide base64 encoding is\n> > a wise approach. The overhead of a new datatype (in the sense of\n> > providing operators/functions for it) will be much more than the\n> > benefit. 
I think providing encode/decode functions is sufficient...\n> > and we have those already, don't we?\n> >\n> \n> It might be nice to have a PQbyteaEscape or some such function available in\n> the libpq client library so that arbitrary binary could be escaped on the\n> client side and used in a sql statement. I actually wrote this already as an\n> addition to the PHP PostgreSQL extension, but it would make more sense, now\n> that I think about it, for it to be in libpq and called from PHP (or\n> whatever). Comments?\n\nGood idea. I will commit the non-bytea escape in a day and you can base\na bytea one on that. You will have to pass in the length of the field\nbecause of course it is not null terminated.\n\n> On a related note, are there any other bytea functions we should have in the\n> backend before freezing for 7.2? I was thinking it would be nice to have a\n> way to cast bytea into text and vice-versa, so that the normal text\n> functions could be used for things like LIKE and concatenation. Any interest\n> in this? If so, any guidance WRT how it should be implemented?\n\nI can't see why you can't do that. The only problem is passing a \\0\n(null byte) back to the client.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 00:12:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n> I was thinking it would be nice to have a\n> way to cast bytea into text and vice-versa,\n\nHow will you handle a null byte in bytea data? 
Transforming it directly\ninto an embedded null in a text object is NOT an acceptable answer,\nbecause too many of the text functions will misbehave on such data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 00:45:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> > It might be nice to have a PQbyteaEscape or some such function available\nin\n> > the libpq client library so that arbitrary binary could be escaped on\nthe\n> > client side and used in a sql statement. I actually wrote this already\nas an\n> > addition to the PHP PostgreSQL extension, but it would make more sense,\nnow\n> > that I think about it, for it to be in libpq and called from PHP (or\n> > whatever). Comments?\n>\n> Good idea. I will commit the non-bytea escape in a day and you can base\n> a bytea one on that. You will have to pass in the length of the field\n> because of course it is not null terminated.\n\nOK.\n\n>\n> > On a related note, are there any other bytea functions we should have in\nthe\n> > backend before freezing for 7.2? I was thinking it would be nice to have\na\n> > way to cast bytea into text and vice-versa, so that the normal text\n> > functions could be used for things like LIKE and concatenation. Any\ninterest\n> > in this? If so, any guidance WRT how it should be implemented?\n>\n> I can't see why you can't do that. The only problem is passing a \\0\n> (null byte) back to the client.\n\nWell, ISTM the simplest (if not the most efficient) way to do bytea-to-text\nwould be a function that takes the escaped string value from byteaout, and\ncreates a text value directly from it. 
The only danger I can think of is\nthat very long strings might need to be truncated in length, since the\nescaped string could be significantly longer than the binary.\n\nText-to-bytea should be a straight copy, since anything that can be\nrepresented as text can also be represented as bytea.\n\nAny comments or concerns?\n\n-- Joe\n\n\n\n", "msg_date": "Mon, 3 Sep 2001 21:55:23 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "On Mon, Sep 03, 2001 at 08:48:22PM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Where did we leave this?\n> \n> I don't think adding a datatype just to provide base64 encoding is\n> a wise approach. The overhead of a new datatype (in the sense of\n> providing operators/functions for it) will be much more than the\n> benefit. I think providing encode/decode functions is sufficient...\n> and we have those already, don't we?\n\n Agreed too. But 1000 \"bad\" chars encoded by base64 vs. encoded by \nescape, which is longer and more expensive to transfer between FE \nand BE?\n\n A base64 problem is that it encodes all chars in the string, but in \nreal usage some data contains \"bad\" chars only occasionally. \n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 4 Sep 2001 10:11:45 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" 
}, { "msg_contents": "Joe Conway writes:\n\n> On a related note, are there any other bytea functions we should have in the\n> backend before freezing for 7.2?\n\nThe SQL standards has a lot of functions for BLOB...\n\n> I was thinking it would be nice to have a\n> way to cast bytea into text and vice-versa, so that the normal text\n> functions could be used for things like LIKE and concatenation.\n\nBetter write a native LIKE function for bytea, now that some parts are\nthreatening to make the text-LIKE function use the locale collating\nsequence. (Multibyte aware text could also have interesting effects.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 4 Sep 2001 12:25:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n> Well, ISTM the simplest (if not the most efficient) way to do bytea-to-text\n> would be a function that takes the escaped string value from byteaout, and\n> creates a text value directly from it. The only danger I can think of is\n> that very long strings might need to be truncated in length, since the\n> escaped string could be significantly longer than the binary.\n\n> Text-to-bytea should be a straight copy, since nothing that can be\n> represented as text cannot be represented as bytea.\n\nUgh ... if the conversion functions are not inverses then I think they\nlose much of their value. I could see doing either of these:\n\n1. Conversion functions based on byteaout/byteain.\n\n2. Bytea to text escapes *only* null bytes, text to bytea treats only\n\"\\0\" as an escape sequence.\n\nOr maybe both, with two pairs of conversion functions.\n\nIn any case, we have to decide whether these coercion functions should\nbe named after the types --- ie, should they be made invokable as\nimplicit coercions? 
I'm dubious that that's a good idea; if we do it\nthen all sorts of textual operations will suddenly be allowed for bytea\nwithout any explicit conversion, which is likely to do more harm than\ngood. The reason for having a separate bytea type is exactly so that\nyou *can't* apply text ops to it without thinking.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 09:50:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> > On a related note, are there any other bytea functions we should have in\nthe\n> > backend before freezing for 7.2?\n>\n> The SQL standards has a lot of functions for BLOB...\n>\n\nOK - thanks. I'll take a look.\n\n> > I was thinking it would be nice to have a\n> > way to cast bytea into text and vice-versa, so that the normal text\n> > functions could be used for things like LIKE and concatenation.\n>\n> Better write a native LIKE function for bytea, now that some parts are\n> threatening to make the text-LIKE function use the locale collating\n> sequence. (Multibyte aware text could also have interesting effects.)\n>\n\nSounds like good advice. I'll try to get both the cast functions and a\nnative bytea LIKE function done.\n\n-- Joe\n\n", "msg_date": "Tue, 4 Sep 2001 08:37:35 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> Ugh ... if the conversion functions are not inverses then I think they\n> lose much of their value. I could see doing either of these:\n>\n> 1. Conversion functions based on byteaout/byteain.\n>\n> 2. 
Bytea to text escapes *only* null bytes, text to bytea treats only\n> \"\\0\" as an escape sequence.\n>\n> Or maybe both, with two pairs of conversion functions.\n>\n> In any case, we have to decide whether these coercion functions should\n> be named after the types --- ie, should they be made invokable as\n> implicit coercions? I'm dubious that that's a good idea; if we do it\n> then all sorts of textual operations will suddenly be allowed for bytea\n> without any explicit conversion, which is likely to do more harm than\n> good. The reason for having a separate bytea type is exactly so that\n> you *can't* apply text ops to it without thinking.\n>\n> regards, tom lane\n\nYou're right, as usual (I was tired when I wrote this last night ;). But I\nthink we have to escape/unescape both null and '\\', don't we?\n\nI agree that it would be better to *not* allow implicit coercions. Given\nthat, any preferences on function names? Are text_to_bytea() and\nbytea_to_text() too ugly?\n\n-- Joe\n\n\n\n", "msg_date": "Tue, 4 Sep 2001 08:46:44 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n> You're right, as usual (I was tired when I wrote this last night ;). But I\n> think we have to escape/unescape both null and '\\', don't we?\n\nYeah, you're right. My turn to have not thought hard enough.\n\n> I agree that it would be better to *not* allow implicit coercions. Given\n> that, any preferences on function names? Are text_to_bytea() and\n> bytea_to_text() too ugly?\n\nThey're pretty ugly, but more importantly they're only suitable if we\nhave exactly one conversion function each way. If we have two, what\nwill we call the second one?\n\nI think it's okay to let the argument type be implicit in the function\nargument list. 
Something like text_escaped(bytea) and text_direct(bytea)\n(with inverses bytea_escaped(text) and bytea_direct(text)) might do.\nI'm not totally happy with \"direct\" to suggest minimum escaping, though.\nBetter ideas anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 12:01:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> You're right, as usual (I was tired when I wrote this last night ;). But I\n> think we have to escape/unescape both null and '\\', don't we?\n\nYes, I think backslashes need special escapes too.\n\nLet me ask a bigger question. We have the length of the text string in\nthe varlena header. Are we concerned about backend code not handling\nNULL in text fields, or frontend code returning strings with embedded\nnulls?\n\nI see problems in the text() functions for nulls, but is such a\nlimitation required for text types?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 12:44:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010904 12:01]:\n> They're pretty ugly, but more importantly they're only suitable if we\n> have exactly one conversion function each way. If we have two, what\n> will we call the second one?\n> \n> I think it's okay to let the argument type be implicit in the function\n> argument list. Something like text_escaped(bytea) and text_direct(bytea)\n> (with inverses bytea_escaped(text) and bytea_direct(text)) might do.\n> I'm not totally happy with \"direct\" to suggest minimum escaping, though.\n> Better ideas anyone?\nCooked vs raw? 
\n\nLER\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 4 Sep 2001 12:13:00 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Let me ask a bigger question. We have the length of the text string in\n> the varlena header. Are we concerned about backend code not handling\n> NULL in text fields, or frontend code returning strings with embedded\n> nulls?\n\nThe former.\n\n> I see problems in the text() functions for nulls, but is such a\n> limitation required for text types?\n\nUnless you want to re-implement strcoll() and friends from scratch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 13:33:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Let me ask a bigger question. We have the length of the text string in\n> > the varlena header. 
Are we concerned about backend code not handling\n> > NULL in text fields, or frontend code returning strings with embedded\n> > nulls?\n> \n> The former.\n> \n> > I see problems in the text() functions for nulls, but is such a\n> > limitation required for text types?\n> \n> Unless you want to re-implement strcoll() and friends from scratch.\n\nYes, I saw strcoll().\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 13:34:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Why not just stick these things into encode() and name them\n> \"my-cool-encoding\" or whatever.\n\nSounds good to me ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 18:56:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "Tom Lane writes:\n\n> > I agree that it would be better to *not* allow implicit coercions. Given\n> > that, any preferences on function names? Are text_to_bytea() and\n> > bytea_to_text() too ugly?\n>\n> They're pretty ugly, but more importantly they're only suitable if we\n> have exactly one conversion function each way. If we have two, what\n> will we call the second one?\n\nWhy not just stick these things into encode() and name them\n\"my-cool-encoding\" or whatever. 
There is no truly natural conversion\nbetween text and bytea, so encode/decode seem like the proper place.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 5 Sep 2001 00:58:21 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Why not just stick these things into encode() and name them\n> > \"my-cool-encoding\" or whatever.\n> \n> Sounds good to me ...\n> \n> regards, tom lane\n> \n\nSounds good to me too. Patch forthcoming . . .\n\n-- Joe\n\n", "msg_date": "Tue, 4 Sep 2001 16:09:17 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Bytea/Base64 encoders for libpq - interested? " }, { "msg_contents": "> > > I agree that it would be better to *not* allow implicit coercions.\nGiven\n> > > that, any preferences on function names? Are text_to_bytea() and\n> > > bytea_to_text() too ugly?\n> >\n> > They're pretty ugly, but more importantly they're only suitable if we\n> > have exactly one conversion function each way. If we have two, what\n> > will we call the second one?\n>\n> Why not just stick these things into encode() and name them\n> \"my-cool-encoding\" or whatever. There is no truly natural conversion\n> between text and bytea, so encode/decode seem like the proper place.\n>\n(I'm sending directly to Peter, Tom, and Bruce because you were all involved\nin this thread, and the list seems to be down)\n\nHere's a patch for bytea string functions. As discussed:\n\ntext encode(bytea, 'escape')\nbytea decode(text, 'escape')\n\nto allow conversion bytea-text/text-bytea conversion. 
Also implemented\n(SQL99 defines Binary Strings with all of these operators):\n\nbyteacat and \"||\" operator\nsubstring\ntrim (only did trim(bytea, bytea) since there is no default trim character\nfor binary per SQL99)\nlength (just aliased octet_length, which is correct for bytea, I think)\nposition\nlike and \"~~\" operator\nnot like and \"!~~\" operator\n\nI think that's it.\n\nPasses all regression tests. Based on the discussion, I did not create\nfunctions to allow casting text-to-bytea or bytea-to-text -- it sounded like\nwe just want people to use encode/decode. I'm still planning to write\nPQescapeBytea, but that will come later as a separate patch. One operator\ndefined by SQL99, but not implemented here (or for text datatype, that I\ncould see) is the \"overlay\" function (modifies string argument by replacing\na substring given start and length with a replacement string). It sounds\nuseful -- any interest?\n\nReview and comments much appreciated!\n\n-- Joe", "msg_date": "Wed, 5 Sep 2001 13:34:06 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Bytea string operator support" }, { "msg_contents": "On Wed, Sep 05, 2001 at 01:34:06PM -0700, Joe Conway wrote:\n> > Why not just stick these things into encode() and name them\n> > \"my-cool-encoding\" or whatever. There is no truly natural conversion\n> > between text and bytea, so encode/decode seem like the proper place.\n> \n> Here's a patch for bytea string functions. As discussed:\n> \n> text encode(bytea, 'escape')\n> bytea decode(text, 'escape')\n\nWhy are you using \\xxx encoding there? 
As the 'escape' encoding\nis supposed to be 'minimalistic' as it escapes only 2\nproblematic values, then IMHO it would be better to use\n\\0 and \\\\ as escapes - takes less room.\n\n\n-- \nmarko\n\n", "msg_date": "Thu, 6 Sep 2001 19:39:49 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Bytea string operator support" }, { "msg_contents": "> On Wed, Sep 05, 2001 at 01:34:06PM -0700, Joe Conway wrote:\n> > > Why not just stick these things into encode() and name them\n> > > \"my-cool-encoding\" or whatever. There is no truly natural conversion\n> > > between text and bytea, so encode/decode seem like the proper place.\n> > \n> > Here's a patch for bytea string functions. As discussed:\n> > \n> > text encode(bytea, 'escape')\n> > bytea decode(text, 'escape')\n> \n> Why are you using \\xxx encoding there? As the 'escape' encoding\n> is supposed to be 'minimalistic' as it escapes only 2\n> problematic values, then IMHO it would be better to use\n> \\0 and \\\\ as escapes - takes less room.\n\nAgreed, and I have documented this in the SGML pages. Knowing this,\nbytea becomes a much easier format to use.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Sep 2001 13:43:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bytea string operator support" }, { "msg_contents": "> > > Here's a patch for bytea string functions. As discussed:\n> > >\n> > > text encode(bytea, 'escape')\n> > > bytea decode(text, 'escape')\n> >\n> > Why are you using \\xxx encoding there? 
As the 'escape' encoding\n> > is supposed to be 'minimalistic' as it escapes only 2\n> > problematic values, then IMHO it would be better to use\n> > \\0 and \\\\ as escapes - takes less room.\n>\n> Agreed, and I have documented this in the SGML pages. Knowing this,\n> bytea becomes a much easier format to use.\n\nNo problem -- I kind of like the octal style better, but I can see your\npoint. I'll wait for awhile for more comments, and then send in a new patch.\n\n-- Joe\n\n\n", "msg_date": "Thu, 6 Sep 2001 11:08:26 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Bytea string operator support" }, { "msg_contents": "> > > > Here's a patch for bytea string functions. As discussed:\n> > > >\n> > > > text encode(bytea, 'escape')\n> > > > bytea decode(text, 'escape')\n> > >\n> > > Why are you using \\xxx encoding there? As the 'escape' encoding\n> > > is supposed to be 'minimalistic' as it escapes only 2\n> > > problematic values, then IMHO it would be better to use\n> > > \\0 and \\\\ as escapes - takes less room.\n> >\n> > Agreed, and I have documented this in the SGML pages. Knowing this,\n> > bytea becomes a much easier format to use.\n>\n> No problem -- I kind of like the octal style better, but I can see your\n> point. I'll wait for awhile for more comments, and then send in a new\npatch.\n\nHere's a revised patch. Changes:\n\n1. Now outputs '\\\\' instead of '\\134' when using encode(bytea, 'escape')\nNote that I ended up leaving \\0 as \\000 so that there are no ambiguities\nwhen decoding something like, for example, \\0123.\n\n2. Fixed bug in byteain which allowed input values which were not valid\noctals (e.g. \\789), to be parsed as if they were octals.\n\nJoe", "msg_date": "Thu, 6 Sep 2001 23:47:21 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: Bytea string operator support" } ]
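The 'escape' convention this thread settles on (backslash doubled to \\, NUL kept as the three-digit octal \000 so that a following digit cannot be misread, every other octet literal) can be modeled outside the backend. The following Python sketch is purely illustrative; it is not the C code in Joe's patch, and the function names are invented:

```python
def bytea_encode_escape(data: bytes) -> str:
    # Minimal escaping per the thread: only backslash and NUL are special.
    # NUL stays three-digit octal (\000) to avoid ambiguity when decoding
    # input like \0123.
    out = []
    for b in data:
        if b == 0:
            out.append("\\000")
        elif b == ord("\\"):
            out.append("\\\\")
        else:
            out.append(chr(b))
    return "".join(out)


def bytea_decode_escape(text: str) -> bytes:
    # Accept both \\ and any three-digit octal escape such as \134.
    out = bytearray()
    i = 0
    while i < len(text):
        if text[i] == "\\":
            if text[i + 1] == "\\":
                out.append(0x5C)  # a literal backslash
                i += 2
            else:
                out.append(int(text[i + 1:i + 4], 8))
                i += 4
        else:
            out.append(ord(text[i]))
            i += 1
    return bytes(out)
```

A production decoder would additionally reject malformed escapes such as \789, which is exactly the byteain fix described in Joe's revised patch.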
[ { "msg_contents": "Can somebody explain to me:\n\n> radius=# explain select count (radiuspk) from radius ;\n> NOTICE: QUERY PLAN:\n>\n> Aggregate (cost=12839.79..12839.79 rows=1 width=8)\n> -> Seq Scan on radius (cost=0.00..11843.43 rows=398543 width=8)\n>\n> EXPLAIN\n\n\nThis query answers me *instantly* after hitting return\n\n> radius=# select count (radiuspk) from radius ;\n> count\n> --------\n> 398543\n> (1 row)\n\nThis query takes about 3 seconds. But the query plan *already* knows the \nnumber of rows (\"rows=398543\"). So why does it take 3 seconds. Is my \nassumption correct that the optimiser still can be optimized a little? :-)\n\nReinoud (not that this is a real problem, just wondering)\n\n\n", "msg_date": "Tue, 28 Aug 2001 13:33:00 +0200 (CEST)", "msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>", "msg_from_op": true, "msg_subject": "performance question" }, { "msg_contents": "Reinoud van Leeuwen writes:\n\n> > radius=# select count (radiuspk) from radius ;\n> > count\n> > --------\n> > 398543\n> > (1 row)\n>\n> This query takes about 3 seconds. But the query plan *already* knows the\n> number of rows (\"rows=398543\").\n\nThis is only an estimate which is only updated by VACUUM. 
Presumably you\ndidn't add or remove any rows since your last VACUUM.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 28 Aug 2001 14:32:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: performance question" }, { "msg_contents": "On Tue, 28 Aug 2001, Reinoud van Leeuwen wrote:\n\n> Can somebody explain to me:\n> \n> > radius=# explain select count (radiuspk) from radius ;\n> > NOTICE: QUERY PLAN:\n> >\n> > Aggregate (cost=12839.79..12839.79 rows=1 width=8)\n> > -> Seq Scan on radius (cost=0.00..11843.43 rows=398543 width=8)\n> >\n> > EXPLAIN\n> \n> \n> This query answers me *instantly* after hitting return\n> \n> > radius=# select count (radiuspk) from radius ;\n> > count\n> > --------\n> > 398543\n> > (1 row)\n> \n> This query takes about 3 seconds. But the query plan *already* knows the \n> number of rows (\"rows=398543\"). So why does it take 3 seconds. Is my \n> assumption correct that the optimiser still can be optimized a little? :-)\n\nNot in this case. The row numbers from explain are just estimates \nfrom the last vacuum. 
As you modify the table, the estimated rows\nwill be off.\n\nFor example:\nsszabo=> create table a (a int);\nCREATE\nsszabo=> insert into a values (100);\nINSERT 808899 1\nsszabo=> insert into a values (101);\nINSERT 808900 1\nsszabo=> explain select count(a) from a;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=22.50..22.50 rows=1 width=4)\n -> Seq Scan on a (cost=0.00..20.00 rows=1000 width=4)\n\nEXPLAIN\nsszabo=> vacuum analyze a;\nVACUUM\nsszabo=> explain select count(a) from a;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=1.02..1.02 rows=1 width=4)\n -> Seq Scan on a (cost=0.00..1.02 rows=2 width=4)\n\nEXPLAIN\nsszabo=> insert into a values (102);\nINSERT 808902 1\nsszabo=> explain select count(a) from a;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=1.02..1.02 rows=1 width=4)\n -> Seq Scan on a (cost=0.00..1.02 rows=2 width=4)\n\nEXPLAIN\nsszabo=> vacuum analyze a;\nVACUUM\nsszabo=> explain select count(a) from a;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=1.04..1.04 rows=1 width=4)\n -> Seq Scan on a (cost=0.00..1.03 rows=3 width=4)\n\nEXPLAIN\n\n\n", "msg_date": "Tue, 28 Aug 2001 05:44:33 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: performance question" }, { "msg_contents": "> On Tue, 28 Aug 2001, Reinoud van Leeuwen wrote:\n> \n>> Can somebody explain to me:\n>> \n>> > radius=# explain select count (radiuspk) from radius ;\n>> > NOTICE: QUERY PLAN:\n>> >\n>> > Aggregate (cost=12839.79..12839.79 rows=1 width=8)\n>> > -> Seq Scan on radius (cost=0.00..11843.43 rows=398543 width=8)\n>> >\n>> > EXPLAIN\n>> \n>> \n>> This query answers me *instantly* after hitting return\n>> \n>> > radius=# select count (radiuspk) from radius ;\n>> > count\n>> > --------\n>> > 398543\n>> > (1 row)\n>> \n>> This query takes about 3 seconds. But the query plan *already* knows\n>> the number of rows (\"rows=398543\"). So why does it take 3 seconds. 
Is\n>> my assumption correct that the optimiser still can be optimized a\n>> little? :-)\n> \n> Not in this case. The row numbers from explain are just estimates \n> from the last vacuum. As you modify the table, the estimated rows will\n> be off.\n\nYes, I just found out that somebody else is running a script on our test \nserver that vacuums all databases each night. That explains a lot.\n\nThanx for thinking with me\n\nReinoud\n\n", "msg_date": "Tue, 28 Aug 2001 15:09:06 +0200 (CEST)", "msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: performance question" } ]
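The point the replies make, that EXPLAIN's rows= figure is a cached statistic refreshed only by VACUUM while count() must scan the live rows, can be modeled in a few lines. This is a toy Python illustration with invented names, not how the planner is actually implemented:

```python
class ToyTable:
    def __init__(self):
        self.rows = []
        self.reltuples = 1000  # planner's default guess before any VACUUM

    def insert(self, value):
        # Ordinary DML never touches the cached estimate.
        self.rows.append(value)

    def vacuum_analyze(self):
        # Only VACUUM refreshes the statistic the planner reports.
        self.reltuples = len(self.rows)

    def explain_estimate(self):
        return self.reltuples  # instant: a stored number, no scan

    def count(self):
        return sum(1 for _ in self.rows)  # always walks every live row
```

Replaying Stephan's session against this model gives rows=1000 before the first vacuum_analyze(), an accurate 2 right after one, and a stale 2 once a third row arrives; count() pays for a full scan every time, which is why the real query still took seconds even though the estimate happened to be exact.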
[ { "msg_contents": "If full SQL92 implementation of INTERVAL would be a welcome addition,\ncould it be added as a TODO item? I would like to work on it, since I\nwant to use some features that are not currently supported.\n\n\nSQL92 INTERVAL data type (also SQL99, I think):\n\n <interval type> ::= INTERVAL {{<start field> TO <end field>} |\n <single datetime field>}\n\n <start field> ::= <non-second datetime field>\n [(<interval leading field precision>)]\n\n <end field> ::= <non second datetime field> |\n SECOND [(<fractional seconds precision>)]\n\n <single datetime field> ::= <non-second datetime field>\n [(<interval leading field precision>)] |\n SECOND[(<interval leading field precision>\n [,<fractional seconds precision>])]\n\n <non-second datetime field> ::= YEAR | MONTH | DAY | HOUR | MINUTE\n\n 0 < <interval leading field precision> < implementation defined maximum\n (default is 2)\n\n 0 <= <fractional seconds precision> < 10\n (default is 6)\n\n INTERVALs may be defined by a range within either YEAR TO MONTH or\n DAY TO SECOND.\n\n\nINTERVAL literals are defined as:\n\n INTERVAL [+|-]'<value string>' <interval qualifier>\n\n <interval qualifier> ::= <start field> [TO <end field>]\n\nPart of this syntax is supported by the parser, but not consistently.\n\n\nValid SQL92 syntax that is not currently supported:\n\n junk=# SELECT INTERVAL '1990' YEAR(4);\n ERROR: parser: parse error at or near \"(\"\n junk=# select INTERVAL '1990' YEAR;\n ERROR: Bad interval external representation '1990'\n junk=# SELECT INTERVAL -'1-1' YEAR TO MONTH;\n ERROR: parser: parse error at or near \"YEAR\"\n junk=# SELECT INTERVAL +'100 0:0:0.1' DAY(3) TO SECOND;\n ERROR: parser: parse error at or near \"DAY\"\n junk=# SELECT INTERVAL +'100 0:0:0.1' DAY TO SECOND;\n ERROR: parser: parse error at or near \"DAY\"\n junk=# -- actually, it doesn't like the +\n junk=# SELECT INTERVAL '0:0:0.0:' HOUR TO SECOND(9);\n ERROR: parser: parse error at or near \"(\"\n junk=# SELECT INTERVAL 
'100000.001' SECOND(6,3);\n ERROR: parser: parse error at or near \"(\"\n junk=# SELECT INTERVAL '100000.001' SECOND; \n ?column? \n -------------------\n 1 day 03:46:40.00\n (1 row)\n\n junk=# -- should output '100000.001'\n junk=# SELECT INTERVAL -'10' MINUTE;\n ERROR: parser: parse error at or near \"MINUTE\"\n junk=# SELECT INTERVAL '1:1' HOUR(6) TO MINUTE;\n ERROR: parser: parse error at or near \"(\"\n\n\nValid interval value format not currently supported:\n\n year-month\n\n\nSince there are aspects of SQL92 interval representation that clash\nwith the current implementation, I would suggest that current\npractice be followed unless SQL92 syntax is used. So a field that\nis of type INTERVAL without qualification would continue to work\nas it does now (except that I would like to implement range checking).\n\nThe main difference would be in the output format. For a\nSQL92-compliant interval column, the output would be the appropriate\nparts of either\n\n year-month\n\nor\n\n day hour:minute:second.fractional_second\n\naccording to the field definition, without any words (i.e.: \"1 03:46:40.00\"\ninstead of \"1 day 03:46:40.00\", and \"3-5\" instead of \"3 years 5 mons\").\nAll parts within the range will be shown, even if they are trailing zeros.\n\nThe other difference would be that input values would be range-checked\nto see that they didn't exceed the possible range of the type; so\nthe range of INTERVAL HOUR(3) TO MINUTE would be 0 seconds to\n+|-999:59:59.999999 and inserting a value outside the range would be\nan error. Intervals of the current type also need range-checking:\n\njunk=# select interval '199999999 years';\n ?column? 
\n--------------------------\n -157913942 years -4 mons\n(1 row)\n\n\nWhat do you think?\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Use hospitality one to another without grudging.\" \n I Peter 4:9 \n\n\n", "msg_date": "Tue, 28 Aug 2001 13:50:57 +0100", "msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>", "msg_from_op": true, "msg_subject": "INTERVAL type: SQL92 implementation" }, { "msg_contents": "> If full SQL92 implementation of INTERVAL would be a welcome addition,\n> could it be added as a TODO item? I would like to work on it, since I\n> want to use some features that are not currently supported.\n...\n> Valid SQL92 syntax that is not currently supported:\n...\n> junk=# SELECT INTERVAL '1990' YEAR(4);\n...\n\nSo far, I've had shift/reduce troubles trying to have a trailing\nqualifier field like this.\n\n> junk=# SELECT INTERVAL -'1-1' YEAR TO MONTH;\n> ERROR: parser: parse error at or near \"YEAR\"\n\nA leading sign in front of a string-like field? Yuck.\n\n...\n> Valid interval value format not currently supported:\n> year-month\n\nI'll look at accepting this for the current INTERVAL type too.\n\n> Since there are aspects of SQL92 interval representation that clash\n> with the current implementation, I would suggest that current\n> practice be followed unless SQL92 syntax is used. So a field that\n> is of type INTERVAL without qualification would continue to work\n> as it does now (except that I would like to implement range checking).\n\nI like this point. Really, SQL99 intervals are a bit unwieldy, though\nthey do have \"extra features\" which someone might find useful.\n\n> The main difference would be in the output format...\n> ... 
parts of either year-month or\n> day hour:minute:second.fractional_second\n\nWe could probably support this format (now that you have described it to\nus) at least for the \"SQL\" datestyle even for the existing INTERVAL\ntype.\n\n> according to the field definition, without any words (i.e.: \"1 03:46:40.00\"\n> instead of \"1 day 03:46:40.00\", and \"3-5\" instead of \"3 years 5 mons\").\n> All parts within the range will be shown, even if they are trailing zeros.\n\nThis set of conventions might let the date/time parser do a complete\njob. I put in the \"days\" text label to reduce the ambiguity of a single,\nunlabeled integer.\n\n> What do you think?\n\nHave you gotten started yet? Finished yet?? ;)\n\n - Thomas\n", "msg_date": "Thu, 30 Aug 2001 14:53:25 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: INTERVAL type: SQL92 implementation" } ]
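Oliver's proposed year-month handling (literals like '1-1' for YEAR TO MONTH, word-free output such as '3-5', and range checks against the leading field precision) can be prototyped quickly. The following Python sketch is speculative, with invented names and defaults; it is not the grammar work Thomas describes:

```python
def parse_year_month(literal: str, leading_precision: int = 2) -> int:
    # '1-1' -> 13 months; a bare '1990' is taken as years only.
    sign = -1 if literal.lstrip().startswith("-") else 1
    body = literal.strip().lstrip("+-")
    years_s, _, months_s = body.partition("-")
    years = int(years_s)
    months = int(months_s) if months_s else 0
    if years >= 10 ** leading_precision:
        raise ValueError(f"year field exceeds YEAR({leading_precision})")
    if not 0 <= months < 12:
        raise ValueError("month field out of range")
    return sign * (12 * years + months)


def format_year_month(total_months: int) -> str:
    # SQL92-style output: '3-5' rather than '3 years 5 mons'.
    sign = "-" if total_months < 0 else ""
    years, months = divmod(abs(total_months), 12)
    return f"{sign}{years}-{months}"
```

With the default two-digit leading precision, parse_year_month('199999999') raises instead of silently wrapping, which is the range-checking behavior Oliver wants for interval '199999999 years'.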
[ { "msg_contents": "I have a fatal error message while connecting more than 32 users using\ncurrent:\n\n\tAug 29 11:25:18 srapc1474 postgres[12189]: [1] FATAL 1:\n\tProcGetNewSemIdAndNum: cannot allocate a free semaphore\n\nrather than a more informative message:\n\n        Sorry, too many clients already\n\nDoes anybody know what's happening?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 29 Aug 2001 11:31:54 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "cannot detect too many clients" }, { "msg_contents": "From that error below it looks to me that you will need to recompile your\nkernel (or whatever) to add support for more shared memory and more shared\nsemaphores.\n\nTry this URL:\n\nhttp://www.ca.postgresql.org/devel-corner/docs/admin/kernel-resources.html\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tatsuo Ishii\n> Sent: Wednesday, 29 August 2001 10:32 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] cannot detect too many clients\n>\n>\n> I have a fatal error message while connecting more than 32 users using\n> current:\n>\n> \tAug 29 11:25:18 srapc1474 postgres[12189]: [1] FATAL 1:\n> \tProcGetNewSemIdAndNum: cannot allocate a free semaphore\n>\n> rather than a more informative message:\n>\n>         Sorry, too many clients already\n>\n> Does anybody know what's happening?\n> --\n> Tatsuo Ishii\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n", "msg_date": "Wed, 29 Aug 2001 11:06:21 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: cannot detect too many clients" }, { "msg_contents": "> From that error below it looks to me that you will need to recompile your\n> kernel (or whatever) to add support for more shared memory and more 
shared\n> semaphores.\n\nMy point is 7.1 or earlier behaves differently from current. 7.1 or\nearlier's postmaster won't start if there is not enough shared memory\nand semaphores.\n\nI just want to know if that's an intended behavior.\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 29 Aug 2001 12:42:31 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "RE: cannot detect too many clients" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have a fatal error message while connecting more than 32 users using\n> current:\n\n> \tAug 29 11:25:18 srapc1474 postgres[12189]: [1] FATAL 1:\n> \tProcGetNewSemIdAndNum: cannot allocate a free semaphore\n\n> rather than a more informative message:\n\n>         Sorry, too many clients already\n\n> Does anybody know what's happening?\n\nUgh. I thought I'd tested that. The problem is that it runs out of\nsemaphores before noting that there's no free slots in the PROC array\n(which is where the \"more informative\" message was supposed to come\nout). Need to rearrange the order of operations, or some such.\nWill fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 14:22:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cannot detect too many clients " }, { "msg_contents": "I wrote:\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> I have a fatal error message while connecting more than 32 users using\n>> current:\n>> Aug 29 11:25:18 srapc1474 postgres[12189]: [1] FATAL 1:\n>> ProcGetNewSemIdAndNum: cannot allocate a free semaphore\n>> rather than a more informative message:\n>> Sorry, too many clients already\n>> Does anybody know what's happening?\n\n> Ugh. I thought I'd tested that.\n\nIndeed I had, but I'd used a low value of MaxBackends to test it.\nIf MaxBackends is a multiple of PROC_NSEMS_PER_SET (16) then too-many-\nbackends will be detected here, not later on when we try to insert our\nPROC entry into the sinval array. 
Whoops. Fix committed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Sep 2001 23:05:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cannot detect too many clients " } ]
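Tom's diagnosis, that semaphores are grabbed in sets of PROC_NSEMS_PER_SET (16) so a MaxBackends that is an exact multiple of 16 exhausts the semaphore pool and the backend-slot limit on the same connection attempt, comes down to check ordering. A simplified Python model of that ordering, not the actual proc.c logic (names invented):

```python
PROC_NSEMS_PER_SET = 16


def try_connect(active_backends: int, max_backends: int,
                check_slots_first: bool) -> str:
    # Semaphores are allocated a whole set at a time, so the pool is
    # MaxBackends rounded up to a multiple of 16.
    total_sems = -(-max_backends // PROC_NSEMS_PER_SET) * PROC_NSEMS_PER_SET
    if check_slots_first and active_backends >= max_backends:
        return "Sorry, too many clients already"  # the fixed ordering
    if active_backends >= total_sems:
        # When MaxBackends is a multiple of 16 this fires first:
        # the unhelpful FATAL message Tatsuo reported.
        return "ProcGetNewSemIdAndNum: cannot allocate a free semaphore"
    if active_backends >= max_backends:
        return "Sorry, too many clients already"
    return "ok"
```

With max_backends=32 the 33rd connection hits the semaphore error unless the slot check runs first; with max_backends=20 the spare semaphores in the last set mean the friendly message appears either way, which is why a low test value of MaxBackends hid the bug.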
[ { "msg_contents": "\nFinally figuring that enough is enough, I've been spending the past few\ndays working on the list archives ...\n\nI've reformatted, so far, the following lists into a cleaner format:\n\n\tpgsql-hackers\n\tpgsql-sql\n\tpgsql-bugs\n\tpgsql-general\n\tpgadmin-hackers\n\tpgadmin-support\n\nWith more lists to be worked on over the next few days ...\n\nMajor changes include the following:\n\n\tReplaced the wide banner in the center with two smaller, 120x120\n banners in the corners ...\n\n\tProvide a search facility incorporated into each page that\n searches the mhonarc pages themselves ...\n\n\tChange the colors to better match the main site ...\n\n\tMoved the archives to its own URL/Domain so that it is no\n\tlonger part of the general mirror of the site ...\n\nThere is still alot of work that I'm planning on doing on this, but I want\nto get all of the archives moved first ...\n\nTo access any of the archives that have been moved, go to:\n\n\thttp://archives.postgresql.org/<list>\n\nI've been modifying the links from the main web site for those lists that\nI've moved, as I've moved them, so getting there through 'normal channels'\nshould also work ...\n\nOnce finished, there will also be links to the OpenFTS search facility\nthat we have online, which uses a different way of formatting/displaying\nthe messages, so you will have the choice of using either ...\n\n", "msg_date": "Tue, 28 Aug 2001 23:44:16 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "List archives moved and cleaned up ..." } ]
[ { "msg_contents": "Attached patch fixes following problem: createlang.sh expects one handler\nfor each PL. If a handler function for a new PL is found in pg_languages,\nPL won't be created. So you need to have plperl_call_handler and\nplperlu_call_handler. This patch just does that.\n\n-alex", "msg_date": "Wed, 29 Aug 2001 00:09:52 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] tiny fix for plperlu" }, { "msg_contents": "Alex Pilosov writes:\n\n> Attached patch fixes following problem: createlang.sh expects one handler\n> for each PL. If a handler function for a new PL is found in pg_languages,\n> PL won't be created. So you need to have plperl_call_handler and\n> plperlu_call_handler. This patch just does that.\n\nThis is already fixed by allowing handlers to be shared.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 29 Aug 2001 13:21:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] tiny fix for plperlu" }, { "msg_contents": "Nevermind this patch then...\n\nOn Wed, 29 Aug 2001, Peter Eisentraut wrote:\n\n> Alex Pilosov writes:\n> \n> > Attached patch fixes following problem: createlang.sh expects one handler\n> > for each PL. If a handler function for a new PL is found in pg_languages,\n> > PL won't be created. So you need to have plperl_call_handler and\n> > plperlu_call_handler. This patch just does that.\n> \n> This is already fixed by allowing handlers to be shared.\n> \n> \n\n", "msg_date": "Wed, 29 Aug 2001 07:35:43 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] tiny fix for plperlu" } ]
[ { "msg_contents": "Attached patch does the above.\n\nNotes:\n1. Incompatible changes: CURSOR is now a keyword and may not be used as an\nidentifier (tablename, etc). Otherwise, we get shift-reduce conflicts in\ngrammar.\n\n2. Major changes: \n\na) RangeTblEntry (RTE for short) instead of having two possibilities,\nsubquery and non-subquery, now has a rtetype field which can be of 3\npossible states: RTE_RELATION, RTE_SUBSELECT, RTE_PORTAL). The\ntype-specific structures are unionized, so where you used to have\nrte->relid, now you must do rte->u.rel.relid.\n\nProper way to check what is the RTE type is now checking for rte->rtetype\ninstead of checking whether rte->subquery is null.\n\nb) Similarly, RelOptInfo now has a RelOptInfoType which is an enum with 4\nstates, REL_PLAIN,REL_SUBQUERY,REL_JOIN,REL_PORTAL. I did not do the\nunionization of type-specific structures. Maybe I should've if I'm going\nto get in a big change anyway.\n\nc) There's a function PortalRun which fetches N records from portal and\nsets atEnd/atStart values properly. It replaces code duplicated in 2\nplaces. \n\n\nHow to test: \n\ndeclare foo cursor for select * from pg_class;\n\nselect * from cursor foo;\n\nDocumentation updates will be forthcoming ASAP, I just wanted to get this\npatch in queue before the freeze. Or at least making sure someone could\nlook through this patch before freeze. 
:)\n\nNext patch will be one to support \"SELECT * FROM func(arg1,arg2)\" which\nwould work by creating first a special kind of portal for selection from a\nfunction and then setting query source to be that portal.\n\n-alex", "msg_date": "Wed, 29 Aug 2001 00:23:34 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] [LARGE] select * from cursor foo" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Attached patch does the above.\n> \n> Notes:\n> 1. Incompatible changes: CURSOR is now a keyword and may not be used as an\n> identifier (tablename, etc). Otherwise, we get shift-reduce conflicts in\n> grammar.\n> \n> 2. Major changes: \n> \n> a) RangeTblEntry (RTE for short) instead of having two possibilities,\n> subquery and non-subquery, now has a rtetype field which can be of 3\n> possible states: RTE_RELATION, RTE_SUBSELECT, RTE_PORTAL). The\n> type-specific structures are unionized, so where you used to have\n> rte->relid, now you must do rte->u.rel.relid.\n> \n> Proper way to check what is the RTE type is now checking for rte->rtetype\n> instead of checking whether rte->subquery is null.\n> \n> b) Similarly, RelOptInfo now has a RelOptInfoType which is an enum with 4\n> states, REL_PLAIN,REL_SUBQUERY,REL_JOIN,REL_PORTAL. I did not do the\n> unionization of type-specific structures. Maybe I should've if I'm going\n> to get in a big change anyway.\n> \n> c) There's a function PortalRun which fetches N records from portal and\n> sets atEnd/atStart values properly. It replaces code duplicated in 2\n> places. \n> \n> \n> How to test: \n> \n> declare foo cursor for select * from pg_class;\n> \n> select * from cursor foo;\n> \n> Documentation updates will be forthcoming ASAP, I just wanted to get this\n> patch in queue before the freeze. 
Or at least making sure someone could\n> look through this patch before freeze. :)\n> \n> Next patch will be one to support \"SELECT * FROM func(arg1,arg2)\" which\n> would work by creating first a special kind of portal for selection from a\n> function and then setting query source to be that portal.\n> \n> -alex\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 01:00:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo" }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n> Attached patch does the above.\n\nAlex, could we have this resubmitted in \"diff -c\" format? Plain diff\nformat is way too risky to apply.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Sep 2001 17:15:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo " }, { "msg_contents": "On Mon, 17 Sep 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Attached patch does the above.\n> \n> Alex, could we have this resubmitted in \"diff -c\" format? Plain diff\n> format is way too risky to apply.\nTom,\n\npostgresql.org cvsup repository is broken (and according to my records,\nbeen so for last 4 days at least). 
Unfortunately, I can't get my changes\nin correct format unless that gets fixed...So I guess that'll go in 7.3 :(\n\n(error I'm getting is this:\nServer message: Collection \"pgsql\" release \"cvs\" is not available here\nwhich leads me to assume a misconfiguration somewhere).\n\n\nThat is, unless you make an exception for a few days and cvsup gets fixed\nin meantime :P)\n\nCVS repository also seems broken right now, I'm unable to log in (cvs\nlogin: authorization failed: server cvs.postgresql.org rejected access to\n/home/projects/pgsql/cvsroot for user anoncvs) in both cvs.postgresql.org\nand anoncvs.postgresql.org with all possible anoncvs passwords (empty,\nanoncvs, postgresql). Or had the anoncvs password been changed, or I\nmissed an announcement?\n\n--\nAlex Pilosov | http://www.acedsl.com/home.html\nCTO - Acecape, Inc. | AceDSL:The best ADSL in the world\n325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)\nNew York, NY 10018 |\n\n\n\n", "msg_date": "Thu, 20 Sep 2001 22:08:51 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo " }, { "msg_contents": "On Thu, 20 Sep 2001, Alex Pilosov wrote:\n\n> CVS repository also seems broken right now, I'm unable to log in (cvs\n> login: authorization failed: server cvs.postgresql.org rejected access\n> to /home/projects/pgsql/cvsroot for user anoncvs) in both\n> cvs.postgresql.org and anoncvs.postgresql.org with all possible\n> anoncvs passwords (empty, anoncvs, postgresql). Or had the anoncvs\n> password been changed, or I missed an announcement?\nAugh. 
A minute later, I find the announcement, but I still have problem,\nafter logging in, I have: \n\ntick-bash# cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot co pgsql\ncvs server: Updating pgsql\ncvs server: failed to create lock directory for `/projects/cvsroot/pgsql'\n(/projects/cvsroot/pgsql/#cvs.lock): Permission denied\ncvs server: failed to obtain dir lock in repository\n`/projects/cvsroot/pgsql'\ncvs [server aborted]: read lock failed - giving up\n\nA minute later:\ncannot create_adm_p /tmp/cvs-serv8577/pgsql\n\n...:(\n\n", "msg_date": "Thu, 20 Sep 2001 22:12:23 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "CVS/CVSup problems (was Re: [PATCH] [LARGE] ) " }, { "msg_contents": "> postgresql.org cvsup repository is broken (and according to my records,\n> been so for last 4 days at least). Unfortunately, I can't get my changes\n> in correct format unless that gets fixed...So I guess that'll go in 7.3 :(\n\nThe beta release schedule is on hold until we can get access. I've\nwasted a few days tracking down a problem which turns out to have been\nfixed in cvs, but since I don't have CVSup access (just like you) I\ncan't get the updates. And I can't finish testing and can't finish\nupdating the regression suite, etc etc.\n\n - Thomas\n", "msg_date": "Fri, 21 Sep 2001 05:04:19 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo" }, { "msg_contents": "On Mon, 17 Sep 2001, Tom Lane wrote:\n\n> Alex Pilosov <alex@pilosoft.com> writes:\n> > Attached patch does the above.\n> \n> Alex, could we have this resubmitted in \"diff -c\" format? Plain diff\n> format is way too risky to apply.\n\nResubmitted. 
In case list server chokes on the attachment's size, it is\nalso available on www.formenos.org/pg/cursor.fix5.diff\n\n\nThanks\n-alex", "msg_date": "Fri, 21 Sep 2001 15:11:10 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo " }, { "msg_contents": "Alex Pilosov <alex@pilosoft.com> writes:\n>> Alex, could we have this resubmitted in \"diff -c\" format? Plain diff\n>> format is way too risky to apply.\n\n> Resubmitted. In case list server chokes on the attachment's size, it is\n> also available on www.formenos.org/pg/cursor.fix5.diff\n\nI've looked this over, and I think it's not mature enough to apply at\nthis late stage of the 7.2 cycle; we'd better hold it over for more work\nduring 7.3. Major problems:\n\n1. Insufficient defense against queries that outlive the cursors they\nselect from. For example, I could create a view that selects from a\ncursor. Yes, you check to see if the cursor name still exists ... but\nwhat if that name now refers to a new cursor that delivers a completely\ndifferent set of columns? Instant coredump.\n\n2. I don't understand the semantics of queries that read cursors\nthat've already had some rows fetched from them. Should we reset the\ncursor to the start, so that all its data is implicitly available?\nThat seems strange, but if we don't do it, I think the behavior will be\nquite unpredictable, since some join types are going to result in\nresetting and re-reading the cursor multiple times anyway. (You've\npunted on this issue by not implementing ExecPortalReScan, but that's\nnot acceptable for a production feature.)\n\n3. What does it mean to SELECT FOR UPDATE from a cursor? I don't think\nignoring the FOR UPDATE spec is acceptable; maybe we just have to raise\nan error.\n\n4. Complete lack of documentation, including lack of attention to\nupdating the code's internal documentation. 
(For instance, you seem\nto have changed some of the conventions described in nodes/relation.h,\nbut you didn't fix those comments.)\n\nThe work you have done so far on changing RTE etc looks good ... but\nI don't think the patch is ready to go. Nor am I comfortable with\napplying it now on the assumption that the problems can be fixed during\nbeta.\n\nI realize you originally sent this in a month ago, and perhaps you would\nhave had time to respond to these concerns if people had reviewed the\npatch promptly. For myself, I can only apologize for not getting to it\nsooner. I've had a few distractions over the past month :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2001 18:26:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo " }, { "msg_contents": "On Fri, 21 Sep 2001, Tom Lane wrote:\n> \n> I've looked this over, and I think it's not mature enough to apply at\n> this late stage of the 7.2 cycle; we'd better hold it over for more work\n> during 7.3. Major problems:\n\n> 1. Insufficient defense against queries that outlive the cursors they\n> select from. For example, I could create a view that selects from a\n> cursor. Yes, you check to see if the cursor name still exists ... but\n> what if that name now refers to a new cursor that delivers a completely\n> different set of columns? Instant coredump.\nGood point. I'll work on it.\n\n> 2. I don't understand the semantics of queries that read cursors\n> that've already had some rows fetched from them. Should we reset the\n> cursor to the start, so that all its data is implicitly available?\n> That seems strange, but if we don't do it, I think the behavior will be\n> quite unpredictable, since some join types are going to result in\n> resetting and re-reading the cursor multiple times anyway. 
(You've\n> punted on this issue by not implementing ExecPortalReScan, but that's\n> not acceptable for a production feature.)\nYeah, I couldn't figure out which option is worse, which is why I didn't\nimplement it. I think that rewinding the cursor on each query is better,\nbut I wanted to get comments first.\n\n> 3. What does it mean to SELECT FOR UPDATE from a cursor? I don't think\n> ignoring the FOR UPDATE spec is acceptable; maybe we just have to raise\n> an error.\nOK, giving an error makes sense.\n\n> 4. Complete lack of documentation, including lack of attention to\n> updating the code's internal documentation. (For instance, you seem\n> to have changed some of the conventions described in nodes/relation.h,\n> but you didn't fix those comments.)\nOK. \n\n> The work you have done so far on changing RTE etc looks good ... but\n> I don't think the patch is ready to go. Nor am I comfortable with\n> applying it now on the assumption that the problems can be fixed during\n> beta.\n\nIf you want to consider this argument: It won't break anything that's not\nusing the feature. It is needed (its not a 'fringe change' to benefit few)\n(well, at least I think so :). \n\nIt also is a base for my code to do 'select * from func(args)', which is\ndefinitely needed and base of many flames against postgres not having\n'real' stored procedures (ones that return sets). I was hoping to get the\nrest of it in 7.2 so these flames can be put to rest.\n\nChanges to core code are obvious, and all documentation can be taken care\nof during beta.\n\nBut I understand your apprehension...\n\n> I realize you originally sent this in a month ago, and perhaps you would\n> have had time to respond to these concerns if people had reviewed the\n> patch promptly. For myself, I can only apologize for not getting to it\n> sooner. 
I've had a few distractions over the past month :-(\nCan't blame you, completely understandable with GB situation...\n\n\n\n", "msg_date": "Fri, 21 Sep 2001 22:12:13 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo " }, { "msg_contents": "\nAlex, can I have an updated version of this patch for application to\n7.3?\n\n\n---------------------------------------------------------------------------\n\nAlex Pilosov wrote:\n> On Fri, 21 Sep 2001, Tom Lane wrote:\n> > \n> > I've looked this over, and I think it's not mature enough to apply at\n> > this late stage of the 7.2 cycle; we'd better hold it over for more work\n> > during 7.3. Major problems:\n> \n> > 1. Insufficient defense against queries that outlive the cursors they\n> > select from. For example, I could create a view that selects from a\n> > cursor. Yes, you check to see if the cursor name still exists ... but\n> > what if that name now refers to a new cursor that delivers a completely\n> > different set of columns? Instant coredump.\n> Good point. I'll work on it.\n> \n> > 2. I don't understand the semantics of queries that read cursors\n> > that've already had some rows fetched from them. Should we reset the\n> > cursor to the start, so that all its data is implicitly available?\n> > That seems strange, but if we don't do it, I think the behavior will be\n> > quite unpredictable, since some join types are going to result in\n> > resetting and re-reading the cursor multiple times anyway. (You've\n> > punted on this issue by not implementing ExecPortalReScan, but that's\n> > not acceptable for a production feature.)\n> Yeah, I couldn't figure out which option is worse, which is why I didn't\n> implement it. I think that rewinding the cursor on each query is better,\n> but I wanted to get comments first.\n> \n> > 3. What does it mean to SELECT FOR UPDATE from a cursor? 
I don't think\n> > ignoring the FOR UPDATE spec is acceptable; maybe we just have to raise\n> > an error.\n> OK, giving an error makes sense.\n> \n> > 4. Complete lack of documentation, including lack of attention to\n> > updating the code's internal documentation. (For instance, you seem\n> > to have changed some of the conventions described in nodes/relation.h,\n> > but you didn't fix those comments.)\n> OK. \n> \n> > The work you have done so far on changing RTE etc looks good ... but\n> > I don't think the patch is ready to go. Nor am I comfortable with\n> > applying it now on the assumption that the problems can be fixed during\n> > beta.\n> \n> If you want to consider this argument: It won't break anything that's not\n> using the feature. It is needed (its not a 'fringe change' to benefit few)\n> (well, at least I think so :). \n> \n> It also is a base for my code to do 'select * from func(args)', which is\n> definitely needed and base of many flames against postgres not having\n> 'real' stored procedures (ones that return sets). I was hoping to get the\n> rest of it in 7.2 so these flames can be put to rest.\n> \n> Changes to core code are obvious, and all documentation can be taken care\n> of during beta.\n> \n> But I understand your apprehension...\n> \n> > I realize you originally sent this in a month ago, and perhaps you would\n> > have had time to respond to these concerns if people had reviewed the\n> > patch promptly. For myself, I can only apologize for not getting to it\n> > sooner. 
I've had a few distractions over the past month :-(\n> Can't blame you, completely understandable with GB situation...\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 08:09:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo" }, { "msg_contents": "\nSorry, this needs more work. Please continue discussion on hackers.\n\n---------------------------------------------------------------------------\n\nAlex Pilosov wrote:\n> On Mon, 17 Sep 2001, Tom Lane wrote:\n> \n> > Alex Pilosov <alex@pilosoft.com> writes:\n> > > Attached patch does the above.\n> > \n> > Alex, could we have this resubmitted in \"diff -c\" format? Plain diff\n> > format is way too risky to apply.\n> \n> Resubmitted. In case list server chokes on the attachment's size, it is\n> also available on www.formenos.org/pg/cursor.fix5.diff\n> \n> \n> Thanks\n> -alex\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Feb 2002 22:54:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [LARGE] select * from cursor foo" } ]
[ { "msg_contents": "Hi,\n\nwe need to control database changes within BEFORE triggers.\nThere is no problem with triggers called by update, but there is\na problem with triggers called by insert.\n\nWe strongly need to know the oid of a newly inserted tuple. In this case, we \nuse tg_newtuple of the TriggerData structure passed to thetrigger function, \nand its t_data -> t_oid will have the value '0'.\n\nUsing BEFORE and AFTER triggers would make our lives much harder.\n\nIs there any way (even hack) to get the oid the newly inserted tuple will \nreceive?\n\nThank you very much,\n\nMarkus\n\n", "msg_date": "Wed, 29 Aug 2001 11:15:08 +0200", "msg_from": "Markus Wagner <wagner@imsd.uni-mainz.de>", "msg_from_op": true, "msg_subject": "getting the oid for a new tuple in a BEFORE trigger" }, { "msg_contents": "Mark,\n\nThe responses to your problem are gonna be kinda slow, as 2/3 of the\ncore team, and many of the users, are at the Expo right now (and if\nanyone on the list is in the SF Bay Area, join us! BOF session\ntonight!)\n\n> we need to control database changes within BEFORE triggers.\n> There is no problem with triggers called by update, but there is\n> a problem with triggers called by insert.\n\nWhat problem? \n\n> We strongly need to know the oid of a newly inserted tuple. In this\n> case, we \n> use tg_newtuple of the TriggerData structure passed to thetrigger\n> function, \n> and its t_data -> t_oid will have the value '0'.\n> \n> Using BEFORE and AFTER triggers would make our lives much harder.\n\nOnce again, why?\n\n> Is there any way (even hack) to get the oid the newly inserted tuple\n> will \n> receive?\n\nThis specific answer will have to come from someone else. \n\nI could suggest a couple of workarounds, if you gave a fuller\ndescription of exactly what you're trying to accomplish.\n\n-Josh Berkus\n\nP.S. Please do not cross-post to more than 2 lists at a time. 
The\nPostgres lists have been kept to a managable volume to date; let's keep\nit that way.\n\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco", "msg_date": "Wed, 29 Aug 2001 07:38:00 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: getting the oid for a new tuple in a BEFORE trigger" }, { "msg_contents": "On Wed, Aug 29, 2001 at 11:15:08AM +0200, Markus Wagner wrote:\n> Hi,\n> \n> we need to control database changes within BEFORE triggers.\n> There is no problem with triggers called by update, but there is\n> a problem with triggers called by insert.\n> \n> We strongly need to know the oid of a newly inserted tuple. In this case, we \n> use tg_newtuple of the TriggerData structure passed to thetrigger function, \n> and its t_data -> t_oid will have the value '0'.\n> \n> Using BEFORE and AFTER triggers would make our lives much harder.\n> \n> Is there any way (even hack) to get the oid the newly inserted tuple will \n> receive?\n> \n> Thank you very much,\n> \n> Markus\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> end of the original message\n\nRead section 24.2.5.4 'Obtaining other results status' of the Programmer's\nGuide. 
This is for the PL/pgSQL language, though.\n\n\tFrancesco Casadei\n", "msg_date": "Thu, 30 Aug 2001 11:05:23 +0200", "msg_from": "Francesco Casadei <f_casadei@libero.it>", "msg_from_op": false, "msg_subject": "Re: [SQL] getting the oid for a new tuple in a BEFORE trigger" }, { "msg_contents": "Markus Wagner writes:\n\n> we need to control database changes within BEFORE triggers.\n> There is no problem with triggers called by update, but there is\n> a problem with triggers called by insert.\n>\n> We strongly need to know the oid of a newly inserted tuple. In this case, we\n> use tg_newtuple of the TriggerData structure passed to thetrigger function,\n> and its t_data -> t_oid will have the value '0'.\n\nA less hackish way to do this might be using a sequence object for the\nprimary key and fetch the next sequence value manually.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 30 Aug 2001 15:28:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: getting the oid for a new tuple in a BEFORE trigger" } ]
[ { "msg_contents": "hi,\nIdon't know the best way but how about a quick insert in a temp table and \nadding 1 to the inserted oid column each time the trigger will run.!\nregards\nOmid\n>From: Markus Wagner <wagner@imsd.uni-mainz.de>\n>To: pgsql-general@postgresql.org, pgsql-sql@postgresql.org, \n>pgsql-hackers@postgresql.org\n>Subject: [SQL] getting the oid for a new tuple in a BEFORE trigger\n>Date: Wed, 29 Aug 2001 11:15:08 +0200\n>\n>Hi,\n>\n>we need to control database changes within BEFORE triggers.\n>There is no problem with triggers called by update, but there is\n>a problem with triggers called by insert.\n>\n>We strongly need to know the oid of a newly inserted tuple. In this case, \n>we\n>use tg_newtuple of the TriggerData structure passed to thetrigger function,\n>and its t_data -> t_oid will have the value '0'.\n>\n>Using BEFORE and AFTER triggers would make our lives much harder.\n>\n>Is there any way (even hack) to get the oid the newly inserted tuple will\n>receive?\n>\n>Thank you very much,\n>\n>Markus\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp\n\n", "msg_date": "Wed, 29 Aug 2001 10:11:16 ", "msg_from": "\"omid omoomi\" <oomoomi@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: [SQL] getting the oid for a new tuple in a BEFORE trigger" }, { "msg_contents": "On Wednesday 29 August 2001 20:05, you wrote:\n> hi,\n> Idon't know the best way but how about a quick insert in a temp table and\n> adding 1 to the inserted oid column each time the trigger will run.!\n> regards\n\nAs you don't know how many users access the server concurrently and in which \norder they will be 
served you will probably noty get what you want unless you \nwork on a single user single client-server connection all the time.\n\nHorst\n", "msg_date": "Thu, 30 Aug 2001 13:40:49 +1000", "msg_from": "Horst Herb <hherb@malleenet.net.au>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] getting the oid for a new tuple in a BEFORE trigger" } ]
[ { "msg_contents": "\n Hi,\n\nfrom current CVS:\n\n./configure --prefix=/usr/lib/postgresql \\\n --enable-multibyte \\\n --enable-locale \\\n --enable-nls \\\n --enable-recode \\\n --enable-unicode-conversion \\\n --with-openssl \\\n --enable-cassert\n\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../../src/include -c -o s_lock.o s_lock.c\ns_lock.c:234: warning: `tas_dummy' defined but not used\n/tmp/ccCLFPji.s: Assembler messages:\n/tmp/ccCLFPji.s:210: Error: unknown pseudo-op: '.frame'\n/tmp/ccCLFPji.s:211: Error: no such instruction: 'll $14,0($4)'\n/tmp/ccCLFPji.s:212: Error: suffix or operands invalid for `or'\n/tmp/ccCLFPji.s:213: Error: no such instruction: `sc $15,0($4)'\n/tmp/ccCLFPji.s:214: Error: no such instruction: `beq $15,0,fail'\n/tmp/ccCLFPji.s:215: Error: no such instruction: `bne $14,0,fail'\n/tmp/ccCLFPji.s:216: Error: no such instruction: `li $2,0'\n/tmp/ccCLFPji.s:217: Error: unknown pseudo-op: `.livereg'\n/tmp/ccCLFPji.s:218: Error: no such instruction: \n $31'\n/tmp/ccCLFPji.s:220: Error: no such instruction: `li $2,1'\n/tmp/ccCLFPji.s:221: Error: no such instruction: \n $31'\nmake[2]: *** [s_lock.o] Error 1\n\n$ gcc -v\nReading specs from /usr/lib/gcc-lib/i386-linux/2.95.4/specs\ngcc version 2.95.4 20010827 (Debian prerelease)\n\nLates change:\ndate: 2001/08/28 15:04:27; author: petere; state: Exp; lines: +4 -4\nChange the conditionals so the mips + gcc code here doesn't apply for Irix.\nThe code in s_lock.h should get used.\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 29 Aug 2001 13:03:25 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "compile error: c_lock assembler" }, { "msg_contents": "\n Hmm, I answer it oneself. 
A problem is in the file \nsrc/backend/storage/buffer/s_lock.c at line 231:\n\n#if defined(__mips__) || !defined(__sgi)\n ^^^^^\n\n this condition is true for all __mips__ or for *everythig* what is\nnot __sgi__ (so i386 too). It's the typical 'OR' problem :-)\n\n\t\t\tKarel\n\nOn Wed, Aug 29, 2001 at 01:03:25PM +0200, Karel Zak wrote:\n> \n> Hi,\n> \n> from current CVS:\n> \n> ./configure --prefix=/usr/lib/postgresql \\\n> --enable-multibyte \\\n> --enable-locale \\\n> --enable-nls \\\n> --enable-recode \\\n> --enable-unicode-conversion \\\n> --with-openssl \\\n> --enable-cassert\n> \n> gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../../src/include -c -o s_lock.o s_lock.c\n> s_lock.c:234: warning: `tas_dummy' defined but not used\n> /tmp/ccCLFPji.s: Assembler messages:\n> /tmp/ccCLFPji.s:210: Error: unknown pseudo-op: '.frame'\n> /tmp/ccCLFPji.s:211: Error: no such instruction: 'll $14,0($4)'\n> /tmp/ccCLFPji.s:212: Error: suffix or operands invalid for `or'\n> /tmp/ccCLFPji.s:213: Error: no such instruction: `sc $15,0($4)'\n> /tmp/ccCLFPji.s:214: Error: no such instruction: `beq $15,0,fail'\n> /tmp/ccCLFPji.s:215: Error: no such instruction: `bne $14,0,fail'\n> /tmp/ccCLFPji.s:216: Error: no such instruction: `li $2,0'\n> /tmp/ccCLFPji.s:217: Error: unknown pseudo-op: `.livereg'\n> /tmp/ccCLFPji.s:218: Error: no such instruction: \n> $31'\n> /tmp/ccCLFPji.s:220: Error: no such instruction: `li $2,1'\n> /tmp/ccCLFPji.s:221: Error: no such instruction: \n> $31'\n> make[2]: *** [s_lock.o] Error 1\n> \n> $ gcc -v\n> Reading specs from /usr/lib/gcc-lib/i386-linux/2.95.4/specs\n> gcc version 2.95.4 20010827 (Debian prerelease)\n> \n> Lates change:\n> date: 2001/08/28 15:04:27; author: petere; state: Exp; lines: +4 -4\n> Change the conditionals so the mips + gcc code here doesn't apply for Irix.\n> The code in s_lock.h should get used.\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, 
http://docs.linux.cz, http://mape.jcu.cz\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 29 Aug 2001 15:26:03 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: compile error: c_lock assembler" }, { "msg_contents": "Karel Zak writes:\n\n> Hmm, I answer it oneself. A problem is in the file\n> src/backend/storage/buffer/s_lock.c at line 231:\n>\n> #if defined(__mips__) || !defined(__sgi)\n> ^^^^^\n>\n> this condition is true for all __mips__ or for *everythig* what is\n> not __sgi__ (so i386 too). It's the typical 'OR' problem :-)\n\nYup. It's already fixed. Sorry.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Wed, 29 Aug 2001 15:55:05 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: compile error: c_lock assembler" } ]
[ { "msg_contents": "Patch not attached, apparently mail server rejects large files. \n\nPatch can be found on www.formenos.org/pg/cursor.fix1.diff\n\nNotes:\n1. Incompatible changes: CURSOR is now a keyword and may not be used as an\nidentifier (tablename, etc). Otherwise, we get shift-reduce conflicts in\ngrammar.\n\n2. Major changes: \n\na) RangeTblEntry (RTE for short) instead of having two possibilities,\nsubquery and non-subquery, now has a rtetype field which can be of 3\npossible states: RTE_RELATION, RTE_SUBSELECT, RTE_PORTAL). The\ntype-specific structures are unionized, so where you used to have\nrte->relid, now you must do rte->u.rel.relid.\n\nProper way to check what is the RTE type is now checking for rte->rtetype\ninstead of checking whether rte->subquery is null.\n\nb) Similarly, RelOptInfo now has a RelOptInfoType which is an enum with 4\nstates, REL_PLAIN,REL_SUBQUERY,REL_JOIN,REL_PORTAL. I did not do the\nunionization of type-specific structures. Maybe I should've if I'm going\nto get in a big change anyway.\n\nc) There's a function PortalRun which fetches N records from portal and\nsets atEnd/atStart values properly. It replaces code duplicated in 2\nplaces. \n\n\nHow to test: \n\ndeclare foo cursor for select * from pg_class;\n\nselect * from cursor foo;\n\nDocumentation updates will be forthcoming ASAP, I just wanted to get this\npatch in queue before the freeze. Or at least making sure someone could\nlook through this patch before freeze. :)\n\nNext patch will be one to support \"SELECT * FROM func(arg1,arg2)\" which\nwould work by creating first a special kind of portal for selection from a\nfunction and then setting query source to be that portal.\n\n-alex\n\n", "msg_date": "Wed, 29 Aug 2001 07:42:07 -0400 (EDT)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] [LARGE] select * from cursor foo " } ]
[ { "msg_contents": "Hello-\n\nI'm receiving the following error message:\nERROR: Relation \"log\" with OID 3694127 no longer exists\n\nWhen running the following script (a stripped-down version of what I'm \nreally doing, but it demostrates the behavior):\n\nCREATE TABLE log (logid int4);\nCREATE TABLE data (dataid int4);\nCREATE RULE r_delete_data\n AS ON DELETE TO data\n DO DELETE FROM log WHERE logid=OLD.dataid;\nCREATE RULE r_insert_data\n AS ON INSERT TO data\n DO INSERT INTO log (logid) VALUES (NEW.dataid);\nINSERT INTO data (dataid) VALUES (1);\nDROP TABLE log;\nCREATE TABLE log (logid int4);\nDELETE FROM data WHERE dataid=1;\n\nMy setup: linux v2.4.9, pg v7.1.2\n\nIs this a bug? If this is *not* a bug in postgres, then any suggestions \non the right way to go about rebuilding the \"log\" table above? In my \nreal application, I've dropped and added some columns to \"log\" (changes \nsuch that ALTER TABLE isn't able to help).\n\nTIA, Jon\n\n-- \n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Extracta Moléculas Naturais, Rio de Janeiro, Brasil\n email: lapham@extracta.com.br web: http://www.extracta.com.br/\n***-*--*----*-------*------------*--------------------*---------------\n\n", "msg_date": "Wed, 29 Aug 2001 14:27:49 -0300", "msg_from": "Jon Lapham <lapham@extracta.com.br>", "msg_from_op": true, "msg_subject": "Odd rule behavior?" } ]
[ { "msg_contents": "We're discussing an implementation of JDBC's\nStatement.executeBatch() on the pgsql-jdbc list. The idea is to\nsend multiple semicolon separated statements in one call to the\nbackend. The purpose of this feature is of course a performance\nimprovement, since it executes multiple (non-select) statements\nwith one round trip to the server.\n\nIf autocommit is _enabled_ and S1;S2;S3 is send to the database,\nwhat exactly is the behaviour of the backend? For example, what\nhappens if S1 succeeds, S2 fails and S3 would succeed?\n\nDoes autocommit apply to the statement list send in one call as\na whole? Or does it apply to individual statements?\n\nIf autocommit applies to the list as a whole I assume the\nfailure of S2 would cause the entire statement list to fail and\nbe rolled back.\n\nIf autocommit applies to individual statements in the list, I\nassume that S1 succeeds and is committed, S2 fails and is rolled\nback. But is S3 still executed? And what update count is\nreturned to the client in that case?\n\nI will summarize on pgsql-jdbc.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Thu, 30 Aug 2001 00:20:07 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": true, "msg_subject": "Multiple semicolon separated statements and autocommit" }, { "msg_contents": "Rene Pijlman writes:\n\n> If autocommit is _enabled_ and S1;S2;S3 is send to the database,\n> what exactly is the behaviour of the backend? For example, what\n> happens if S1 succeeds, S2 fails and S3 would succeed?\n\nAll three commands are executed in a single transaction. 
So if S2 fails,\nS3 would not be executed.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 30 Aug 2001 19:56:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Multiple semicolon separated statements and autocommit" }, { "msg_contents": "On Thu, 30 Aug 2001 19:56:53 +0200 (CEST), you wrote:\n>Rene Pijlman writes:\n>> If autocommit is _enabled_ and S1;S2;S3 is send to the database,\n>> what exactly is the behaviour of the backend? For example, what\n>> happens if S1 succeeds, S2 fails and S3 would succeed?\n>\n>All three commands are executed in a single transaction. So if S2 fails,\n>S3 would not be executed.\n\nAnd both S1 and S2 will be rolled back, as I understand it. \n\nThank you.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Thu, 30 Aug 2001 21:07:21 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": true, "msg_subject": "Re: Multiple semicolon separated statements and autocommit" }, { "msg_contents": "Are you sure? I thought all that autocommit meant was that a statement that\nis not enclosed within a begin/commit is automatically committed after it is\nrun. So, in the this case all three queries will be independent, unless the\nfirst statements is a 'begin;' and the last is a 'commit;'...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Peter Eisentraut\n> Sent: Friday, 31 August 2001 1:57 AM\n> To: Rene Pijlman\n> Cc: pgsql-hackers@postgresql.org; barry@xythos.com\n> Subject: Re: [HACKERS] Multiple semicolon separated statements and\n> autocommit\n>\n>\n> Rene Pijlman writes:\n>\n> > If autocommit is _enabled_ and S1;S2;S3 is send to the database,\n> > what exactly is the behaviour of the backend? 
For example, what\n> > happens if S1 succeeds, S2 fails and S3 would succeed?\n>\n> All three commands are executed in a single transaction. So if S2 fails,\n> S3 would not be executed.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n", "msg_date": "Fri, 31 Aug 2001 09:14:21 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Multiple semicolon separated statements and autocommit" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n\n> Are you sure? I thought all that autocommit meant was that a statement that\n> is not enclosed within a begin/commit is automatically committed after it is\n> run. So, in the this case all three queries will be independent, unless the\n> first statements is a 'begin;' and the last is a 'commit;'...\n\nWhat does the JDBC spec say about autocommit and ExecuteBatch()?\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n", "msg_date": "30 Aug 2001 21:35:42 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Multiple semicolon separated statements and autocommit" }, { "msg_contents": "Christopher Kings-Lynne writes:\n\n> Are you sure?\n\nYes.\n\n> I thought all that autocommit meant was that a statement that\n> is not enclosed within a begin/commit is automatically committed after it is\n> run. 
So, in the this case all three queries will be independent, unless the\n> first statements is a 'begin;' and the last is a 'commit;'...\n\nNot if they're sent in the same query string.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Fri, 31 Aug 2001 10:51:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Multiple semicolon separated statements and autocommit" }, { "msg_contents": "On 30 Aug 2001 21:35:42 -0400, you wrote:\n>\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Are you sure? I thought all that autocommit meant was that a statement that\n>> is not enclosed within a begin/commit is automatically committed after it is\n>> run. So, in the this case all three queries will be independent, unless the\n>> first statements is a 'begin;' and the last is a 'commit;'...\n>\n>What does the JDBC spec say about autocommit and ExecuteBatch()?\n\nNot much, but that's a different story. We're still in the\nprocess of figuring out how to implement this feature exactly.\nThat discussion is on the pgsql-jdbc list.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Fri, 31 Aug 2001 12:02:40 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": true, "msg_subject": "Re: Multiple semicolon separated statements and autocommit" } ]
[ { "msg_contents": "Hello-\n\nI'm receiving the following error message:\nERROR: Relation \"log\" with OID 3694127 no longer exists\n\nWhen running the following script (a stripped-down version of what I'm \nreally doing, but it demonstrates the behavior):\n\nCREATE TABLE log (logid int4);\nCREATE TABLE data (dataid int4);\nCREATE RULE r_delete_data\n AS ON DELETE TO data\n DO DELETE FROM log WHERE logid=OLD.dataid;\nCREATE RULE r_insert_data\n AS ON INSERT TO data\n DO INSERT INTO log (logid) VALUES (NEW.dataid);\nINSERT INTO data (dataid) VALUES (1);\nDROP TABLE log;\nCREATE TABLE log (logid int4);\nDELETE FROM data WHERE dataid=1;\n\nMy setup: linux v2.4.9, pg v7.1.2\n\nIs this a bug? If this is *not* a bug in postgres, then any suggestions \non the right way to go about rebuilding the \"log\" table above? In my \nreal application, I've dropped and added some columns to \"log\" (changes \nsuch that ALTER TABLE isn't able to help).\n\nTIA, Jon\n\n-- \n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Extracta Moléculas Naturais, Rio de Janeiro, Brasil\n email: lapham@extracta.com.br web: http://www.extracta.com.br/\n***-*--*----*-------*------------*--------------------*---------------\n\n\n", "msg_date": "Thu, 30 Aug 2001 00:11:11 -0300", "msg_from": "Jon Lapham <lapham@extracta.com.br>", "msg_from_op": true, "msg_subject": "Odd rule behavior?" 
}, { "msg_contents": "On Thu, 30 Aug 2001, Jon Lapham wrote:\n\n> I'm receiving the following error message:\n> ERROR: Relation \"log\" with OID 3694127 no longer exists\n> \n> When running the following script (a stripped-down version of what I'm \n> really doing, but it demonstrates the behavior):\n> \n> CREATE TABLE log (logid int4);\n> CREATE TABLE data (dataid int4);\n> CREATE RULE r_delete_data\n> AS ON DELETE TO data\n> DO DELETE FROM log WHERE logid=OLD.dataid;\n> CREATE RULE r_insert_data\n> AS ON INSERT TO data\n> DO INSERT INTO log (logid) VALUES (NEW.dataid);\n> INSERT INTO data (dataid) VALUES (1);\n> DROP TABLE log;\n> CREATE TABLE log (logid int4);\n> DELETE FROM data WHERE dataid=1;\n> \n> My setup: linux v2.4.9, pg v7.1.2\n> \n> Is this a bug? If this is *not* a bug in postgres, then any suggestions \n> on the right way to go about rebuilding the \"log\" table above? In my \n> real application, I've dropped and added some columns to \"log\" (changes \n> such that ALTER TABLE isn't able to help).\n\nWhen you drop and recreate the table, you'll need to drop and recreate the\nrules that reference it as well. There's been little to no consensus as to\nwhat the correct behavior should be in such cases: delete the rules when\na referenced table is removed, refuse to remove the table due to the\nreferences, try to reconnect by name (and somehow handle the possibility\nthat the reference is no longer valid, like say the lack of a logid column\nin your case)...\n\n", "msg_date": "Wed, 29 Aug 2001 21:05:55 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Odd rule behavior?" }, { "msg_contents": "Stephan Szabo wrote:\n> \n> When you drop and recreate the table, you'll need to drop and recreate the\n> rules that reference it as well. 
There's been little to no consensus as to\n> what the correct behavior should be in such cases: delete the rules when\n> a referenced table is removed, refuse to remove the table due to the\n> references, try to reconnect by name (and somehow handle the possibility\n> that the reference is no longer valid, like say the lack of a logid column\n> in your case)...\n> \n> \n\nOkay, thanks, dropping and recreating the rule worked.\n\nAfter thinking a bit about this, it would seem that the 'problem' is \nthat I was *able* to drop a table that had rules referencing it. Would \nit be possible to either not allow this, or to issue some type of \nwarning message? Otherwise, you go down the path of this (for me \nanyway) subtle problem.\n\nAlso, who should I send documentation patches to about this? I couldn't \nfind any mention of this issue in the \"create rule\" documentation (or am \nI looking in the wrong place?) or in \"Chapter 17: The Postgres Rule \nSystem\". Hmmm, further perusal shows that Jan Wieck is the author of \nthe Chapter 17 documentation, I guess I can send text to Jan.\n\n-- \n\n-**-*-*---*-*---*-*---*-----*-*-----*---*-*---*-----*-----*-*-----*---\n Jon Lapham\n Extracta Moléculas Naturais, Rio de Janeiro, Brasil\n email: lapham@extracta.com.br web: http://www.extracta.com.br/\n***-*--*----*-------*------------*--------------------*---------------\n\n", "msg_date": "Thu, 30 Aug 2001 09:39:56 -0300", "msg_from": "Jon Lapham <lapham@extracta.com.br>", "msg_from_op": true, "msg_subject": "Re: Odd rule behavior?" }, { "msg_contents": "Jon Lapham writes:\n\n> I'm receiving the following error message:\n> ERROR: Relation \"log\" with OID 3694127 no longer exists\n\nAs a general rule, this won't work in PostgreSQL:\n\nCREATE TABLE foo (...);\nCREATE RULE bar ... ON foo ...; # view, trigger, etc.\nDROP TABLE foo (...);\nCREATE TABLE foo (...);\n\nThe rule (view, trigger) references the table by oid, not by name. (This\nis a good thing. 
Consider what happens when the newly created table has a\ntotally different structure.) The correct fix would be to prevent the\nDROP TABLE or drop the rule with it, but it hasn't been done yet.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 30 Aug 2001 15:26:30 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Odd rule behavior?" }, { "msg_contents": "On Thu, 30 Aug 2001, Jon Lapham wrote:\n\n> Okay, thanks, dropping and recreating the rule worked.\n> \n> After thinking a bit about this, it would seem that the 'problem' is \n> that I was *able* to drop a table that had rules referencing it. Would \n> it be possible to either not allow this, or to issue some type of \n> warning message? Otherwise, you go down the path of this (for me \n> anyway) subtle problem.\n\nThe problem is right now we don't keep track of that sort of information\nin any really usable way (apart from scanning all objects that might refer\nto an oid). There've been discussions on -hackers in the past about this\nand it should be on the todo list.\n\n> Also, who should I send documentation patches to about this? I couldn't \n> find any mention of this issue in the \"create rule\" documentation (or am \n> I looking in the wrong place?) or in \"Chapter 17: The Postgres Rule \n> System\". Hmmm, further perusal shows that Jan Weick is the author of \n> the Chapter 17 documentation, I guess I can send text to Jan.\n\nYou might as well send patches to pgsql-patches and let everyone see them.\n\n", "msg_date": "Thu, 30 Aug 2001 08:57:00 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Odd rule behavior?" } ]
[ { "msg_contents": "> > > >For bytea, follow this rule: to escape a null character, use\nthis:\n> > > >'\\\\0'. To escape a backslash, use this: '\\\\\\\\'.\n\nCan anybody explain in technical terms why this is implemented \nso inconveniently ?\n\nSince bytea is probably not very common among users yet\nwe could imho still change it to not do double escapes.\n\nImho we need to decide where to do the escaping,\neither in the parser or in the input functions.\n\nI think actually the backend parser has no business changing\nconstants, he is imho only allowed to parse it, so he knows \nwhere a constant begins, and where it ends. \n\nAndreas\n", "msg_date": "Thu, 30 Aug 2001 09:33:15 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "RE: Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > > > >For bytea, follow this rule: to escape a null character, use\n> this:\n> > > > >'\\\\0'. 
To escape a backslash, use this: '\\\\\\\\'.\n> \n> Can anybody explain in technical terms why this is implemented\n> so inconveniently ?\n\nI think that this has to do with making textin and textout behave \nsymmetrically, and the requirement that textout must produce a \nvalid C-string for ASCII transfer format.\n\n> Since bytea is probably not very common among users yet\n> we could imho still change it to not do double escapes.\n\nBut how ?\n\n> Imho we need to decide where to do the escaping,\n> either in the parser or in the input functions.\n\nIt would be probably hard to make the parser to _not_ unescape some \ntypes, as it does not yet know it\n\n> I think actually the backend parser has no business changing\n> constants, he is imho only allowed to parse it, so he knows\n> where a constant begins, and where it ends.\n\nIf it is any consolation then you have to write the insert of \na single \\ from shell command so:\n\n> psql -c \"insert into t values('\\\\\\\\\\\\\\\\')\"\n\n;)\n\n------------------\nHannu\n", "msg_date": "Thu, 30 Aug 2001 16:32:18 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Re: Toast,bytea, Text -blob all confusing" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> I think actually the backend parser has no business changing\n> constants, he is imho only allowed to parse it, so he knows \n> where a constant begins, and where it ends. \n\nHow do you propose to handle embedded quote marks in literals,\nif there is no parser-level escape convention?\n\nDon't suggest a type-specific escape convention; at the time the\nparser runs, it's impossible to know what type the literal will\nturn out to be.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 19:08:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Toast,bytea, Text -blob all confusing " } ]
[ { "msg_contents": "\nIt's been much much too long since I've upgraded Majordomo2 on the server,\nso this is the last email I'm sending out prior to upgrading her today ...\nif anyone notices the lists go suddenly quiet, or the way it works\nchanging, please let me know ...\n\nMy main worry is that in the past 6+ months, some of the defaults might\nhave been reversed, so that they default to off instead of on, or vice\nversa ... just a heads up so that ppl are watching for it ...\n\n\n\n", "msg_date": "Thu, 30 Aug 2001 12:25:10 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Majordomo being upgraded ..." }, { "msg_contents": "> It's been much much too long since I've upgraded Majordomo2 on the server,\n> so this is the last email I'm sending out prior to upgrading her today ...\n> if anyone notices the lists go suddenly quiet, or the way it works\n> changing, please let me know ...\n> \n> My main worry is that in the past 6+ months, some of the defaults might\n> have been reversed, so that they default to off instead of on, or vice\n> versa ... just a heads up so that ppl are watching for it ...\n\nI have not gotten back mails from pgsql-committers. I'm sure I have \ncommitted to the CVS repository yesterday.\n\ncvs log queries.sgml \n[snip]\nrevision 1.8\ndate: 2001/08/30 08:16:42; author: ishii; state: Exp; lines: +2 -2\nFix typo.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 31 Aug 2001 11:06:00 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Majordomo being upgraded ..." } ]
[ { "msg_contents": "\nignore, just making sure it works ...\n\n", "msg_date": "Thu, 30 Aug 2001 13:17:52 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "test 2 ..." } ]
[ { "msg_contents": "> we need to control database changes within BEFORE triggers.\n> There is no problem with triggers called by update, but there is\n> a problem with triggers called by insert.\n> \n> We strongly need to know the oid of a newly inserted tuple.\n> In this case, we use tg_newtuple of the TriggerData structure\n> passed to the trigger function, and its t_data -> t_oid will\n> have the value '0'.\n> \n> Using BEFORE and AFTER triggers would make our lives much harder.\n> \n> Is there any way (even hack) to get the oid the newly\n> inserted tuple will receive?\n\nJust set t_data->t_oid = newoid() - this is what backend does\nin heapam.c:heap_insert().\n\nVadim\n", "msg_date": "Fri, 31 Aug 2001 09:45:49 -0700", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: getting the oid for a new tuple in a BEFORE trigger" }, { "msg_contents": "> > we need to control database changes within BEFORE triggers.\n> > There is no problem with triggers called by update, but there is\n> > a problem with triggers called by insert.\n> > \n> > We strongly need to know the oid of a newly inserted tuple.\n> > In this case, we use tg_newtuple of the TriggerData structure\n> > passed to the trigger function, and its t_data -> t_oid will\n> > have the value '0'.\n> > \n> > Using BEFORE and AFTER triggers would make our lives much harder.\n> > \n> > Is there any way (even hack) to get the oid the newly\n> > inserted tuple will receive?\n> \n> Just set t_data->t_oid = newoid() - this is what backend does\n> in heapam.c:heap_insert().\n\nDoes that work? Doesn't that get overwritten when the actual INSERT\nhappens?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 11:33:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getting the oid for a new tuple in a BEFORE" } ]
[ { "msg_contents": "It took me a while to figure out what was going on, but I think I've\nfigured it out.\n\nLet's say you have your own variable length datatype called\n'MY_DATATYPE'.\n\nCREATE TABLE test_table (myint integer, mydata MY_DATATYPE);\nINSERT INTO test_table VALUES (1);\n\nAt this point, I'd expect there to be one row in test table. The myint\ncolumn will have the value one, and the mydata column will have the\nvalue NULL.\n\nThis doesn't appear to be the case. It seems that the mydata column will\nhave a structure that looks like a '-'::TEXT structure (ie. the first 4\nbytes are an int representing 5, and the 5th byte is the ASCII '-').\n\nThis is really bad because a \"SELECT * FROM test_table\" will send this\nweird structure to MY_DATATYPE's OUTPUT function. Since this weird\nstructure isn't really a MY_DATATYPE structure, it causes problems.\n\nThis happens even if you explicitly set MY_DATATYPE's DEFAULT to NULL.\n\ndave\n", "msg_date": "Fri, 31 Aug 2001 12:32:39 -0700", "msg_from": "Dave Blasby <dblasby@refractions.net>", "msg_from_op": true, "msg_subject": "Bad behaviour when inserting unspecified variable length datatypes" }, { "msg_contents": "Dave Blasby <dblasby@refractions.net> writes:\n> CREATE TABLE test_table (myint integer, mydata MY_DATATYPE);\n> INSERT INTO test_table VALUES (1);\n\n> At this point, I'd expect there to be one row in test table. The myint\n> column will have the value one, and the mydata column will have the\n> value NULL.\n\nCheck...\n\n> This doesn't appear to be the case. It seems that the mydata column will\n> have a structure that looks like a '-'::TEXT structure (ie. the first 4\n> bytes are an int representing 5, and the 5th byte is the ASCII '-').\n\nUh, what did your CREATE TYPE command look like, exactly? This sounds\nlike you specified a default value for the datatype.\n\nMaybe you need to show us your datatype's I/O functions, too. 
Since\nthis works perfectly fine for the standard variable-length datatypes,\nit's hard to arrive at any other conclusion than that your custom\ndatatype code is erroneous. But there's not enough info here to figure\nout just what is wrong with it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 16:09:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bad behaviour when inserting unspecified variable length\n\tdatatypes" }, { "msg_contents": "> Uh, what did your CREATE TYPE command look like, exactly? This sounds\n> like you specified a default value for the datatype.\n\nOkay, here's two examples;\n\nCREATE TYPE WKB (\n\tinternallength = VARIABLE,\n\tinput = WKB_in,\n\toutput = WKB_out,\n\tstorage= extended\n);\n\n\nCREATE TYPE GEOMETRY (\n\talignment = double,\n\tinternallength = VARIABLE,\n\tinput = geometry_in,\n\toutput = geometry_out,\n\tstorage = main\n);\n\n\nI've tried the WKB type with a \"DEFAULT = NULL\" clause and all the\ndifferent storage types. The same problem occurs every time.\n\nHere are the create function statements for the _in and _out functions;\n\ncreate function WKB_in(opaque)\n\tRETURNS WKB\n AS '@MODULE_FILENAME@','WKB_in'\n LANGUAGE 'c' with (isstrict);\n\ncreate function WKB_out(opaque)\n\tRETURNS opaque\n AS '@MODULE_FILENAME@','WKB_out'\n LANGUAGE 'c' with (isstrict);\n\ncreate function geometry_in(opaque)\n\tRETURNS GEOMETRY\n AS '@MODULE_FILENAME@'\n LANGUAGE 'c' with (isstrict);\n\ncreate function geometry_out(opaque)\n\tRETURNS opaque\n AS '@MODULE_FILENAME@'\n LANGUAGE 'c' with (isstrict);\n\n\n\n> Maybe you need to show us your datatype's I/O functions, too.\n\nI don't think they're important. WKB_in and geometry_in are *never*\ncalled, and WKB_out and geometry_out are called with bad values. Only\none line of my code is executed in the _out functions. 
\n\n\tWellKnownBinary\t\t *WKB = (WellKnownBinary *) \nPG_DETOAST_DATUM(PG_GETARG_DATUM(0));\n\n\t\tor\n\n\tGEOMETRY\t\t *geom1 = (GEOMETRY *) \nPG_DETOAST_DATUM(PG_GETARG_DATUM(0));\n\n(See below for a simpler example)\n\n> Since\n> this works perfectly fine for the standard variable-length datatypes,\n\nYes, on system (7.1.2 under solaris) the following works;\n create table ttt (i integer, mytext text);\n insert into ttt values (1);\n select * from ttt;\n i | mytext \n---+----\n 1 | \n(1 row)\n\n\nHere's the simplest example I can come up with to show the problem;\n\ncreate function WKB_in2(opaque)\n RETURNS WKB\n AS\n'/data1/Refractions/Projects/PostGIS/work_dave/postgis/libpostgis.so.0.5','WKB_in2'\n LANGUAGE 'c' with (isstrict);\n\ncreate function WKB_out2(opaque)\n RETURNS opaque\n AS\n'/data1/Refractions/Projects/PostGIS/work_dave/postgis/libpostgis.so.0.5','WKB_out2'\n LANGUAGE 'c' with (isstrict);\n\n\nCREATE TYPE WKB (\n internallength = VARIABLE,\n input = WKB_in2,\n output = WKB_out2,\n storage= extended\n);\n\n\ndave=# create table m (i integer, mywkb wkb);\ndave=# insert into m values (1);\ndave=# select * from m;\n i | mywkb \n---+-------\n 1 | me\n(1 row)\n\n\nYou'll also get this output from the printf in WKB_out (ASCII 45 is the\n\"-\" char);\nWKB_out2: WKB has length 5 and 1st value 45\n\n\ntypedef struct Well_known_bin {\n\tint32 size; // total size of this structure\n\tunsigned char data[1]; //THIS HOLDS VARIABLE LENGTH DATA\n} WellKnownBinary;\n\n\n\nPG_FUNCTION_INFO_V1(WKB_in2);\nDatum WKB_in2(PG_FUNCTION_ARGS)\n{\n\tchar\t \t\t*str = PG_GETARG_CSTRING(0);\n\t\n\tprintf(\"I never get here!\\n\");\n\n\tPG_RETURN_NULL();\n}\n\nPG_FUNCTION_INFO_V1(WKB_out2);\nDatum WKB_out2(PG_FUNCTION_ARGS)\n{\n\tWellKnownBinary\t\t *WKB = (WellKnownBinary *) \nPG_DETOAST_DATUM(PG_GETARG_DATUM(0));\n\tchar\t\t\t\t\t*result;\n\n\tprintf(\"WKB_out2: WKB has length %i and 1st value %i\\n\", WKB->size, \n(int) ((char *)WKB)[4] );\n\n\t// return 
something\n\n\tresult = palloc(3);\n\tresult[0] = 'm';\n\tresult[1] = 'e';\n\tresult[2] = 0;\n\n\tPG_RETURN_CSTRING(result);\n}\n", "msg_date": "Tue, 04 Sep 2001 10:16:20 -0700", "msg_from": "Dave Blasby <dblasby@refractions.net>", "msg_from_op": true, "msg_subject": "Re: Bad behaviour when inserting unspecified variable length " }, { "msg_contents": ">> Uh, what did your CREATE TYPE command look like, exactly? This sounds\n>> like you specified a default value for the datatype.\n\n> [ no, he didn't ]\n\nNow that I look at it, CREATE TYPE is totally whacked out about default\nvalues for user-defined datatypes. The reason the system-defined types\nall behave properly is that they are defined (in pg_type.h) with NULL\nentries in the typdefault field of pg_type. But CREATE TYPE doesn't\nfill in a NULL when it sees you haven't specified a default! Look at\nTypeCreate in pg_type.c:\n\n /*\n * initialize the default value for this type.\n */\n values[i] = DirectFunctionCall1(textin, /* 17 */\n CStringGetDatum(defaultTypeValue ? defaultTypeValue : \"-\"));\n\nYech, where'd that come from?\n\nIt turns out this doesn't hurt fixed-length types unless their length is\n1, because there is a test in get_typdefault to verify that the stored\nvalue is the expected length. But for var-length types the \"-\" comes\nthrough.\n\nA short-term workaround for Dave is to explicitly set pg_type.typdefault\nto NULL after creating his type, but clearly we gotta clean this up.\nISTM that TypeCreate should set typdefault to NULL if it's passed a null\ndefault-value item.\n\nAnother problem here is that there's no mechanism that causes the value\nstored in typdefault to be correctly converted to the destination type.\nDefineType and TypeCreate act as though the value is just a text string,\nbut then get_typdefault seems to think that it should find a text object\ncontaining the *internal* representation of the desired value. 
Yech.\nFor example, to make an integer type with a default of 42, I'd have to\nwrite\n\tdefault = '\\0\\0\\0\\52'\n(which might or might not even work because of the nulls, and certainly\nwould not do what I wanted on a machine of the other endianness).\n\nI propose that we define typdefault as containing the *external*\nrepresentation of the desired value, and have get_typdefault apply the\ntype's input conversion function to produce a Datum. Any objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 14:24:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bad behaviour when inserting unspecified variable length\n\tdatatypes" }, { "msg_contents": "I wrote:\n> I propose that we define typdefault as containing the *external*\n> representation of the desired value, and have get_typdefault apply the\n> type's input conversion function to produce a Datum. Any objections?\n\nThis change is committed for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 00:49:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bad behaviour when inserting unspecified variable length\n\tdatatypes" } ]
[ { "msg_contents": "Folks,\n\nWe have a database with several very large tables. When trying\nto pg_dump we get the above error, e.g.:\n\npg_dump -v wsdb\n-- saving database definition\n-- last builtin oid is 18539 \n-- reading user-defined types \n-- reading user-defined functions \n-- reading user-defined aggregates \n-- reading user-defined operators \n-- reading user-defined tables \ngetTables(): SELECT (for PRIMARY KEY) failed on table v3otgdsrcq.\nExplanation from backend: ERROR: dtoi4: integer out of range\n\nMaking another small database (same system, 7.1.2 on Debian/GNU Linux\n2.2), gives the same sort of problem:\n\npg_dump -v tmp\n-- saving database definition\n-- last builtin oid is 18539 \n-- reading user-defined types \n-- reading user-defined functions \n-- reading user-defined aggregates \n-- reading user-defined operators \n-- reading user-defined tables \n-- reading indices information \n-- reading table inheritance information \n-- finding the attribute names and types for each table \n-- finding the attrs and types for table: 'tmp' \n-- flagging inherited attributes in subtables \n-- dumping out database comment \nDumpComment: SELECT failed: 'ERROR: dtoi4: integer out of range\n\nIf I init a new db and restart postgres with the new base,\nno problem.\n\nI suspect some sort of corruption but we're not sure where to\nlook. A vacuum did not help. We'd like to recover, if at all\npossible. Any ideas (no luck on other lists or I wouldn't post\nhere)?\n\nTIA,\n\n--Martin\n", "msg_date": "Fri, 31 Aug 2001 16:04:31 -0400", "msg_from": "Martin Weinberg <weinberg@osprey.astro.umass.edu>", "msg_from_op": true, "msg_subject": "Why \"ERROR: dtoi4: integer out of range\" on pg_dump" }, { "msg_contents": "Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n> DumpComment: SELECT failed: 'ERROR: dtoi4: integer out of range\n\nHmm. I can reproduce this error message if I suppose that you have\nOIDs exceeding 2 billion. 
pg_dump will produce queries like:\n\nregression=# select * from pg_description where objoid = 2500000000;\nERROR: dtoi4: integer out of range\n\nA short-term workaround is to hack pg_dump so that it explicitly coerces\nthe literal to OID and/or quotes the literal:\n\nregression=# select * from pg_description where objoid = 2500000000::oid;\n objoid | classoid | objsubid | description\n--------+----------+----------+-------------\n(0 rows)\n\nregression=# select * from pg_description where objoid = '2500000000';\n objoid | classoid | objsubid | description\n--------+----------+----------+-------------\n(0 rows)\n\nThis is done in many places in pg_dump, but not in DumpComment which is\nrelatively new code :-(\n\nA longer-term question is how to persuade the parser to get this right\nwithout such help. I think that this is another variant of the\nperennial numeric-precision issue and will not be real easy to fix.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 17:46:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why \"ERROR: dtoi4: integer out of range\" on pg_dump " }, { "msg_contents": "Thanks, Tom! This was the problem. Here is my patch to pg_dump.c \nthat appears to fix the problem. 
Turns out that the oid needed to\nbe coerced in two places.\n-------------------------------------------------------------------------------\n--- pg_dump.c\tThu Sep 6 21:18:21 2001\n+++ pg_dump.c.orig\tThu Sep 6 21:19:08 2001\n@@ -2289,7 +2289,7 @@\n \n \t\t\tresetPQExpBuffer(query);\n \t\t\tappendPQExpBuffer(query,\n-\t\t\t\t\t\t\t \"SELECT Oid FROM pg_index i WHERE i.indisprimary AND i.indrelid = \n'%s'::oid \",\n+\t\t\t\t\t\t\t \"SELECT Oid FROM pg_index i WHERE i.indisprimary AND i.indrelid = %s \n\",\n \t\t\t\t\t\t\t tblinfo[i].oid);\n \t\t\tres2 = PQexec(g_conn, query->data);\n \t\t\tif (!res2 || PQresultStatus(res2) != PGRES_TUPLES_OK)\n@@ -3035,7 +3035,6 @@\n \tquery = createPQExpBuffer();\n \tappendPQExpBuffer(query, \"SELECT description FROM pg_description WHERE \nobjoid = \");\n \tappendPQExpBuffer(query, oid);\n-\tappendPQExpBuffer(query, \"::oid\");\n \n \t/*** Execute query ***/\n \n-------------------------------------------------------------------------------\n\nTom Lane wrote on Mon, 03 Sep 2001 17:46:29 EDT\n>Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n>> DumpComment: SELECT failed: 'ERROR: dtoi4: integer out of range\n>\n>Hmm. I can reproduce this error message if I suppose that you have\n>OIDs exceeding 2 billion. 
pg_dump will produce queries like:\n>\n>regression=# select * from pg_description where objoid = 2500000000;\n>ERROR: dtoi4: integer out of range\n>\n>A short-term workaround is to hack pg_dump so that it explicitly coerces\n>the literal to OID and/or quotes the literal:\n>\n>regression=# select * from pg_description where objoid = 2500000000::oid;\n> objoid | classoid | objsubid | description\n>--------+----------+----------+-------------\n>(0 rows)\n>\n>regression=# select * from pg_description where objoid = '2500000000';\n> objoid | classoid | objsubid | description\n>--------+----------+----------+-------------\n>(0 rows)\n>\n>This is done in many places in pg_dump, but not in DumpComment which is\n>relatively new code :-(\n>\n>A longer-term question is how to persuade the parser to get this right\n>without such help. I think that this is another variant of the\n>perennial numeric-precision issue and will not be real easy to fix.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n>\n\n\n", "msg_date": "Fri, 07 Sep 2001 13:29:34 -0400", "msg_from": "Martin Weinberg <weinberg@osprey.astro.umass.edu>", "msg_from_op": true, "msg_subject": "Re: Why \"ERROR: dtoi4: integer out of range\" on pg_dump " }, { "msg_contents": "Martin Weinberg <weinberg@osprey.astro.umass.edu> writes:\n> Thanks, Tom! This was the problem. Here is my patch to pg_dump.c \n> that appears to fix the problem. 
Turns out that the oid needed to\n> be coerced in two places.\n\nI've already committed fixes (I found one or two more places that were\nmissing the same coercion :-().\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2001 15:15:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why \"ERROR: dtoi4: integer out of range\" on pg_dump " } ]
[ { "msg_contents": "(back on list)\n\n> As far as I can see, it is the same. My examples come from Cannan and Otten\n> on SQL92, but I read the spec for SQL99 and I can't see any obvious\n> change, except that INTERVAL YEAR TO YEAR (and any other X TO X) is no\n> longer allowed. (I take it you have a copy of SQL99?)\n\nWe have a copy of an SQL99 draft which seems to be reasonably complete.\nafaik we haven't come across an actual released version. Let me know if\nyou want me to forward it; perhaps it is on the ftp or web site?\n\n> >o We need to figure out how to parse it in gram.y. I looked at it a\n> >little bit (a couple of hours?) and it was not obvious how to get rid of\n> >shift/reduce problems.\n> I don't have any deep knowledge of yacc/bison...yet.\n\nOh, you will... ;)\n\n> I feel unhappy about multiplying interval types like that. I would rather\n> restrict it to interval (as now), intervalym (YEAR TO MONTH) and intervalds\n> (DAY TO SECOND), with the parameters determining the interval range.\n\nBut that means (perhaps?) that you can't define a column INTERVAL DAY,\nsince internally everything would accept all values DAY TO SECOND. I\nknow you proposed setting an internal mask, but that would be per-value,\nnot per-column, so it doesn't help. The attribute system may not be much\nhelp here either, unless we somehow generalize it (to allow types to\nkeep their own impure storage?).\n\n> otherwise we would have 13 new types and would need to make conversion\n> functions for all of them. SQL99 says that YEAR TO MONTH and DAY TO SECOND\n> are incompatible; the results of other combinations give the combined\n> maximum range: DAY TO HOUR + HOUR TO SECOND = DAY TO SECOND, but I don't\n> see this as being outside the capabilities of the 2 new types I propose.\n> Is there some reason in the internals why it would be necessary to create all\n> 13 new types?\n\n3 for YEAR/MONTH, and 10 for DAY/HOUR/MIN/SEC to get all the\ncombinations. 
If you convert to a \"super interval\" for internal\noperations, then you may only need the I/O and conversion functions,\nwhich would be easy. \n\nMy example still holds as a test case to evaluate an implementation\nafaik:\n\n create table t (id interval day);\n insert into t(id) select interval '2' day + interval '05' minute;\n\nwill need to be stored with only the day field non-zero. Certainly that\ncolumn can not be allowed to end up holding quantities other than\nintegral days, right?\n\nAlso, the column defined above has no ability to enforce the \"day only\"\ncharacter of the column if we are using only a single type and without\nhelp from the type or attribute system already in place.\n\n> As I said above, I feel that this is to over-complicate things...\n\nHmm, but it may be a required minimum level of complication to meet the\nspec. Given the arcane syntax and limited functionality (note the\ngratuitous editorializing ;) it probably isn't worth doing unless it\ngets us on an obvious path to SQL99-compliant functionality.\n\nAlso, it is one of the edge cases for SQL99, so even if it is a pain to\ndo we are only doing it once. They couldn't possibly come up with\nanything uglier for SQL0x, could they? Please say no...\n\n...\n> the distinction between YEAR TO MONTH and DAY TO SECOND is one that is\n> present in the existing interval type, so perhaps we could even get away with\n> only one new type?\n\nNot sure what you mean here. The existing type does keep years/months\nstored separately from the days/hours/minutes/seconds (a total of two\ninternal fields) but SQL99 asks that these be kept completely away from\neach other from what you've said. 
Does it define any arithmetic between\nthe two kinds of intervals?\n\n - Thomas\n", "msg_date": "Sat, 01 Sep 2001 01:16:05 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "Re: INTERVAL type: SQL92 implementation" }, { "msg_contents": "Good day,\n\nSorry to post to this list about a patch, but I seem to be having some\ndifficulty getting on the pgsql-patches list; I keep getting an \"illegal\ncommand\" when I send it \"subscribe\", for some reason. At any rate:\n\nIn documenting the to_char() function for transformation of numbers to\ntext, I noticed that the \"RN\" template character sequence was displaying\nsome unusual behavior; specifically, unless in fill mode (with the \"FM\"\nsequence), it would either return the result of the last query executed\nderived from a to_char() result, or what appears to be a garbage pointer\nif there was no such last query.\n\nExample output from PostgreSQL 7.1.3:\n-------------------------------------------------------\nlx=# SELECT to_char(485, 'RN');\n to_char\n-----------------\n UERY :command 1\n(1 row)\n\nlx=# SELECT to_char(485, 'FMRN');\n to_char\n---------\n CDLXXXV\n(1 row)\n\nlx=# SELECT to_char(485, 'RN');\n to_char\n---------\n CDLXXXV\n(1 row)\n\nlx=# SELECT to_char(1000, 'RN');\n to_char\n---------\n CDLXXXV\n(1 row)\n\nlx=# SELECT 1, 2, to_char(900, '999');\n ?column? | ?column? | to_char\n----------+----------+---------\n 1 | 2 | 900\n(1 row)\n\nlx=# SELECT to_char(485, 'RN');\n to_char\n---------\n 900\n(1 row)\n-------------------------------------------------------\n\nUpon looking into src/backend/utils/adt/formatting.c, I noticed that for\nRN transforms:\n\n strcpy(Np->inout_p, Np->number_p);\n\nwas only being called within the IS_FILLMODE if block. Moving it out, and\nabove that check, seems to correct this behavior, and I've attached patches\nfor both today's pgsql CVS snapshot and postgresql-7.1.3. 
Both compile,\nbut I've only tested the latter since my data path is not set up for\npre-7.2, and it seems like a fairly small change.\n\nI consider myself a competent programmer, but never having hacked on\nPostgres, I'm not 100% sure that this modification is totally correct\n(e.g., if there are any strange side-effects from doing this?), since I'm\nnot even sure what the Np pointers are achieving in this instance. ;) I'm\nguessing it's copying the actual output result into the output value's\nchar* pointer, as that would explain the garbage pointer if it was never\ncopied.\n\nAny explanation would be greatly appreciated, as I'd like to document this\napparent bug correctly.\n\n\nRegards,\nJw.\n-- \njlx@commandprompt.com - John Worsley @ Command Prompt, Inc.\nby way of pgsql-hackers@commandprompt.com", "msg_date": "Fri, 31 Aug 2001 19:28:50 -0700 (PDT)", "msg_from": "\"Command Prompt, Inc.\" <pgsql-hackers@commandprompt.com>", "msg_from_op": false, "msg_subject": "[PATCHES] to_char and Roman Numeral (RN) bug" }, { "msg_contents": "\nThomas Lockhart <lockhart@fourpalms.org> wrote:\n> We have a copy of an SQL99 draft which seems to be reasonably complete.\n> afaik we haven't come across an actual released version. Let me know if\n> you want me to forward it; perhaps it is on the ftp or web site?\n\nftp://ftp.postgresql.org/pub/doc/sql/sql1998.tar.gz\n\nMostly the same files are at\nhttp://gatekeeper.research.compaq.com/pub/standards/sql/\n(or ftp).\n\nI didn't know until recently that the ANSI standard was available in PDF\nform for an almost reasonable price ($18/part) compared to the outrageous\nISO price ($98 to $275 per part).\n\nSee http://webstore.ansi.org/ansidocstore/find.asp?find_spec=sql\n\n[...]\n> Not sure what you mean here. The existing type does keep years/months\n> stored separately from the days/hours/minutes/seconds (a total of two\n> internal fields) but SQL99 asks that these be kept completely away from\n> each other from what you've said. 
Does it define any arithmetic between\n> the two kinds of intervals?\n\nNo. Days/hours/minutes/seconds are exact quantities whereas years and\nmonths are not, so they don't mix.\n\n\n\n\n", "msg_date": "Sat, 1 Sep 2001 17:17:43 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: INTERVAL type: SQL92 implementation" }, { "msg_contents": "I seem to have the complete released (I think) SQL99 docs. If anyone wants\nthem - just reply to me personally. Should they be put on the postgres\nsite? Is that legal?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Ken Hirsch\n> Sent: Sunday, 2 September 2001 5:18 AM\n> To: Hackers List\n> Subject: Re: [HACKERS] INTERVAL type: SQL92 implementation\n>\n>\n>\n> Thomas Lockhart <lockhart@fourpalms.org> wrote:\n> > We have a copy of an SQL99 draft which seems to be reasonably complete.\n> > afaik we haven't come across an actual released version. Let me know if\n> > you want me to forward it; perhaps it is on the ftp or web site?\n>\n> ftp://ftp.postgresql.org/pub/doc/sql/sql1998.tar.gz\n>\n> Mostly the same files are at\n> http://gatekeeper.research.compaq.com/pub/standards/sql/\n> (or ftp).\n>\n> I didn't know until recently that the ANSI standard was available in PDF\n> form for an almost reasonable price ($18/part) compared to the outrageous\n> ISO price ($98 to $275 per part).\n>\n> See http://webstore.ansi.org/ansidocstore/find.asp?find_spec=sql\n>\n> [...]\n> > Not sure what you mean here. The existing type does keep years/months\n> > stored separately from the days/hours/minutes/seconds (a total of two\n> > internal fields) but SQL99 asks that these be kept completely away from\n> > each other from what you've said. Does it define any arithmetic between\n> > the two kinds of intervals?\n>\n> No. 
Days/hours/minutes/seconds are exact quantities whereas years and\n> months are not, so they don't mix.\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Mon, 3 Sep 2001 10:56:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: INTERVAL type: SQL92 implementation" }, { "msg_contents": "On Fri, Aug 31, 2001 at 07:28:50PM -0700, Command Prompt, Inc. wrote:\n\n> In documenting the to_char() function for transformation of numbers to\n> text, I noticed that the \"RN\" template character sequence was displaying\n> some unusual behavior; specifically, unless in fill mode (with the \"FM\"\n> sequence), it would either return the result of the last query executed\n> derived from a to_char() result, or what appears to be a garbage pointer\n> if there was no such last query.\n\n You are right it's bug. For the 'RM' in non-fillmode is to_char() quiet.\nI will fix it for 7.2.\n\n> I consider myself a competent programmer, but never having hacked on\n> Postgres, I'm not 100% sure that this modification is totally correct\n\n I check it and if it's good solution we use it.\n\n Thanks!\n\n\tKarel\n\nPS. Bruce, please, can you apply my previous (31 Aug) patch with to_char()\n stuff? I want fix this bug in really actual CVS code. Thanks.\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 4 Sep 2001 15:42:02 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] to_char and Roman Numeral (RN) bug" }, { "msg_contents": "> On Fri, Aug 31, 2001 at 07:28:50PM -0700, Command Prompt, Inc. 
wrote:\n> \n> > In documenting the to_char() function for transformation of numbers to\n> > text, I noticed that the \"RN\" template character sequence was displaying\n> > some unusual behavior; specifically, unless in fill mode (with the \"FM\"\n> > sequence), it would either return the result of the last query executed\n> > derived from a to_char() result, or what appears to be a garbage pointer\n> > if there was no such last query.\n> \n> You are right it's bug. For the 'RM' in non-fillmode is to_char() quiet.\n> I will fix it for 7.2.\n\nKarel, I assume you will send in a patch yourself, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Sep 2001 11:37:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] to_char and Roman Numeral (RN) bug" }, { "msg_contents": "On Tue, Sep 04, 2001 at 11:37:48AM -0400, Bruce Momjian wrote:\n> > On Fri, Aug 31, 2001 at 07:28:50PM -0700, Command Prompt, Inc. wrote:\n> > \n> > > In documenting the to_char() function for transformation of numbers to\n> > > text, I noticed that the \"RN\" template character sequence was displaying\n> > > some unusual behavior; specifically, unless in fill mode (with the \"FM\"\n> > > sequence), it would either return the result of the last query executed\n> > > derived from a to_char() result, or what appears to be a garbage pointer\n> > > if there was no such last query.\n> > \n> > You are right it's bug. For the 'RM' in non-fillmode is to_char() quiet.\n> > I will fix it for 7.2.\n> \n> Karel, I assume you will send in a patch yourself, right?\n\n Right. 
It needs check.\n\n\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 4 Sep 2001 17:43:18 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] to_char and Roman Numeral (RN) bug" }, { "msg_contents": "\nI have checked this in CVS and it is working fine. Karel, have you\nfixed this? I can't find a place where I have applied a fix for this.\n\n\n> Good day,\n> \n> Sorry to post to this list about a patch, but I seem to be having some\n> difficult getting on the pgsql-patches list; keep getting an \"illegal\n> command\" when I send it \"subscribe\", for some reason. At any rate:\n> \n> In documenting the to_char() function for transformation of numbers to\n> text, I noticed that the \"RN\" template character sequence was displaying\n> some unusual behavior; specifically, unless in fill mode (with the \"FM\"\n> sequence), it would either return the result of the last query executed\n> derived from a to_char() result, or what appears to be a garbage pointer\n> if there was no such last query.\n> \n> Example output from PostgreSQL 7.1.3:\n> -------------------------------------------------------\n> lx=# SELECT to_char(485, 'RN');\n> to_char\n> -----------------\n> UERY :command 1\n> (1 row)\n> \n> lx=# SELECT to_char(485, 'FMRN');\n> to_char\n> ---------\n> CDLXXXV\n> (1 row)\n> \n> lx=# SELECT to_char(485, 'RN');\n> to_char\n> ---------\n> CDLXXXV\n> (1 row)\n> \n> lx=# SELECT to_char(1000, 'RN');\n> to_char\n> ---------\n> CDLXXXV\n> (1 row)\n> \n> lx=# SELECT 1, 2, to_char(900, '999');\n> ?column? | ?column? 
| to_char\n> ----------+----------+---------\n> 1 | 2 | 900\n> (1 row)\n> \n> lx=# SELECT to_char(485, 'RN');\n> to_char\n> ---------\n> 900\n> (1 row)\n> -------------------------------------------------------\n> \n> Upon looking into src/backend/utils/adt/formatting.c, I noticed that for\n> RN transforms:\n> \n> strcpy(Np->inout_p, Np->number_p);\n> \n> was only being called within the IS_FILLMODE if block. Moving it out, and\n> above that check seems to correct this behavior, and I've attached Patches\n> for both today's pgsql CVS snapshot and postgresql-7.1.3. Both compile,\n> but I've only tested the latter since my data path is not setup for\n> pre-7.2, and it seems like a fairly small change.\n> \n> I consider myself a competent programmer, but never having hacked on\n> Postgres, I'm not 100% sure that this modification is totally correct\n> (e.g., if there are any strange side-effects from doing this?), since I'm\n> not even sure what the Np pointers are achieving in this instance. ;) I'm\n> guessing its copying the actual output result into the output value's\n> char* pointer, as that would explain the garbage pointer if it was never\n> copied.\n> \n> Any explanation would be greatly appreciated, as I'd like to document this\n> apparent bug correctly.\n> \n> \n> Regards,\n> Jw.\n> -- \n> jlx@commandprompt.com - John Worsley @ Command Prompt, Inc.\n> by way of pgsql-hackers@commandprompt.com\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 16:23:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] to_char and Roman Numeral (RN) bug" }, { "msg_contents": "On Fri, Sep 07, 2001 at 04:23:08PM -0400, Bruce Momjian wrote:\n> \n> I have checked this in CVS and it is working fine. Karel, have you\n> fixed this? I can't find a place where I have applied a fix for this.\n\n It is not fixed and I doubt that it is working fine in current CVS. The \nbugfix is in the attached patch. Please apply it. Thanks.\n\n Output must be:\n\ntest=# SELECT to_char(485, 'RN');\n to_char\n-----------------\n CDLXXXV\n(1 row)\n\ntest=# SELECT to_char(485, 'FMRN');\n to_char\n---------\n CDLXXXV\n(1 row)\n\ntest=# SELECT to_char(1000, 'RN');\n to_char\n-----------------\n M\n(1 row)\n\n\ntest=# SELECT to_char(7.2, '\"Welcome to\"9.9 \"release! :-)\"');\n to_char\n-----------------------------\n Welcome to 7.2 release! :-)\n(1 row)\n\n\n Thanks to jlx@commandprompt.com for this bug report!\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz", "msg_date": "Mon, 10 Sep 2001 14:48:46 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] to_char and Roman Numeral (RN) bug" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> On Fri, Sep 07, 2001 at 04:23:08PM -0400, Bruce Momjian wrote:\n> > \n> > I have checked this in CVS and it is working fine. Karel, have you\n> > fixed this? I can't find a place where I have applied a fix for this.\n> \n> It is not fixed and I doubt that it is working fine in current CVS. The \n> bugfix is in the attached patch. Please apply it. 
Thanks.\n> \n> Output must be:\n> \n> test=# SELECT to_char(485, 'RN');\n> to_char\n> -----------------\n> CDLXXXV\n> (1 row)\n> \n> test=# SELECT to_char(485, 'FMRN');\n> to_char\n> ---------\n> CDLXXXV\n> (1 row)\n> \n> test=# SELECT to_char(1000, 'RN');\n> to_char\n> -----------------\n> M\n> (1 row)\n> \n> \n> test=# SELECT to_char(7.2, '\"Welcome to\"9.9 \"release! :-)\"');\n> to_char\n> -----------------------------\n> Welcome to 7.2 release! :-)\n> (1 row)\n> \n> \n> Thanks to jlx@commandprompt.com for this bug report!\n> \n> \t\tKarel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Sep 2001 10:17:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] to_char and Roman Numeral (RN) bug" }, { "msg_contents": "\nPatch applied. Thanks.\n\n> On Fri, Sep 07, 2001 at 04:23:08PM -0400, Bruce Momjian wrote:\n> > \n> > I have checked this in CVS and it is working fine. Karel, have you\n> > fixed this? I can't find a place where I have applied a fix for this.\n> \n> It is not fixed and I doubt that it is working fine in current CVS. The \n> bugfix is in the attached patch. Please apply it. Thanks.\n> \n> Output must be:\n> \n> test=# SELECT to_char(485, 'RN');\n> to_char\n> -----------------\n> CDLXXXV\n> (1 row)\n> \n> test=# SELECT to_char(485, 'FMRN');\n> to_char\n> ---------\n> CDLXXXV\n> (1 row)\n> \n> test=# SELECT to_char(1000, 'RN');\n> to_char\n> -----------------\n> M\n> (1 row)\n> \n> \n> test=# SELECT to_char(7.2, '\"Welcome to\"9.9 \"release! :-)\"');\n> to_char\n> -----------------------------\n> Welcome to 7.2 release! 
:-)\n> (1 row)\n> \n> \n> Thanks to jlx@commandprompt.com for this bug report!\n> \n> \t\tKarel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Sep 2001 00:01:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] to_char and Roman Numeral (RN) bug" } ]
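[Editor's note] For readers cross-checking the corrected `to_char(n, 'RN')` output discussed in the thread above, Roman-numeral conversion itself is easy to sketch. This is a hypothetical local helper, not PostgreSQL code, and it omits the blank padding that the non-FM `'RN'` template additionally applies (visible as the wider column in the example output above):

```python
# Cross-check values for to_char(n, 'RN'): plain integer-to-Roman
# conversion by greedy subtraction over a value table, largest first.
# to_roman() is our own helper for checking expected output only.
ROMAN = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    out = []
    for value, digits in ROMAN:
        while n >= value:          # emit this symbol as long as it fits
            out.append(digits)
            n -= value
    return "".join(out)

print(to_roman(485))   # CDLXXXV, matching the fixed to_char() output
```

Running it on the values from the bug report gives `CDLXXXV` for 485 and `M` for 1000, the same strings the patched server returns.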
[ { "msg_contents": "Hi\n\nWould you mind to tell me how the DIVIDEBY operator could be\ndefined (the way Codd E.F [1] or Date C.J [2] define it) in \nPostgreSQL ?\n\nUsing PostgreSQL 7.1.2\n\nThanks in advance.\n\n1 Codd E.F. Relational Completeness of Data Base Sublanguages \n2 Date C.J. An Introduction to Database Systmes 6th ed.\n\n-- \nVladimir Zolotych gsmith@eurocom.od.ua\n", "msg_date": "Sat, 01 Sep 2001 15:18:37 +0300", "msg_from": "\"Vladimir V. Zolotych\" <gsmith@eurocom.od.ua>", "msg_from_op": true, "msg_subject": "DIVIDEBY in PostgreSQL " } ]
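[Editor's note] For readers without Codd or Date at hand, the operator asked about here is relational division. Stated set-theoretically (our notation, paraphrasing Date's definition):

```latex
% Relational division: given A with attributes (X, Y) and B with
% attribute(s) Y, A DIVIDEBY B yields the X-values that are paired
% in A with *every* Y-value appearing in B.
A \div B = \{\, x \in \pi_X(A) \;\mid\; \forall y \in B : (x, y) \in A \,\}
```

SQL has no built-in division operator; it is conventionally expressed with a doubly-nested `NOT EXISTS` or with a `GROUP BY`/`HAVING COUNT` comparison, both of which work in PostgreSQL 7.1.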
[ { "msg_contents": "I understand that the current port of Postgres for Windows requires the \ncygwin package. I'd like to understand the requirement for cygwin, and \npossibly try to port Postgres to run natively on Windows as an NT/2K \nservice. Anyone like to identify the challenges in such a port? Is it \nat all possible? Anyone else trying to do this?\n\nD\n\n", "msg_date": "Sat, 01 Sep 2001 19:32:09 -0400", "msg_from": "Dwayne Miller <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Porting to Native WindowsNT/2000" }, { "msg_contents": "Dwayne Miller wrote:\n> \n> I understand that the current port of Postgres for Windows requires the\n> cygwin package. I'd like to understand the requirement for cygwin, and\n> possibly try to port Postgres to run natively on Windows as an NT/2K\n> service. Anyone like to identify the challenges in such a port? Is it\n> at all possible? Anyone else trying to do this?\n>\n\nI'm not trying to do so, but I'm not sure I would say it is possible without\nthe type of technology in cygwin.\n\nI have spent a lot of years writing NT drivers and programs. Unless you have a\nreal reason why cygwin is not practical, why bother?\n\nThe OS differences between NT and UNIX are huge. The main difference is\nprocesses. There is no \"fork\" in NT, and that is a huge gulf to cross. Is there\na reason why you would not want to use cygwin?\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n", "msg_date": "Sat, 01 Sep 2001 19:54:14 -0400", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Porting to Native WindowsNT/2000" }, { "msg_contents": "Well, for one.... I have no idea what cygwin is, or what it does to \nyour system, or what security vulnerabilities it might add to your \nsystem. It comes with a lot of stuff that I may or may not need, but \nwhat components I need to run Postgres is not clear.\n\nTwo.... 
could Postgres be made more efficient on Windows if it ran \nwithout cygwin?\n\nThree.... can you start cygwin programs on startup of the system?\n\nmlw wrote:\n\n>Dwayne Miller wrote:\n>\n>>I understand that the current port of Postgres for Windows requires the\n>>cygwin package. I'd like to understand the requirement for cygwin,and\n>>possibly try to port Postgres to run natively on Windows as a NT/2K\n>>service. Anyone like to identify the challenges in such a port? Is it\n>>at all possible? Anyone else trying to do this?\n>>\n>\n>I'm not trying to do so, but I'm not sure I would say it is possible without\n>the the type of technology in cygwin.\n>\n>I have spent a lot of years writing NT drivers and programs. Unless you have a\n>real reason why cygwin is not practical, why bother?\n>\n>The OS differences between NT and UNIX are huge. The main difference are\n>processes. There is no \"fork\" in NT, and that is a huge gulf to cross. Is there\n>a reason why you would not want to use cygwin?\n>\n>\n\n\n", "msg_date": "Sun, 02 Sep 2001 00:33:31 -0400", "msg_from": "\"Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": false, "msg_subject": "Re: Porting to Native WindowsNT/2000" }, { "msg_contents": "\"Dwayne Miller\" <dmiller@espgroup.net> writes:\n\n> Well, for one.... I have no idea what cygwin is, or what it does to\n> your system, or what security vulnerabilities it might add to your\n> system. It comes with alot of stuff that I may or may not need, but\n> what components I need to run Postgres is not clear.\n\nCygwin is a Unix environment for Windows. For information, see\n http://cygwin.com/\n\nCygwin comes with a lot of stuff which you don't need to run Postgres.\nSimply having that stuff on your computer will not introduce any\nsecurity vulnerabilities if you don't run the programs. Cygwin is\nsimply a DLL and a bunch of Unix programs. It has no server\ncomponent.\n\nIn order to build Postgres, you will need the compiler and associated\ntools. 
In order to run all the Postgres commands, you will need the\nshell and several of the tools.\n\nIn fact, I believe that a cygwin distribution actually comes with\nPostgres prebuilt and ready to run.\n\n(To be honest, the idea of worrying about security vulnerabilities on\nWindows seems odd to me. If you are honestly worried about security\non your database server, the first step is to stop running Windows.)\n\n> Two.... could Postgres be made more efficient on Windows if it ran\n> without cygwin?\n\nYes. Cygwin adds measurable overhead to all I/O operations, and\nobviously a database does a lot of I/O. Postgres employs operations\nwhich are fast on Unix but are very slow on cygwin, such as fork.\n\nAs mlw said, porting Postgres to run natively on Windows would be a\nsignificant effort. The forking mechanism it uses currently would\nhave to be completely rearchitected. The buffer, file manager, and\nnetworking code would have to be rewritten. Off the top of my head,\nfor a top programmer who is an expert in Unix, Windows, and Postgres,\nit might take a year. There would also be a heavy ongoing maintenance\ncost to keep up with new Postgres releases.\n\n> Three.... can you start cygwin programs on startup of the system?\n\nSure. cygwin programs are just Windows programs which use a\nparticular DLL.\n\nIan\n", "msg_date": "01 Sep 2001 22:08:02 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Porting to Native WindowsNT/2000" }, { "msg_contents": "Ian Lance Taylor wrote:\n> \n> \"Dwayne Miller\" <dmiller@espgroup.net> writes:\n> \n> \n> As mlw said, porting Postgres to run natively on Windows would be a\n> significant effort. The forking mechanism it uses currently would\n> have to be completely rearchitected. The buffer, file manager, and\n> networking code would have to be rewritten. Off the top of my head,\n> for a top programmer who is an expert in Unix, Windows, and Postgres,\n> it might take a year. 
\n\nIIRC someone had the backend working in multithreaded mode (actually \nhe had one of the forked backends doing it) as a part of some \njava-driven backend.\n\n From the description of it it seemed that it did not take a full \nman-year to get there ;)\n\n------------------\nHannu\n", "msg_date": "Sun, 02 Sep 2001 19:28:01 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Porting to Native WindowsNT/2000" }, { "msg_contents": "\"Ian Lance Taylor\" <ian@airs.com> wrote:\n> \"Dwayne Miller\" <dmiller@espgroup.net> writes:\n>\n> > Well, for one.... I have no idea what cygwin is, or what it does to\n> > your system, or what security vulnerabilities it might add to your\n> > system. It comes with alot of stuff that I may or may not need, but\n> > what components I need to run Postgres is not clear.\n>\n> Cygwin is a Unix environment for Windows. For information, see\n> http://cygwin.com/\n>\n> Cygwin comes with a lot of stuff which you don't need to run Postgres.\n> Simply having that stuff on your computer will not introduce any\n> security vulnerabilities if you don't run the programs. Cygwin is\n> simply a DLL and a bunch of Unix programs. It has no server\n> component.\n>\n> In order to build Postgres, you will need the compiler and associated\n> tools. In order to run all the Postgres commands, you will need the\n> shell and several of the tools.\n\n\n\n>\n> In fact, I believe that a cygwin distribution actually comes with\n> Postgres prebuilt and ready to run.\n\nYes, if you use the setup.exe at cygwin.com, it will by default include\npostgres. It would be nice if we had a minimal list of programs need to run\nPostgresql\n\n>\n> (To be honest, the idea of worrying about security vulnerabilities on\n> Windows seems odd to me. If you are honestly worried about security\n> on your database server, the first step is to stop running Windows.)\n\nThat's just a cheap shot. 
I've seen no evidence that Windows NT/2000 is\ninherently less secure than any given Unix or Linux distribution, it is just\na lot more popular and tends to have less experienced system administrators.\n\nHaving an easy-to-install Windows set up would be a plus for Postgres.\nThere are millions of Windows NT servers out there.\n\n>\n> > Two.... could Postgres be made more efficient on Windows if it ran\n> > without cygwin?\n>\n> Yes. Cygwin adds measurable overhead to all I/O operations, and\n> obviously a database does a lot of I/O. Postgres employs operations\n> which are fast on Unix but are very slow on cygwin, such as fork.\n>\n> As mlw said, porting Postgres to run natively on Windows would be a\n> significant effort. The forking mechanism it uses currently would\n> have to be completely rearchitected.\n\nThis is true. However, a process-pool architecture would benefit Postgres\non other platforms besides Windows. Postgresql has been ported to the\nHP3000 MPE/iX operating system, for example, which is POSIX-compliant, but\nhas an awfully slow fork().\n\n> The buffer, file manager, and\n> networking code would have to be rewritten.\n\nI don't think this is true. Most of the unix-style interfaces are\nsupported out of the box by the Microsoft C compiler.\n\n>\n> > Three.... can you start cygwin programs on startup of the system?\n>\n> Sure. cygwin programs are just Windows programs which use a\n> particular DLL.\n\nIt's not quite as simple as that. You can run it as a service under the\nSRVANY program, but that doesn't provide for a clean shut-down. Has anybody\nwritten an NT service wrapper for Postgresql?\n\n\n\n", "msg_date": "Sun, 2 Sep 2001 12:27:09 -0400", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: Porting to Native WindowsNT/2000" }, { "msg_contents": "\"Ken Hirsch\" <kenhirsch@myself.com> writes:\n\n> > (To be honest, the idea of worrying about security vulnerabilities on\n> > Windows seems odd to me. 
If you are honestly worried about security\n> > on your database server, the first step is to stop running Windows.)\n> \n> That's just a cheap shot. I've seen no evidence that Windows NT/2000 is\n> inherently less secure than any given Unix or Linux distribution, it is just\n> a lot more popular and tends to have less experienced system administrators.\n\nI agree that it looks like a cheap shot, but I didn't intend to make\none. There are various arguments why Windows NT is probably less\nsecure than Unix, ranging from interface design to code maturity to\nplatform popularity to actual statistics of numbers of cracked systems\nand numbers of different successful cracks. I personally don't know\nof any arguments why Unix is less secure than Windows NT, other than\nguessing. Unless you are an expert in the field, which I am not, I\nthink you should follow the preponderance of evidence, which I read as\nsaying that where security is a significant concern, it's best to\navoid Windows. (This is off-topic for the Postgres mailing list,\nthough, so if I reply on this further I'll take it off list.)\n\n\n> Having an easy-to-install Windows set up would be a plus for Postgres.\n> There are millions of Windows NT servers out there.\n\nI agree.\n\n\n> > > Two.... could Postgres be made more efficient on Windows if it ran\n> > > without cygwin?\n> >\n> > Yes. Cygwin adds measurable overhead to all I/O operations, and\n> > obviously a database does a lot of I/O. Postgres employs operations\n> > which are fast on Unix but are very slow on cygwin, such as fork.\n> >\n> > As mlw said, porting Postgres to run natively on Windows would be a\n> > significant effort. The forking mechanism it uses currently would\n> > have to be completely rearchitected.\n> \n> This is true. However, a process-pool architecture would benefit Postgres\n> on other platforms besides Windows. 
Postgresql has been ported to the\n> HP3000 MPE/iX operating system, for example, which is POSIX-compliant, but\n> has an awfully slow fork().\n\nOn the other hand, POSIX-compliant systems generally are moving toward\na faster and faster fork, as they should given the nature of POSIX\nprograms.\n\nA process pool architecture for a system like Postgres would require\nvery careful attention to memory usage, in order to be able to return\nswap space to the system or at least avoid using it. Otherwise, I\nbelieve the different processes would fragment memory over time,\ndecreasing system performance. Process pools work best for systems\nwith fixed memory usage.\n\n> > The buffer, file manager, and\n> > networking code would have to be rewritten.\n> \n> I don't think this is true. Most of the unix-style interfaces are\n> supported out of the box by the Microsoft C compiler.\n\nI've written code which ran natively on both Unix and Windows, and\nthat kind of statement doesn't get you very far. Even when the\ninterfaces are the same, there are critical differences all over the\nplace (e.g., select() on Windows only works on sockets, not pipes).\nYou can deal with each problem as it comes up, but they keep coming\nup. That's why Steve Chamberlain started the cygwin project in the\nfirst place--we were both at Cygnus at the time, and I spent several\nmonths working on cygwin myself a couple of years later.\n\nIan\n", "msg_date": "02 Sep 2001 13:59:55 -0700", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Porting to Native WindowsNT/2000" }, { "msg_contents": "Ian Lance Taylor (& others) wrote:\n\n> > This is true. However, a process-pool architecture would benefit\nPostgres\n> > on other platforms besides Windows. 
Postgresql has been ported to the\n> > HP3000 MPE/iX operating system, for example, which is POSIX-compliant,\nbut\n> > has an awfully slow fork().\n>\n> On the other hand, POSIX-compliant systems generally are moving toward\n> a faster and faster fork, as they should given the nature of POSIX\n> programs.\n>\n> A process pool architecture for a system like Postgres would require\n> very careful attention to memory usage, in order to be able to return\n> swap space to the system or at least avoid using it. Otherwise, I\n> believe the different processes would fragment memory over time,\n> decreasing system performance. Process pools work best for systems\n> with fixed memory usage.\n\nWhat about a pre-forked model?\n\nWhat about using the Apache Portable Runtime? The Apache & Postgres licenses\nare compatible, are they not?\n\n\nCheers,\n\nColin\n\n\n\n", "msg_date": "Mon, 3 Sep 2001 20:30:11 +0200", "msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Porting to Native WindowsNT/2000" }, { "msg_contents": "\"Ken Hirsch\" <kenhirsch@myself.com> writes:\n>>> Three.... can you start cygwin programs on startup of the system?\n\n> It's not quite as simple as that. You can run it as a service under the\n> SRVANY program, but that doesn't provide for a clean shut-down. Has anybody\n> written an NT service wrapper for Postgresql?\n\nIIRC, Jason Tishler was working on one awhile back. 
Check the mailing\nlist archives.\n\nAs far as the general topic goes: this has come up several times before,\nand the conclusion has always been that a native Windows port would\nrequire effort (both initial, and ongoing maintenance) vastly out of\nproportion to the reward.\n\nBut it occurs to me that it might be useful to provide a downloadable\npackage that includes both the Postgres server and as much of Cygwin\nas you need to run it, all wrapped up in a nice friendly installer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 16:30:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Porting to Native WindowsNT/2000 " } ]
[ { "msg_contents": "Hi\n\nPlease help me compose the query in PostgreSQL.\nUsing PostgreSQL 7.1.2.\n\nSuppose relations A and B have columns:\n {X1, X2, ..., Xm, Y1, Y2, ..., Yn}\nand\n {Y1, Y2, ..., Yn}\nAttributes Y1, Y2, ..., Yn are common for both relations\nand have the same type in both.\n\nHow can I define in PostgreSQL the query producing\nrelation with columns X1,X2,...,Xm containing all those tuples\nsatisfying conditon: relation A contains tupple \n {x1,x2,...xm,y1,y2,...,yn}\nfor _each_ tupple\n {y1,y2,...,yn}\nin relation B ? Where x1 denotes particular value of\ncolum X1 etc.\n\nFor example: consider two tables DEND and DOR.\n\nDEND DOR\n\n s | p p \n----+---- ---- \n s1 | p1 p1 \n s1 | p2 p2 \n s1 | p3 p3 \n s1 | p4 p4 \n s1 | p5 p5 \n s1 | p6 p5 \n s2 | p1 (6 rows)\n s2 | p2\n s3 | p2\n s4 | p2\n s4 | p4\n s4 | p5\n(12 rows)\n\nFor such tables our desired query should return:\n\n s\n----\n s1\n\nThanks in advance.\n\n-- \nVladimir Zolotych gsmith@eurocom.od.ua\n", "msg_date": "Sun, 02 Sep 2001 11:23:23 +0300", "msg_from": "\"Vladimir V. Zolotych\" <gsmith@eurocom.od.ua>", "msg_from_op": true, "msg_subject": "Need help in composing PostgreSQL query" } ]
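The DEND/DOR question above is the classic relational-division problem: keep each s whose set of p values covers every p in DOR. Below is a sketch of the standard double-NOT EXISTS formulation, using the table and column names from the thread; it relies only on correlated subqueries, so it should run on the 7.1.2 release mentioned:

```sql
-- Keep each supplier s for which no part p in DOR is missing
-- from that supplier's rows in DEND.
SELECT DISTINCT d.s
FROM dend d
WHERE NOT EXISTS (
    SELECT 1
    FROM dor
    WHERE NOT EXISTS (
        SELECT 1
        FROM dend d2
        WHERE d2.s = d.s
          AND d2.p = dor.p
    )
);
-- With the sample data this returns the single row s1.
```

An equivalent counting formulation — join DEND to DOR on p, GROUP BY s, and keep groups whose COUNT(DISTINCT d.p) equals (SELECT COUNT(DISTINCT p) FROM dor) — is often faster; the DISTINCT matters here because DOR lists p5 twice.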
[ { "msg_contents": "Hello All!!!\n\n Can anyone show me a query (not using any contrib code) lists users\nfrom a group ??? Like \"Users | Group\" ???\n Current groups implementation is inflexible due to arrays not having\na membership function (i.e. 'in' operator).\n\nBest Regards,\nSteve Howe\n\n\n\n", "msg_date": "Sun, 2 Sep 2001 18:39:30 -0300", "msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>", "msg_from_op": true, "msg_subject": "Determining users from group" } ]
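For reference, later PostgreSQL releases (7.4 onward) added exactly the array membership operator the poster is missing, which turns the users-per-group listing into a plain join. The query below is a sketch assuming such a server and the standard pg_user/pg_group catalogs:

```sql
-- One row per (group, member) pair.
SELECT g.groname AS "group",
       u.usename AS "user"
FROM pg_group g
JOIN pg_user u ON u.usesysid = ANY (g.grolist)
ORDER BY g.groname, u.usename;
```

On the 7.x servers of the thread's era, ANY did not work over arrays, which is why the usual answers relied on contrib code or on parsing grolist client-side.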
[ { "msg_contents": "Hi\n\nThe following is the quote describing WHERE clause of SELECT\n(pgsql/doc/html/sql-select.html).\n\n\"WHERE Clause \n\nThe optional WHERE condition has the general form: \n\nWHERE boolean_expr\n \n\nboolean_expr can consist of any expression which evaluates to a boolean value. In many cases,\nthis expression will be: \n\n expr cond_op expr\n \n\nor \n\n log_op expr\n \n\nwhere cond_op can be one of: =, <, <=, >, >= or <>, a conditional operator like ALL, ANY, IN,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nLIKE, or a locally defined operator, and log_op can be one of: AND, OR, NOT. SELECT will ignore\nall rows for which the WHERE condition does not return TRUE. \"\n\nPlease give me hints how can I use \"conditional operators ALL, ANY\" in\nWHERE clause.\n\nSome examples will be appreciated.\n\nThanks in advance.\n\n-- \nVladimir Zolotych gsmith@eurocom.od.ua\n", "msg_date": "Mon, 03 Sep 2001 11:30:32 +0300", "msg_from": "\"Vladimir V. Zolotych\" <gsmith@eurocom.od.ua>", "msg_from_op": true, "msg_subject": "Conditional operators ALL, ANY in WHERE clause" }, { "msg_contents": "\"Vladimir V. 
Zolotych\" wrote:\n >Please give me hints how can I use \"conditional operators ALL, ANY\" in\n >WHERE clause.\n \n[This query would have been better directed to the pgsql-sql list.]\n\n >Some examples will be appreciated.\n\nALL is used to test a value against all of a list of items.\n\nFind the customer whose account has been created the longest:\n\n SELECT id, date_opened\n FROM customer\n WHERE date_opened IS NOT NULL AND date_opened <= ALL (SELECT date_opened \n FROM customer\n WHERE date_opened IS NOT NULL);\n\n id | date_opened \n -------+-------------\n 25832 | 1998-01-05\n (1 row)\n\n\nANY is used to compare against any item of the list; \"x = ANY y\" is the\nsame as \"x IN y\":\n\n \n SELECT COUNT(*)\n FROM customer\n WHERE area = ANY (SELECT id\n FROM country);\n count \n -------\n 216\n (1 row)\n\n\nBut note that use of ALL may be very inefficient:\n\nbray=# explain select id,date_opened from customer where date_opened is not \nnull and date_opened <= all (select date_opened from customer where \ndate_opened is not null);\nNOTICE: QUERY PLAN:\n\nSeq Scan on customer (cost=0.00..240125.47 rows=1144 width=16)\n SubPlan\n -> Seq Scan on customer (cost=0.00..139.89 rows=1144 width=4)\n\nEXPLAIN\nbray=# explain select id,date_opened from customer where date_opened is not \nnull and date_opened <= (select min(date_opened) from customer);\nNOTICE: QUERY PLAN:\n\nSeq Scan on customer (cost=0.00..148.47 rows=381 width=16)\n InitPlan\n -> Aggregate (cost=139.89..139.89 rows=1 width=4)\n -> Seq Scan on customer (cost=0.00..131.31 rows=3431 width=4)\n\nEXPLAIN\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And he said unto his disciples, Therefore I say unto \n you, Take no thought for your life, what ye shall eat;\n neither for the 
body, what ye shall put on. For life \n is more than meat, and the body is more than clothing.\n Consider the ravens, for they neither sow nor reap; \n they have neither storehouse nor barn; and yet God \n feeds them; how much better you are than the birds!\n Consider the lilies, how they grow; they toil \n not, they spin not; and yet I say unto you, that \n Solomon in all his glory was not arrayed like one of \n these. If then God so clothe the grass, which is to \n day in the field, and tomorrow is cast into the oven;\n how much more will he clothe you, O ye of little \n faith? And seek not what ye shall eat, or what ye \n shall drink, neither be ye of doubtful mind. \n But rather seek ye the kingdom of God; and all these \n things shall be added unto you.\" \n Luke 12:22-24; 27-29; 31. \n\n\n", "msg_date": "Mon, 03 Sep 2001 23:11:54 +0100", "msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: Conditional operators ALL, ANY in WHERE clause " } ]
[ { "msg_contents": "> \"Dwayne Miller\" <dmiller@espgroup.net> writes:\n> \n> > Well, for one.... I have no idea what cygwin is, or what it does to\n> > your system, or what security vulnerabilities it might add to your\n> > system. It comes with alot of stuff that I may or may not need, but\n> > what components I need to run Postgres is not clear.\n> \n> Cygwin is a Unix environment for Windows. For information, see\n> http://cygwin.com/\n> \n> Cygwin comes with a lot of stuff which you don't need to run Postgres.\n> Simply having that stuff on your computer will not introduce any\n> security vulnerabilities if you don't run the programs. Cygwin is\n> simply a DLL and a bunch of Unix programs. It has no server\n> component.\n> \n> In order to build Postgres, you will need the compiler and associated\n> tools. In order to run all the Postgres commands, you will need the\n> shell and several of the tools.\n> \n> In fact, I believe that a cygwin distribution actually comes with\n> Postgres prebuilt and ready to run.\n> \n> (To be honest, the idea of worrying about security vulnerabilities on\n> Windows seems odd to me. If you are honestly worried about security\n> on your database server, the first step is to stop running Windows.)\n> \n> > Two.... could Postgres be made more efficient on Windows if it ran\n> > without cygwin?\n> \n> Yes. Cygwin adds measurable overhead to all I/O operations, and\n> obviously a database does a lot of I/O. Postgres employs operations\n> which are fast on Unix but are very slow on cygwin, such as fork.\n> \n> As mlw said, porting Postgres to run natively on Windows would be a\n> significant effort. The forking mechanism it uses currently would\n> have to be completely rearchitected. The buffer, file manager, and\n> networking code would have to be rewritten. Off the top of my head,\n> for a top programmer who is an expert in Unix, Windows, and Postgres,\n> it might take a year. 
There would also be a heavy ongoing maintenance\n> cost to keep up with new Postgres releases.\n> \n> > Three.... can you start cygwin programs on startup of the system?\n> \n> Sure. cygwin programs are just Windows programs which use a\n> particular DLL.\n> \n> Ian\n> \nCygrunsrv allows postgresql to be run as a service. There's a slight hiccup\non shutdown meaning that the postmaster.pid file gets left. This is due to\nsighup being sent by windows shutdown. I think current cygwin snapshots\nmight cure this, otherwise there is a patch some where that causes SIGHUP to\nbe ignored. I *think* the pre-built binary already has this patch applied.\n\n- Stuart\n\n", "msg_date": "Mon, 3 Sep 2001 10:03:55 +0100 ", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "Re: Porting to Native WindowsNT/2000" } ]
[ { "msg_contents": "\nhello All\n\nI tried the following commands:\nponto=# explain select * from horarios where funcionario>10000;\nNOTICE: QUERY PLAN:\n\nSeq Scan on horarios (cost=0.00..176.21 rows=2432 width=132)\n\nEXPLAIN\nponto=# explain select * from horarios where funcionario=10000;\nNOTICE: QUERY PLAN:\n\nIndex Scan using horarios_func_data on horarios (cost=0.00..55.37 rows=73 \nwidth=132)\n\nEXPLAIN\n\nSo my question is why in the first case the postgre did'nt use the index \nand made a seq scan ??\n\nthanks and sorry about my english...\n", "msg_date": "3 Sep 2001 13:08:40 -0000", "msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>", "msg_from_op": true, "msg_subject": "INDEX BUG???" }, { "msg_contents": "\n> hello All\n> \n> I tried the following commands:\n> ponto=# explain select * from horarios where funcionario>10000;\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on horarios (cost=0.00..176.21 rows=2432 width=132)\n> \n> EXPLAIN\n> ponto=# explain select * from horarios where funcionario=10000;\n> NOTICE: QUERY PLAN:\n> \n> Index Scan using horarios_func_data on horarios (cost=0.00..55.37 rows=73 \n> width=132)\n> \n> EXPLAIN\n> \n> So my question is why in the first case the postgre did'nt use the index \n> and made a seq scan ??\n\nIn the first case it estimates 2432 rows returned, in the second it\nestimates 73 rows. How big is the table in question? Have you vacuum\nanalyzed recently? Are those reasonable estimates? (ie, what would\na select count(*) show for those two conditions)\n\nAt some point, the cost of doing the index scan exceeds that of the seq\nscan because the index scan requires reading the heap file in random\norder so that we know if the tuple is visible to the selecting\ntransaction (in addition to the reading of the index itself). 
If it's\nchoosing the wrong plan that usually means the estimates are off.\n\n\n", "msg_date": "Mon, 3 Sep 2001 10:43:06 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: INDEX BUG???" }, { "msg_contents": "gabriel writes:\n\n> So my question is why in the first case the postgre did'nt use the index\n> and made a seq scan ??\n\nBecause it thinks the sequential scan will be faster. You didn't show any\nevidence to the contrary.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Mon, 3 Sep 2001 19:57:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: INDEX BUG???" } ]
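Following up on the estimates: a quick way to check whether the planner's choice is justified is to refresh statistics, compare its row estimates against actual counts, and force the alternative plan for a cost comparison. A sketch using the table from the thread (enable_seqscan is a standard planner switch in these releases):

```sql
-- Refresh the statistics the estimates are derived from.
VACUUM ANALYZE horarios;

-- Actual counts, to compare with the 2432 and 73 row estimates above.
SELECT COUNT(*) FROM horarios WHERE funcionario > 10000;
SELECT COUNT(*) FROM horarios WHERE funcionario = 10000;

-- Discourage the sequential scan and compare the two plans' costs.
SET enable_seqscan TO off;
EXPLAIN SELECT * FROM horarios WHERE funcionario > 10000;
SET enable_seqscan TO on;
```

If the forced index-scan plan reports a higher cost and the estimates match reality, the sequential scan really is the cheaper choice.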
[ { "msg_contents": "\ntesting a fix to a problem ...\n\n", "msg_date": "Mon, 3 Sep 2001 11:46:29 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "ignore ..." } ]
[ { "msg_contents": "\n\nWe have postgres running on a linux machine\nand we connect with 15 winnt 4.0 machines running ACCESS2000\nWhen we change from access 97 to access 2000 we get every 15 minutes \nfollowing problem\n\nafter a process query\n---------------------------------\nfatal 1:set user id user admin is not in eg shadow\n\nand also after we select \n---------------------------------------\nwaiting... \n\nhave sombody already have this problem on how to solve???\n\nVERY URGENT !!!!\n", "msg_date": "Mon, 03 Sep 2001 19:02:33 GMT", "msg_from": "johan27@advalvas.be", "msg_from_op": true, "msg_subject": "BIG problem !!:fatal 1:set user id user admin is not in eg shadow" } ]
[ { "msg_contents": "Postmaster is eating my CPU -- see ps and top output at\nhttp://jamesthornton.com/misc/postgres.txt or below (it wraps too much\nwhen posting to Google, but my server keeps getting overloaded).\n\nAs you can see from the ps output, there are several INSERT statements\n-- these return after restarting Postgres and even rebooting the\nsystem. I checked the system log for that server, and there are only\n~30 INSERTS over the last ~12 hours (all INSERTs called by AOLserver\ninto the referer_log). Futhermore, I haven't been running any INSERT\nstatements from psql, and no one else has access to this system.\n\nYesterday, I ran \"vacuum analyze\" for the first time in a long time --\ncould that have caused this situation?\n\nSystem: Postgres 7.0.3, AOLserver 3.4/OpenACS 3.2.5/Postgres driver\n2.0, Linux 7.1\n\nP.S. -- Here's Don Baccus' reply from the OpenACS bboard...\n\nThis is very strange ... my guess is that for some reason a lock is\nbeing held persistently and your processes are spinning on it. This\nshould never (cough) happen.\n\nThis is one for the PG hackers group, I think - Tom Lane's more likely\nto be able to give you help here than any of us.\n\n----------------------\n\nps -fU postgres...\n\nUID PID PPID C STIME TTY TIME CMD\npostgres 1842 1 0 12:41 ? 00:00:00\n/usr/local/pgsql/bin/postmaster -B 6000 -o -S 2000 -S -D\n/usr/local/pgsql/data\npostgres 1872 1842 82 12:41 ? 01:06:20\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 1997 1842 0 13:11 ? 00:00:02\n/usr/local/pgsql/bin/postgres localhost nsadmin james idle\npostgres 2025 1842 21 13:15 ? 00:10:04\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2072 1842 27 13:30 ? 00:08:51\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2077 1842 25 13:31 ? 00:07:41\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2079 1842 25 13:31 ? 
00:07:44\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2082 1842 25 13:31 ? 00:07:40\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2086 1842 26 13:33 ? 00:07:41\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2090 1842 0 13:35 ? 00:00:01\n/usr/local/pgsql/bin/postgres localhost nsadmin james idle\npostgres 2122 1842 6 13:41 ? 00:01:22\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2131 1842 0 13:41 ? 00:00:00\n/usr/local/pgsql/bin/postgres localhost nsadmin buxs idle\npostgres 2187 1842 20 13:54 ? 00:01:32\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2189 1842 19 13:54 ? 00:01:28\n/usr/local/pgsql/bin/postgres localhost nsadmin james INSERT\npostgres 2205 1842 0 13:59 ? 00:00:00\n/usr/local/pgsql/bin/postgres localhost nsadmin buxs idle\npostgres 2217 1842 0 14:00 ? 00:00:00\n/usr/local/pgsql/bin/postgres localhost nsadmin james idle\npostgres 2218 1842 0 14:00 ? 00:00:00\n/usr/local/pgsql/bin/postgres localhost nsadmin buxs idle\npostgres 2219 1842 0 14:00 ? 00:00:00\n/usr/local/pgsql/bin/postgres localhost nsadmin buxs idle\npostgres 2220 1842 0 14:00 ? 
00:00:00\n/usr/local/pgsql/bin/postgres localhost nsadmin james idle\n\ntop output for postgres user...\n\n 2:19pm up 2:31, 2 users, load average: 3.54, 5.55, 6.03\n118 processes: 113 sleeping, 5 running, 0 zombie, 0 stopped\nCPU states: 195.3% user, 4.6% system, 0.0% nice, 807224.6% idle\nMem: 319596K av, 289688K used, 29908K free, 0K shrd, \n30884K buff\nSwap: 658584K av, 12K used, 658572K free \n79636K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n 2122 postgres 9 0 12300 12M 4560 S 31.5 3.8 2:16\npostmaster\n 2072 postgres 9 0 4144 4144 3524 S 23.1 1.2 9:40\npostmaster\n 2189 postgres 15 0 4152 4152 3528 R 21.2 1.2 2:22\npostmaster\n 2077 postgres 15 0 4152 4152 3532 R 18.2 1.2 8:31\npostmaster\n 1872 postgres 9 0 11848 11M 4128 R 16.5 3.7 67:14\npostmaster\n 2187 postgres 9 0 4148 4148 3528 S 16.1 1.2 2:23\npostmaster\n 2082 postgres 9 0 4144 4144 3524 S 15.9 1.2 8:30\npostmaster\n 2079 postgres 9 0 4140 4140 3520 S 14.8 1.2 8:39\npostmaster\n 2086 postgres 9 0 4140 4140 3516 S 14.2 1.2 8:38\npostmaster\n 2025 postgres 9 0 11800 11M 4084 R 11.5 3.6 10:54\npostmaster\n 2090 postgres 9 0 15548 15M 6748 S 1.1 4.8 0:01\npostmaster\n 1842 postgres 8 0 1904 1904 1792 S 0.0 0.5 0:00\npostmaster\n 1997 postgres 9 0 13864 13M 5752 S 0.0 4.3 0:02\npostmaster\n 2131 postgres 9 0 4696 4696 4004 S 0.0 1.4 0:00\npostmaster\n 2205 postgres 9 0 3996 3996 3388 S 0.0 1.2 0:00\npostmaster\n 2217 postgres 9 0 11164 10M 3544 S 0.0 3.4 0:00\npostmaster\n 2218 postgres 9 0 11704 11M 3616 S 0.0 3.6 0:00\npostmaster\n 2219 postgres 9 0 11604 11M 3584 S 0.0 3.6 0:00\npostmaster\n 2220 postgres 9 0 3472 3472 2980 S 0.0 1.0 0:00\npostmaster\n\ntop output for nsadmin user (nsd = AOLserver, which is the only server\naccessing Postgres)...\n\n 2:19pm up 2:31, 2 users, load average: 3.54, 5.55, 6.03\n118 processes: 113 sleeping, 5 running, 0 zombie, 0 stopped\nCPU states: 195.3% user, 4.6% system, 0.0% nice, 807224.6% idle\nMem: 319596K av, 289688K used, 29908K free, 
0K shrd, \n30884K buff\nSwap: 658584K av, 12K used, 658572K free \n79636K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n 2273 nsadmin 18 0 1084 1084 840 R 4.4 0.3 0:01 top\n 1122 nsadmin 9 0 1408 1408 1020 S 0.0 0.4 0:00 bash\n 1588 nsadmin 9 0 1428 1428 1028 S 0.0 0.4 0:00 bash\n 1852 nsadmin 9 0 19784 19M 1828 S 0.0 6.1 0:03 nsd\n 1855 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:03 nsd\n 1858 nsadmin 8 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 1859 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 1860 nsadmin 8 0 19784 19M 1828 S 0.0 6.1 0:00 nsd\n 1861 nsadmin 9 0 19784 19M 1828 S 0.0 6.1 0:00 nsd\n 1862 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 1863 nsadmin 9 0 19784 19M 1828 S 0.0 6.1 0:00 nsd\n 1864 nsadmin 9 0 19784 19M 1828 S 0.0 6.1 0:00 nsd\n 1865 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 1946 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 1951 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2044 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:01 nsd\n 2067 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2069 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2078 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2080 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2081 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2174 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2182 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2188 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2190 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2191 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2192 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2197 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n 2268 nsadmin 9 0 58344 56M 1744 S 0.0 18.2 0:00 nsd\n", "msg_date": "3 Sep 2001 16:02:47 -0700", "msg_from": "james@unifiedmind.com (James Thornton)", "msg_from_op": true, "msg_subject": "Postgres is eating my CPU" }, { "msg_contents": "james@unifiedmind.com (James Thornton) writes:\n> As you can see from the ps output, there are several INSERT 
statements\n> -- these return after restarting Postgres and even rebooting the\n> system.\n\nPostgres backends don't just appear out of nowhere. Somewhere you have\na client app that is connecting to the database and issuing those INSERT\ncommands. Without knowing what that app is or exactly what commands it's\nissuing, it's impossible to say what's going on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 11:46:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres is eating my CPU " } ]
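A practical note for anyone diagnosing this today: the 7.0.3 server in the thread can only be observed through ps, but from 7.2 on the statistics collector can report what each backend is executing. A sketch (column names are those of the 7.2–9.1 era, and current_query requires stats_command_string to be enabled; newer servers rename them to pid and query):

```sql
-- What each connected backend is currently running.
SELECT procpid, usename, current_query
FROM pg_stat_activity;

-- Outstanding locks, to check whether the busy INSERTs are
-- spinning on a lock (the pg_locks view appeared in 7.3).
SELECT * FROM pg_locks;
```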
[ { "msg_contents": "As long as you're hacking pgindent, can you do something about its habit\nof sometimes removing all space before a same-line comment? Here's\nan example from the 7.1 run (in src/backend/storage/lmgr/proc.c):\n\n***************\n*** 607,613 ****\n MyProc->waitHolder = holder;\n MyProc->waitLockMode = lockmode;\n \n! MyProc->errType = STATUS_OK; /* initialize result for success */\n \n /* mark that we are waiting for a lock */\n waitingForLock = true;\n--- 612,618 ----\n MyProc->waitHolder = holder;\n MyProc->waitLockMode = lockmode;\n \n! MyProc->errType = STATUS_OK;/* initialize result for success */\n \n /* mark that we are waiting for a lock */\n waitingForLock = true;\n***************\n\nIMHO there should always be at least one space before a same-line\ncomment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 23:00:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Another pgindent request" }, { "msg_contents": "\nAlready handled. I ran it on proc.c and got:\n\n MyProc->errType = STATUS_OK; /* initialize result for success */\n\nThe feature was added with:\n\t\n\t# add space after comments that start on tab stops\n\t sed 's,;\\(/\\*.*\\*/\\)$,; \\1,' |\n\nI must have added this since 7.1, probably because of a mention from\nyou.\n\n\n> As long as you're hacking pgindent, can you do something about its habit\n> of sometimes removing all space before a same-line comment? Here's\n> an example from the 7.1 run (in src/backend/storage/lmgr/proc.c):\n> \n> ***************\n> *** 607,613 ****\n> MyProc->waitHolder = holder;\n> MyProc->waitLockMode = lockmode;\n> \n> ! MyProc->errType = STATUS_OK; /* initialize result for success */\n> \n> /* mark that we are waiting for a lock */\n> waitingForLock = true;\n> --- 612,618 ----\n> MyProc->waitHolder = holder;\n> MyProc->waitLockMode = lockmode;\n> \n> ! 
MyProc->errType = STATUS_OK;/* initialize result for success */\n> \n> /* mark that we are waiting for a lock */\n> waitingForLock = true;\n> ***************\n> \n> IMHO there should always be at least one space before a same-line\n> comment.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Sep 2001 23:05:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Another pgindent request" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I must have added this since 7.1, probably because of a mention from\n> you.\n\nOh, okay ... I must've forgot complaining about it before ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2001 23:09:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Another pgindent request " } ]
[ { "msg_contents": "Below is the last message I sent (to patches) regarding the random string\nfunction for contrib. Is there any interest in this? I don't mind changing\nit per Peter's comments, but I don't want to bother if no one sees any value\nin it. Comments?\n\n-- Joe\n\n----- Original Message -----\nFrom: \"Joe Conway\" <joseph.conway@home.com>\nTo: \"Peter Eisentraut\" <peter_e@gmx.net>\nCc: \"Dr. Evil\" <drevil@sidereal.kz>; <pgsql-patches@postgresql.org>\nSent: Thursday, August 09, 2001 10:13 AM\nSubject: Re: [PATCHES] Random strings\n\n\n> > > seconds). The same test with /dev/urandom returns instantly. Perhaps\n> there\n> > > should be an option to use either. For instances where only a few\ntruly\n> > > random bytes is needed (i.e. one session key), use /dev/random. When\nyou\n> > > need many random bytes quickly, use /dev/urandom?\n> >\n> > Not sure if this is intuitive. How many bytes is \"a few\"? Maybe just\nbe\n> > honest about it and name them randomstr and urandomstr or such.\n> >\n>\n> In the patch that I sent last night, I explicitly limited /dev/random to\n64\n> bytes. I agree that this is not very intuitive, but for specific purposes,\n> such as generating a session key for tripledes (24 byte/192 bit random\n> string yielding 168 bits for a the key) periodically, it is quite useful.\n> There's a tradeoff here between cryptographic strength (favoring\n> /dev/random) and application performance (favoring /dev/urandom) that will\n> vary significantly from application to application. It's nice to have the\n> option depending on your needs.\n>\n> Having said that, I'm not married to the idea that we should provide\naccess\n> to both /dev/random and /dev/urandom. I'd be happy to roll another patch,\n> limited to just urandom, and renaming the function if you feel strongly\n> about it. 
(should we move this discussion back to hackers to get a wider\n> audience?)\n>\n> -- Joe\n>\n\n\n", "msg_date": "Mon, 3 Sep 2001 22:14:38 -0700", "msg_from": "\"Joe Conway\" <joseph.conway@home.com>", "msg_from_op": true, "msg_subject": "Fw: Random strings" }, { "msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n\n> > Having said that, I'm not married to the idea that we should provide\n> access\n> > to both /dev/random and /dev/urandom. I'd be happy to roll another patch,\n> > limited to just urandom, and renaming the function if you feel strongly\n> > about it. (should we move this discussion back to hackers to get a wider\n> > audience?)\n\nThere was a long discussion on linux-kernel recently about the\ndifference between 'random' and 'urandom'. The upshot seemed to be\nthat 'urandom' is Good Enough in 99% of the cases, since (as long as\nthe generator is seeded well at startup) attackers would have to break \nSHA1 in order to predict the output from it. If someone has the\nresources to do that you're basically screwed anyhow...\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n", "msg_date": "04 Sep 2001 10:46:12 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Fw: Random strings" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 04 September 2001 06:43\n> To: dave Page\n> Subject: Re: [HACKERS] Porting to Native WindowsNT/2000 \n> \n> \n> I thought this might interest you.\n> **************************************\n\nThanks Tom,\n\n> \"Ken Hirsch\" <kenhirsch@myself.com> writes:\n> >>> Three.... can you start cygwin programs on startup of the system?\n> \n> > It's not quite as simple as that. You can run it as a \n> service under the > SRVANY program, but that doesn't provide \n> for a clean shut-down. Has anybody > written an NT service \n> wrapper for Postgresql?\n> \n> IIRC, Jason Tishler was working on one awhile back. Check \n> the mailing list archives.\n\nJason and others have indeed got it running as a service using\nCygwin's cygrunsrv program. I'm now using this configuration for pgAdmin\nhacking on my laptop and it works well.\n\n> As far as the general topic goes: this has come up several \n> times before, and the conclusion has always been that a \n> native Windows port would require effort (both initial, and \n> ongoing maintenance) vastly out of proportion to the reward.\n> \n> But it occurs to me that it might be useful to provide a \n> downloadable package that includes both the Postgres server \n> and as much of Cygwin as you need to run it, all wrapped up \n> in a nice friendly installer.\n\nJean-Michel Poure and I were discussing this yesterday and were looking into\nwriting a plugin for pgAdmin II that will guide the users through installing\nminimal Cygwin with PostgreSQL & the IPC-Daemon on their system. 
The idea is\nthat they download and install pgAdmin which is a simple procedure for the\nWindows user (== non *nix user) then run a wizard which downloads and sets\nup the rest for them so they end up with a working PostgreSQL, running as a\nservice, with pgAdmin as the admin front end.\n\nWe're also looking into a pg_hba.conf editor to make it easier to write and\ntest pg_hba.conf files.\n\nRegards, Dave.\n\n", "msg_date": "Tue, 4 Sep 2001 08:14:27 +0100 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Porting to Native WindowsNT/2000 " } ]
[ { "msg_contents": "There is a TODO list at src/interfaces/odbc/TODO.txt which was last\nupdated in 1998.\n\nDo any of the things in this list remain to be done?\nIf not, perhaps the file should be removed.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"If any of you lack wisdom, let him ask of God, who\n gives to all men generously and without reproach, and \n it will be given to him.\" James 1:5 \n\n\n", "msg_date": "Tue, 04 Sep 2001 11:56:05 +0100", "msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>", "msg_from_op": true, "msg_subject": "ODBC TODO list is way out of date" }, { "msg_contents": "> There is a TODO list at src/interfaces/odbc/TODO.txt which was last\n> updated in 1998.\n> \n> Do any of the things in this list remain to be done?\n> If not, perhaps the file should be removed.\n\nFile removed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Sep 2001 16:39:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ODBC TODO list is way out of date" } ]
[ { "msg_contents": "The following section\n\nhttp://www.ca.postgresql.org/devel-corner/docs/postgres/locking-tables.html\n\ntitled \"Locking and Tables\", has two subsections, \"Table-level locks\" and\n\"Row-level locks\". Under table-level locks we find lock names such as\nRowShareLock and RowExclusiveLock -- are those table-level locks? Under\nrow-level locks we find no specific lock names mentioned.\n\nWhat I wonder is, if I do\n\nBEGIN;\nLOCK table1 IN ROW EXCLUSIVE MODE;\n\nwhat do I lock? The table? A row? Which row?\n\nClarification appreciated.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Tue, 4 Sep 2001 12:59:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Table vs. row level locks confusion" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The following section\n> http://www.ca.postgresql.org/devel-corner/docs/postgres/locking-tables.html\n> titled \"Locking and Tables\", has two subsections, \"Table-level locks\" and\n> \"Row-level locks\". Under table-level locks we find lock names such as\n> RowShareLock and RowExclusiveLock -- are those table-level locks?\n\nYes, despite the names. (The various lock-type names are pretty\nhorrible IMHO, but they are claimed to be Oracle-compatible.)\nAnything you do with a LOCK command is a table-level lock.\n\n> Under row-level locks we find no specific lock names mentioned.\n\nThe only row-level locking mechanism available to users is\nto UPDATE, DELETE, or SELECT FOR UPDATE a particular row.\nAll rows affected by such a command are locked against other\nsuch commands (but not against plain SELECT).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Sep 2001 11:01:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Table vs. row level locks confusion " } ]
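A small illustration of the distinction in Tom's answer — the ROW in the name notwithstanding, LOCK always takes a table-level lock, and row-level locks come only from UPDATE, DELETE, or SELECT FOR UPDATE (table1 is from the original question; the id column is invented for the example):

```sql
BEGIN;
-- Table-level lock on table1, despite the mode's name; it conflicts
-- with SHARE and stronger table locks, not with any particular row.
LOCK TABLE table1 IN ROW EXCLUSIVE MODE;

-- Row-level lock on just the matching rows: other transactions can
-- still SELECT them, but their UPDATE/DELETE/SELECT FOR UPDATE of
-- those rows waits until this transaction commits or aborts.
SELECT * FROM table1 WHERE id = 42 FOR UPDATE;
COMMIT;
```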
[ { "msg_contents": "Hi,\n\n we have troubles with German umlauts (e.g.: äöü) using the Postgresql JDBC\n driver from the 7.1.2 distribution... already tried to debug our Java\n software but it seems that the database driver modifies the umlauts in any\n way - a debug before any INSERT or after a SELECT query shows that the\n umlaut \"ü\" for example gets lost on the way through the JDBC driver...\n\n So e.g. the attribute city='München' gets \"M\?nchen\" when testing the JDBC\n driver using a simple Java program.\n\n Any idea what happens?\n\n Best regards,\n Alex T.\n\n\n\n\n", "msg_date": "Tue, 4 Sep 2001 15:16:45 +0200 (CEST)", "msg_from": "\"Alexander Troppmann\" <talex@globalinxs.de>", "msg_from_op": true, "msg_subject": "Troubles using German Umlauts with JDBC" }, { "msg_contents": "Alexander,\n\nYou have to set the encoding when you make the connection.\n\nProperties props = new Properties();\nprops.put(\"user\",user);\nprops.put(\"password\",password);\nprops.put(\"charSet\",encoding);\nConnection con = DriverManager.getConnection(url,props);\nwhere encoding is the proper encoding for your database\n\nDave\nOn Tue, 2001-09-04 at 09:16, Alexander Troppmann wrote:\n> Hi,\n> \n> we have troubles with German umlauts (e.g.: äöü) using the Postgresql JDBC\n> driver from the 7.1.2 distribution... already tried to debug our Java\n> software but it seems that the database driver modifies the umlauts in any\n> way - a debug before any INSERT or after a SELECT query shows that the\n> umlaut \"ü\" for example gets lost on the way through the JDBC driver...\n> \n> So e.g. 
the attribute city='München' gets \"M\?nchen\" when testing the JDBC\n> driver using a simple Java program.\n> \n> Any idea what happens?\n> \n> Best regards,\n> Alex T.\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n\n", "msg_date": "04 Sep 2001 09:54:27 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Troubles using German Umlauts with JDBC" }, { "msg_contents": "[forwarding to pgsql-hackers and Bruce as Todo list maintainer,\nsee comment below]\n\n[insert with JDBC converts Latin-1 umlaut to ?]\nOn 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n>You have to set the encoding when you make the connection.\n>\n>Properties props = new Properties();\n>props.put(\"user\",user);\n>props.put(\"password\",password);\n>props.put(\"charSet\",encoding);\n>Connection con = DriverManager.getConnection(url,props);\n>where encoding is the proper encoding for your database\n\nFor completeness, I quote the answer Barry Lind gave yesterday. \n\n\"[the driver] asks the server what character set is being used\nfor the database. Unfortunatly the server only knows about\ncharacter sets if multibyte support is compiled in. If the\nserver is compiled without multibyte, then it always reports to\nthe client that the character set is SQL_ASCII (where SQL_ASCII\nis 7bit ascii). Thus if you don't have multibyte enabled on the\nserver you can't support 8bit characters through the jdbc\ndriver, unless you specifically tell the connection what\ncharacter set to use (i.e. override the default obtained from\nthe server).\"\n\nThis really is confusing and I think PostgreSQL should be able\nto support single byte encoding conversions without enabling\nmulti-byte. 
\n\nTo the very least there should be a --enable-encoding-conversion\nor something similar, even if it just enables the current\nmultibyte support.\n\nBruce, can this be put on the TODO list one way or the other?\nThis problem has appeared 4 times in two months or so on the\nJDBC list.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Tue, 04 Sep 2001 19:03:41 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": false, "msg_subject": "Re: [JDBC] Troubles using German Umlauts with JDBC" }, { "msg_contents": "Rene,\n\nI would like to add one additional comment. In current sources the jdbc \ndriver detects (through a hack) that the server doesn't have multibyte \nenabled and then ignores the SQL_ASCII return value and defaults to the \nJVM's character set instead of using SQL_ASCII.\n\nThe problem boils down to the fact that without multibyte enabled, the \nserver has know way of specifiying which 8bit character set is being \nused for a particular database. Thus a client like JDBC doesn't know \nwhat character set to use when converting to UNICODE. Thus the best we \ncan do in JDBC is use our best guess (JVM character set is probably the \nbest default), and allow the user to explicitly specify something else \nif necessary.\n\nthanks,\n--Barry\n\nRene Pijlman wrote:\n> [forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n> see comment below]\n> \n> [insert with JDBC converts Latin-1 umlaut to ?]\n> On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n> \n>>You have to set the encoding when you make the connection.\n>>\n>>Properties props = new Properties();\n>>props.put(\"user\",user);\n>>props.put(\"password\",password);\n>>props.put(\"charSet\",encoding);\n>>Connection con = DriverManager.getConnection(url,props);\n>>where encoding is the proper encoding for your database\n>>\n> \n> For completeness, I quote the answer Barry Lind gave yesterday. 
\n> \n> \"[the driver] asks the server what character set is being used\n> for the database. Unfortunatly the server only knows about\n> character sets if multibyte support is compiled in. If the\n> server is compiled without multibyte, then it always reports to\n> the client that the character set is SQL_ASCII (where SQL_ASCII\n> is 7bit ascii). Thus if you don't have multibyte enabled on the\n> server you can't support 8bit characters through the jdbc\n> driver, unless you specifically tell the connection what\n> character set to use (i.e. override the default obtained from\n> the server).\"\n> \n> This really is confusing and I think PostgreSQL should be able\n> to support single byte encoding conversions without enabling\n> multi-byte. \n> \n> To the very least there should be a --enable-encoding-conversion\n> or something similar, even if it just enables the current\n> multibyte support.\n> \n> Bruce, can this be put on the TODO list one way or the other?\n> This problem has appeared 4 times in two months or so on the\n> JDBC list.\n> \n> Regards,\n> René Pijlman <rene@lab.applinet.nl>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n> \n\n\n", "msg_date": "Tue, 04 Sep 2001 10:40:36 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [JDBC] Troubles using German Umlauts with JDBC" }, { "msg_contents": "I've added a new section \"Character encoding\" to\nhttp://lab.applinet.nl/postgresql-jdbc/, based on the\ninformation from Dave and Barry.\n\nI haven't seen a confirmation from pgsql-hackers or Bruce yet\nthat this issue will be added to the Todo list. I'm under the\nimpression that the backend developers don't see this as a\nproblem.\n\nRegards,\nRené Pijlman\n\nOn Tue, 04 Sep 2001 10:40:36 -0700, Barry Lind wrote:\n>I would like to add one additional comment. 
In current sources the jdbc \n>driver detects (through a hack) that the server doesn't have multibyte \n>enabled and then ignores the SQL_ASCII return value and defaults to the \n>JVM's character set instead of using SQL_ASCII.\n>\n>The problem boils down to the fact that without multibyte enabled, the \n>server has know way of specifiying which 8bit character set is being \n>used for a particular database. Thus a client like JDBC doesn't know \n>what character set to use when converting to UNICODE. Thus the best we \n>can do in JDBC is use our best guess (JVM character set is probably the \n>best default), and allow the user to explicitly specify something else \n>if necessary.\n>\n>thanks,\n>--Barry\n>\n>Rene Pijlman wrote:\n>> [forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n>> see comment below]\n>> \n>> [insert with JDBC converts Latin-1 umlaut to ?]\n>> On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n>> \n>>>You have to set the encoding when you make the connection.\n>>>\n>>>Properties props = new Properties();\n>>>props.put(\"user\",user);\n>>>props.put(\"password\",password);\n>>>props.put(\"charSet\",encoding);\n>>>Connection con = DriverManager.getConnection(url,props);\n>>>where encoding is the proper encoding for your database\n>>>\n>> \n>> For completeness, I quote the answer Barry Lind gave yesterday. \n>> \n>> \"[the driver] asks the server what character set is being used\n>> for the database. Unfortunatly the server only knows about\n>> character sets if multibyte support is compiled in. If the\n>> server is compiled without multibyte, then it always reports to\n>> the client that the character set is SQL_ASCII (where SQL_ASCII\n>> is 7bit ascii). Thus if you don't have multibyte enabled on the\n>> server you can't support 8bit characters through the jdbc\n>> driver, unless you specifically tell the connection what\n>> character set to use (i.e. 
override the default obtained from\n>> the server).\"\n>> \n>> This really is confusing and I think PostgreSQL should be able\n>> to support single byte encoding conversions without enabling\n>> multi-byte. \n>> \n>> To the very least there should be a --enable-encoding-conversion\n>> or something similar, even if it just enables the current\n>> multibyte support.\n>> \n>> Bruce, can this be put on the TODO list one way or the other?\n>> This problem has appeared 4 times in two months or so on the\n>> JDBC list.\n>> \n>> Regards,\n>> René Pijlman <rene@lab.applinet.nl>\n", "msg_date": "Sun, 09 Sep 2001 10:51:36 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Troubles using German Umlauts with JDBC" }, { "msg_contents": "\nI can add something if people agree there is an issue here.\n\n> I've added a new section \"Character encoding\" to\n> http://lab.applinet.nl/postgresql-jdbc/, based on the\n> information from Dave and Barry.\n> \n> I haven't seen a confirmation from pgsql-hackers or Bruce yet\n> that this issue will be added to the Todo list. I'm under the\n> impression that the backend developers don't see this as a\n> problem.\n> \n> Regards,\n> Ren? Pijlman\n> \n> On Tue, 04 Sep 2001 10:40:36 -0700, Barry Lind wrote:\n> >I would like to add one additional comment. In current sources the jdbc \n> >driver detects (through a hack) that the server doesn't have multibyte \n> >enabled and then ignores the SQL_ASCII return value and defaults to the \n> >JVM's character set instead of using SQL_ASCII.\n> >\n> >The problem boils down to the fact that without multibyte enabled, the \n> >server has know way of specifiying which 8bit character set is being \n> >used for a particular database. Thus a client like JDBC doesn't know \n> >what character set to use when converting to UNICODE. 
Thus the best we \n> >can do in JDBC is use our best guess (JVM character set is probably the \n> >best default), and allow the user to explicitly specify something else \n> >if necessary.\n> >\n> >thanks,\n> >--Barry\n> >\n> >Rene Pijlman wrote:\n> >> [forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n> >> see comment below]\n> >> \n> >> [insert with JDBC converts Latin-1 umlaut to ?]\n> >> On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n> >> \n> >>>You have to set the encoding when you make the connection.\n> >>>\n> >>>Properties props = new Properties();\n> >>>props.put(\"user\",user);\n> >>>props.put(\"password\",password);\n> >>>props.put(\"charSet\",encoding);\n> >>>Connection con = DriverManager.getConnection(url,props);\n> >>>where encoding is the proper encoding for your database\n> >>>\n> >> \n> >> For completeness, I quote the answer Barry Lind gave yesterday. \n> >> \n> >> \"[the driver] asks the server what character set is being used\n> >> for the database. Unfortunatly the server only knows about\n> >> character sets if multibyte support is compiled in. If the\n> >> server is compiled without multibyte, then it always reports to\n> >> the client that the character set is SQL_ASCII (where SQL_ASCII\n> >> is 7bit ascii). Thus if you don't have multibyte enabled on the\n> >> server you can't support 8bit characters through the jdbc\n> >> driver, unless you specifically tell the connection what\n> >> character set to use (i.e. override the default obtained from\n> >> the server).\"\n> >> \n> >> This really is confusing and I think PostgreSQL should be able\n> >> to support single byte encoding conversions without enabling\n> >> multi-byte. 
\n> >> \n> >> To the very least there should be a --enable-encoding-conversion\n> >> or something similar, even if it just enables the current\n> >> multibyte support.\n> >> \n> >> Bruce, can this be put on the TODO list one way or the other?\n> >> This problem has appeared 4 times in two months or so on the\n> >> JDBC list.\n> >> \n> >> Regards,\n> >> Ren? Pijlman <rene@lab.applinet.nl>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Sep 2001 10:24:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Troubles using German Umlauts with JDBC" }, { "msg_contents": "On Sun, 9 Sep 2001 10:24:32 -0400 (EDT), Bruce Momjian wrote:\n>I can add something if people agree there is an issue here.\n\nIMO the issue is twofold. Without multibyte compiled in: \n\n1) the server cannot tell the client which single byte character\nencoding is being used, so a client like JDBC cannot properly\nconvert to its native encoding\n\n2) it's not possible to create a database with a single byte\nencoding other than ASCII (see my posting\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1029462)\n\nI'm not sure to what extent these issues are related.\n\nAlso, client/server character conversion is coupled to multibyte\nsupport (see Peter's reply to my posting). This may be a\nlimitation for other clients, but I'm not sure about that.\n\nBasically, it seems that multibyte support is adding features\nthat are needed in single byte environments as well. 
Perhaps the\nproblem can be solved by documentation (recommending to enable\nmultibyte support in non-ASCII singlebyte environments), perhaps\nby an alias (--enable-character-encoding), perhaps the\nfunctionality needs to be split into a true multibyte part and a\ngeneric part. I don't know what's best, this probably depends on\nthe \"price\" of compiling in multibyte support.\n\nRegards,\nRené Pijlman\n\n>> I've added a new section \"Character encoding\" to\n>> http://lab.applinet.nl/postgresql-jdbc/, based on the\n>> information from Dave and Barry.\n>> \n>> I haven't seen a confirmation from pgsql-hackers or Bruce yet\n>> that this issue will be added to the Todo list. I'm under the\n>> impression that the backend developers don't see this as a\n>> problem.\n>> \n>> Regards,\n>> Ren? Pijlman\n>> \n>> On Tue, 04 Sep 2001 10:40:36 -0700, Barry Lind wrote:\n>> >I would like to add one additional comment. In current sources the jdbc \n>> >driver detects (through a hack) that the server doesn't have multibyte \n>> >enabled and then ignores the SQL_ASCII return value and defaults to the \n>> >JVM's character set instead of using SQL_ASCII.\n>> >\n>> >The problem boils down to the fact that without multibyte enabled, the \n>> >server has know way of specifiying which 8bit character set is being \n>> >used for a particular database. Thus a client like JDBC doesn't know \n>> >what character set to use when converting to UNICODE. 
Thus the best we \n>> >can do in JDBC is use our best guess (JVM character set is probably the \n>> >best default), and allow the user to explicitly specify something else \n>> >if necessary.\n>> >\n>> >thanks,\n>> >--Barry\n>> >\n>> >Rene Pijlman wrote:\n>> >> [forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n>> >> see comment below]\n>> >> \n>> >> [insert with JDBC converts Latin-1 umlaut to ?]\n>> >> On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n>> >> \n>> >>>You have to set the encoding when you make the connection.\n>> >>>\n>> >>>Properties props = new Properties();\n>> >>>props.put(\"user\",user);\n>> >>>props.put(\"password\",password);\n>> >>>props.put(\"charSet\",encoding);\n>> >>>Connection con = DriverManager.getConnection(url,props);\n>> >>>where encoding is the proper encoding for your database\n>> >>>\n>> >> \n>> >> For completeness, I quote the answer Barry Lind gave yesterday. \n>> >> \n>> >> \"[the driver] asks the server what character set is being used\n>> >> for the database. Unfortunatly the server only knows about\n>> >> character sets if multibyte support is compiled in. If the\n>> >> server is compiled without multibyte, then it always reports to\n>> >> the client that the character set is SQL_ASCII (where SQL_ASCII\n>> >> is 7bit ascii). Thus if you don't have multibyte enabled on the\n>> >> server you can't support 8bit characters through the jdbc\n>> >> driver, unless you specifically tell the connection what\n>> >> character set to use (i.e. override the default obtained from\n>> >> the server).\"\n>> >> \n>> >> This really is confusing and I think PostgreSQL should be able\n>> >> to support single byte encoding conversions without enabling\n>> >> multi-byte. 
\n>> >> \n>> >> To the very least there should be a --enable-encoding-conversion\n>> >> or something similar, even if it just enables the current\n>> >> multibyte support.\n>> >> \n>> >> Bruce, can this be put on the TODO list one way or the other?\n>> >> This problem has appeared 4 times in two months or so on the\n>> >> JDBC list.\n>> >> \n>> >> Regards,\n>> >> Ren? Pijlman <rene@lab.applinet.nl>\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: Have you searched our list archives?\n>> \n>> http://www.postgresql.org/search.mpl\n>> \n\n", "msg_date": "Sun, 09 Sep 2001 16:46:30 +0200", "msg_from": "Rene Pijlman <rene@lab.applinet.nl>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Troubles using German Umlauts with JDBC" }, { "msg_contents": "Rene,\n\nTwo comments on your writeup about the problem:\n\n1) Depending on version you will see different behavior:\n 7.0 - default client character set is used\n 7.1 - database character set is used (although it may be reported \nincorrectly as SQL_ASCII)\n 7.2 - database character set is used if multibyte, else use the \nclient character set.\n\nIn all versions it is possible to set the character set explicitly via \nthe charSet parameter.\n\n\n2) The charSet parameter (as can any parameter the driver expects) can \nalso be set in the connection URL. (i.e. \njdbc:postgresql://localhost/dbname?charSet=UTF-8&user=foo&password=bar \nshows passing the charSet, user and password in the URL)\n\nthanks,\n--Barry\n\n\nRene Pijlman wrote:\n> I've added a new section \"Character encoding\" to\n> http://lab.applinet.nl/postgresql-jdbc/, based on the\n> information from Dave and Barry.\n> \n> I haven't seen a confirmation from pgsql-hackers or Bruce yet\n> that this issue will be added to the Todo list. 
I'm under the\n> impression that the backend developers don't see this as a\n> problem.\n> \n> Regards,\n> René Pijlman\n> \n> On Tue, 04 Sep 2001 10:40:36 -0700, Barry Lind wrote:\n> \n>>I would like to add one additional comment. In current sources the jdbc \n>>driver detects (through a hack) that the server doesn't have multibyte \n>>enabled and then ignores the SQL_ASCII return value and defaults to the \n>>JVM's character set instead of using SQL_ASCII.\n>>\n>>The problem boils down to the fact that without multibyte enabled, the \n>>server has know way of specifiying which 8bit character set is being \n>>used for a particular database. Thus a client like JDBC doesn't know \n>>what character set to use when converting to UNICODE. Thus the best we \n>>can do in JDBC is use our best guess (JVM character set is probably the \n>>best default), and allow the user to explicitly specify something else \n>>if necessary.\n>>\n>>thanks,\n>>--Barry\n>>\n>>Rene Pijlman wrote:\n>>\n>>>[forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n>>>see comment below]\n>>>\n>>>[insert with JDBC converts Latin-1 umlaut to ?]\n>>>On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n>>>\n>>>\n>>>>You have to set the encoding when you make the connection.\n>>>>\n>>>>Properties props = new Properties();\n>>>>props.put(\"user\",user);\n>>>>props.put(\"password\",password);\n>>>>props.put(\"charSet\",encoding);\n>>>>Connection con = DriverManager.getConnection(url,props);\n>>>>where encoding is the proper encoding for your database\n>>>>\n>>>>\n>>>For completeness, I quote the answer Barry Lind gave yesterday. \n>>>\n>>>\"[the driver] asks the server what character set is being used\n>>>for the database. Unfortunatly the server only knows about\n>>>character sets if multibyte support is compiled in. If the\n>>>server is compiled without multibyte, then it always reports to\n>>>the client that the character set is SQL_ASCII (where SQL_ASCII\n>>>is 7bit ascii). 
Thus if you don't have multibyte enabled on the\n>>>server you can't support 8bit characters through the jdbc\n>>>driver, unless you specifically tell the connection what\n>>>character set to use (i.e. override the default obtained from\n>>>the server).\"\n>>>\n>>>This really is confusing and I think PostgreSQL should be able\n>>>to support single byte encoding conversions without enabling\n>>>multi-byte. \n>>>\n>>>To the very least there should be a --enable-encoding-conversion\n>>>or something similar, even if it just enables the current\n>>>multibyte support.\n>>>\n>>>Bruce, can this be put on the TODO list one way or the other?\n>>>This problem has appeared 4 times in two months or so on the\n>>>JDBC list.\n>>>\n>>>Regards,\n>>>René Pijlman <rene@lab.applinet.nl>\n>>>\n", "msg_date": "Sun, 09 Sep 2001 12:51:10 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Troubles using German Umlauts with JDBC" }, { "msg_contents": "Bruce,\n\nI think the TODO item should be:\n\nAbility to set character set for a database without multibyte enabled\n\nCurrently createdb -E (and the corresponding create database sql \ncommand) only works if multibyte is enabled. However it is useful to \nknow which single byte character set is being used even when multibyte \nisn't enabled. Currently there is no way to specify which single byte \ncharacter set a database is using (unless you compile with multibyte).\n\nthanks,\n--Barry\n\n\nBruce Momjian wrote:\n> I can add something if people agree there is an issue here.\n> \n> \n>>I've added a new section \"Character encoding\" to\n>>http://lab.applinet.nl/postgresql-jdbc/, based on the\n>>information from Dave and Barry.\n>>\n>>I haven't seen a confirmation from pgsql-hackers or Bruce yet\n>>that this issue will be added to the Todo list. I'm under the\n>>impression that the backend developers don't see this as a\n>>problem.\n>>\n>>Regards,\n>>Ren? 
Pijlman\n>>\n>>On Tue, 04 Sep 2001 10:40:36 -0700, Barry Lind wrote:\n>>\n>>>I would like to add one additional comment. In current sources the jdbc \n>>>driver detects (through a hack) that the server doesn't have multibyte \n>>>enabled and then ignores the SQL_ASCII return value and defaults to the \n>>>JVM's character set instead of using SQL_ASCII.\n>>>\n>>>The problem boils down to the fact that without multibyte enabled, the \n>>>server has know way of specifiying which 8bit character set is being \n>>>used for a particular database. Thus a client like JDBC doesn't know \n>>>what character set to use when converting to UNICODE. Thus the best we \n>>>can do in JDBC is use our best guess (JVM character set is probably the \n>>>best default), and allow the user to explicitly specify something else \n>>>if necessary.\n>>>\n>>>thanks,\n>>>--Barry\n>>>\n>>>Rene Pijlman wrote:\n>>>\n>>>>[forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n>>>>see comment below]\n>>>>\n>>>>[insert with JDBC converts Latin-1 umlaut to ?]\n>>>>On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n>>>>\n>>>>\n>>>>>You have to set the encoding when you make the connection.\n>>>>>\n>>>>>Properties props = new Properties();\n>>>>>props.put(\"user\",user);\n>>>>>props.put(\"password\",password);\n>>>>>props.put(\"charSet\",encoding);\n>>>>>Connection con = DriverManager.getConnection(url,props);\n>>>>>where encoding is the proper encoding for your database\n>>>>>\n>>>>>\n>>>>For completeness, I quote the answer Barry Lind gave yesterday. \n>>>>\n>>>>\"[the driver] asks the server what character set is being used\n>>>>for the database. Unfortunatly the server only knows about\n>>>>character sets if multibyte support is compiled in. If the\n>>>>server is compiled without multibyte, then it always reports to\n>>>>the client that the character set is SQL_ASCII (where SQL_ASCII\n>>>>is 7bit ascii). 
Thus if you don't have multibyte enabled on the\n>>>>server you can't support 8bit characters through the jdbc\n>>>>driver, unless you specifically tell the connection what\n>>>>character set to use (i.e. override the default obtained from\n>>>>the server).\"\n>>>>\n>>>>This really is confusing and I think PostgreSQL should be able\n>>>>to support single byte encoding conversions without enabling\n>>>>multi-byte. \n>>>>\n>>>>To the very least there should be a --enable-encoding-conversion\n>>>>or something similar, even if it just enables the current\n>>>>multibyte support.\n>>>>\n>>>>Bruce, can this be put on the TODO list one way or the other?\n>>>>This problem has appeared 4 times in two months or so on the\n>>>>JDBC list.\n>>>>\n>>>>Regards,\n>>>>Ren? Pijlman <rene@lab.applinet.nl>\n>>>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: Have you searched our list archives?\n>>\n>>http://www.postgresql.org/search.mpl\n>>\n>>\n> \n\n\n", "msg_date": "Sun, 09 Sep 2001 13:03:31 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Troubles using German Umlauts with JDBC" }, { "msg_contents": "\n\nAdded to TODO.\n\n\n> Bruce,\n> \n> I think the TODO item should be:\n> \n> Ability to set character set for a database without multibyte enabled\n> \n> Currently createdb -E (and the corresponding create database sql \n> command) only works if multibyte is enabled. However it is useful to \n> know which single byte character set is being used even when multibyte \n> isn't enabled. 
Currently there is no way to specify which single byte \n> character set a database is using (unless you compile with multibyte).\n> \n> thanks,\n> --Barry\n> \n> \n> Bruce Momjian wrote:\n> > I can add something if people agree there is an issue here.\n> > \n> > \n> >>I've added a new section \"Character encoding\" to\n> >>http://lab.applinet.nl/postgresql-jdbc/, based on the\n> >>information from Dave and Barry.\n> >>\n> >>I haven't seen a confirmation from pgsql-hackers or Bruce yet\n> >>that this issue will be added to the Todo list. I'm under the\n> >>impression that the backend developers don't see this as a\n> >>problem.\n> >>\n> >>Regards,\n> >>Ren? Pijlman\n> >>\n> >>On Tue, 04 Sep 2001 10:40:36 -0700, Barry Lind wrote:\n> >>\n> >>>I would like to add one additional comment. In current sources the jdbc \n> >>>driver detects (through a hack) that the server doesn't have multibyte \n> >>>enabled and then ignores the SQL_ASCII return value and defaults to the \n> >>>JVM's character set instead of using SQL_ASCII.\n> >>>\n> >>>The problem boils down to the fact that without multibyte enabled, the \n> >>>server has know way of specifiying which 8bit character set is being \n> >>>used for a particular database. Thus a client like JDBC doesn't know \n> >>>what character set to use when converting to UNICODE. 
Thus the best we \n> >>>can do in JDBC is use our best guess (JVM character set is probably the \n> >>>best default), and allow the user to explicitly specify something else \n> >>>if necessary.\n> >>>\n> >>>thanks,\n> >>>--Barry\n> >>>\n> >>>Rene Pijlman wrote:\n> >>>\n> >>>>[forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n> >>>>see comment below]\n> >>>>\n> >>>>[insert with JDBC converts Latin-1 umlaut to ?]\n> >>>>On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n> >>>>\n> >>>>\n> >>>>>You have to set the encoding when you make the connection.\n> >>>>>\n> >>>>>Properties props = new Properties();\n> >>>>>props.put(\"user\",user);\n> >>>>>props.put(\"password\",password);\n> >>>>>props.put(\"charSet\",encoding);\n> >>>>>Connection con = DriverManager.getConnection(url,props);\n> >>>>>where encoding is the proper encoding for your database\n> >>>>>\n> >>>>>\n> >>>>For completeness, I quote the answer Barry Lind gave yesterday. \n> >>>>\n> >>>>\"[the driver] asks the server what character set is being used\n> >>>>for the database. Unfortunatly the server only knows about\n> >>>>character sets if multibyte support is compiled in. If the\n> >>>>server is compiled without multibyte, then it always reports to\n> >>>>the client that the character set is SQL_ASCII (where SQL_ASCII\n> >>>>is 7bit ascii). Thus if you don't have multibyte enabled on the\n> >>>>server you can't support 8bit characters through the jdbc\n> >>>>driver, unless you specifically tell the connection what\n> >>>>character set to use (i.e. override the default obtained from\n> >>>>the server).\"\n> >>>>\n> >>>>This really is confusing and I think PostgreSQL should be able\n> >>>>to support single byte encoding conversions without enabling\n> >>>>multi-byte. 
\n> >>>>\n> >>>>To the very least there should be a --enable-encoding-conversion\n> >>>>or something similar, even if it just enables the current\n> >>>>multibyte support.\n> >>>>\n> >>>>Bruce, can this be put on the TODO list one way or the other?\n> >>>>This problem has appeared 4 times in two months or so on the\n> >>>>JDBC list.\n> >>>>\n> >>>>Regards,\n> >>>>Ren? Pijlman <rene@lab.applinet.nl>\n> >>>>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 6: Have you searched our list archives?\n> >>\n> >>http://www.postgresql.org/search.mpl\n> >>\n> >>\n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Sep 2001 20:14:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Troubles using German Umlauts with JDBC" }, { "msg_contents": "\nIs this a jdbc issue or a general backend issue?\n\n\n> Bruce,\n> \n> I think the TODO item should be:\n> \n> Ability to set character set for a database without multibyte enabled\n> \n> Currently createdb -E (and the corresponding create database sql \n> command) only works if multibyte is enabled. However it is useful to \n> know which single byte character set is being used even when multibyte \n> isn't enabled. 
Currently there is no way to specify which single byte \n> character set a database is using (unless you compile with multibyte).\n> \n> thanks,\n> --Barry\n> \n> \n> Bruce Momjian wrote:\n> > I can add something if people agree there is an issue here.\n> > \n> > \n> >>I've added a new section \"Character encoding\" to\n> >>http://lab.applinet.nl/postgresql-jdbc/, based on the\n> >>information from Dave and Barry.\n> >>\n> >>I haven't seen a confirmation from pgsql-hackers or Bruce yet\n> >>that this issue will be added to the Todo list. I'm under the\n> >>impression that the backend developers don't see this as a\n> >>problem.\n> >>\n> >>Regards,\n> >>Ren? Pijlman\n> >>\n> >>On Tue, 04 Sep 2001 10:40:36 -0700, Barry Lind wrote:\n> >>\n> >>>I would like to add one additional comment. In current sources the jdbc \n> >>>driver detects (through a hack) that the server doesn't have multibyte \n> >>>enabled and then ignores the SQL_ASCII return value and defaults to the \n> >>>JVM's character set instead of using SQL_ASCII.\n> >>>\n> >>>The problem boils down to the fact that without multibyte enabled, the \n> >>>server has know way of specifiying which 8bit character set is being \n> >>>used for a particular database. Thus a client like JDBC doesn't know \n> >>>what character set to use when converting to UNICODE. 
Thus the best we \n> >>>can do in JDBC is use our best guess (JVM character set is probably the \n> >>>best default), and allow the user to explicitly specify something else \n> >>>if necessary.\n> >>>\n> >>>thanks,\n> >>>--Barry\n> >>>\n> >>>Rene Pijlman wrote:\n> >>>\n> >>>>[forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n> >>>>see comment below]\n> >>>>\n> >>>>[insert with JDBC converts Latin-1 umlaut to ?]\n> >>>>On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n> >>>>\n> >>>>\n> >>>>>You have to set the encoding when you make the connection.\n> >>>>>\n> >>>>>Properties props = new Properties();\n> >>>>>props.put(\"user\",user);\n> >>>>>props.put(\"password\",password);\n> >>>>>props.put(\"charSet\",encoding);\n> >>>>>Connection con = DriverManager.getConnection(url,props);\n> >>>>>where encoding is the proper encoding for your database\n> >>>>>\n> >>>>>\n> >>>>For completeness, I quote the answer Barry Lind gave yesterday. \n> >>>>\n> >>>>\"[the driver] asks the server what character set is being used\n> >>>>for the database. Unfortunatly the server only knows about\n> >>>>character sets if multibyte support is compiled in. If the\n> >>>>server is compiled without multibyte, then it always reports to\n> >>>>the client that the character set is SQL_ASCII (where SQL_ASCII\n> >>>>is 7bit ascii). Thus if you don't have multibyte enabled on the\n> >>>>server you can't support 8bit characters through the jdbc\n> >>>>driver, unless you specifically tell the connection what\n> >>>>character set to use (i.e. override the default obtained from\n> >>>>the server).\"\n> >>>>\n> >>>>This really is confusing and I think PostgreSQL should be able\n> >>>>to support single byte encoding conversions without enabling\n> >>>>multi-byte. 
\n> >>>>\n> >>>>To the very least there should be a --enable-encoding-conversion\n> >>>>or something similar, even if it just enables the current\n> >>>>multibyte support.\n> >>>>\n> >>>>Bruce, can this be put on the TODO list one way or the other?\n> >>>>This problem has appeared 4 times in two months or so on the\n> >>>>JDBC list.\n> >>>>\n> >>>>Regards,\n> >>>>Ren? Pijlman <rene@lab.applinet.nl>\n> >>>>\n> >>---------------------------(end of broadcast)---------------------------\n> >>TIP 6: Have you searched our list archives?\n> >>\n> >>http://www.postgresql.org/search.mpl\n> >>\n> >>\n> > \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Sep 2001 20:15:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Troubles using German Umlauts with JDBC" }, { "msg_contents": "General backend issue.\n\n--Barry\n\nBruce Momjian wrote:\n> Is this a jdbc issue or a general backend issue?\n> \n> \n> \n>>Bruce,\n>>\n>>I think the TODO item should be:\n>>\n>>Ability to set character set for a database without multibyte enabled\n>>\n>>Currently createdb -E (and the corresponding create database sql \n>>command) only works if multibyte is enabled. However it is useful to \n>>know which single byte character set is being used even when multibyte \n>>isn't enabled. 
Currently there is no way to specify which single byte \n>>character set a database is using (unless you compile with multibyte).\n>>\n>>thanks,\n>>--Barry\n>>\n>>\n>>Bruce Momjian wrote:\n>>\n>>>I can add something if people agree there is an issue here.\n>>>\n>>>\n>>>\n>>>>I've added a new section \"Character encoding\" to\n>>>>http://lab.applinet.nl/postgresql-jdbc/, based on the\n>>>>information from Dave and Barry.\n>>>>\n>>>>I haven't seen a confirmation from pgsql-hackers or Bruce yet\n>>>>that this issue will be added to the Todo list. I'm under the\n>>>>impression that the backend developers don't see this as a\n>>>>problem.\n>>>>\n>>>>Regards,\n>>>>Ren? Pijlman\n>>>>\n>>>>On Tue, 04 Sep 2001 10:40:36 -0700, Barry Lind wrote:\n>>>>\n>>>>\n>>>>>I would like to add one additional comment. In current sources the jdbc \n>>>>>driver detects (through a hack) that the server doesn't have multibyte \n>>>>>enabled and then ignores the SQL_ASCII return value and defaults to the \n>>>>>JVM's character set instead of using SQL_ASCII.\n>>>>>\n>>>>>The problem boils down to the fact that without multibyte enabled, the \n>>>>>server has know way of specifiying which 8bit character set is being \n>>>>>used for a particular database. Thus a client like JDBC doesn't know \n>>>>>what character set to use when converting to UNICODE. 
Thus the best we \n>>>>>can do in JDBC is use our best guess (JVM character set is probably the \n>>>>>best default), and allow the user to explicitly specify something else \n>>>>>if necessary.\n>>>>>\n>>>>>thanks,\n>>>>>--Barry\n>>>>>\n>>>>>Rene Pijlman wrote:\n>>>>>\n>>>>>\n>>>>>>[forwarding to pgsql-hackers and Bruce as Todo list maintainer,\n>>>>>>see comment below]\n>>>>>>\n>>>>>>[insert with JDBC converts Latin-1 umlaut to ?]\n>>>>>>On 04 Sep 2001 09:54:27 -0400, Dave Cramer wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>>You have to set the encoding when you make the connection.\n>>>>>>>\n>>>>>>>Properties props = new Properties();\n>>>>>>>props.put(\"user\",user);\n>>>>>>>props.put(\"password\",password);\n>>>>>>>props.put(\"charSet\",encoding);\n>>>>>>>Connection con = DriverManager.getConnection(url,props);\n>>>>>>>where encoding is the proper encoding for your database\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>For completeness, I quote the answer Barry Lind gave yesterday. \n>>>>>>\n>>>>>>\"[the driver] asks the server what character set is being used\n>>>>>>for the database. Unfortunatly the server only knows about\n>>>>>>character sets if multibyte support is compiled in. If the\n>>>>>>server is compiled without multibyte, then it always reports to\n>>>>>>the client that the character set is SQL_ASCII (where SQL_ASCII\n>>>>>>is 7bit ascii). Thus if you don't have multibyte enabled on the\n>>>>>>server you can't support 8bit characters through the jdbc\n>>>>>>driver, unless you specifically tell the connection what\n>>>>>>character set to use (i.e. override the default obtained from\n>>>>>>the server).\"\n>>>>>>\n>>>>>>This really is confusing and I think PostgreSQL should be able\n>>>>>>to support single byte encoding conversions without enabling\n>>>>>>multi-byte. 
\n>>>>>>\n>>>>>>To the very least there should be a --enable-encoding-conversion\n>>>>>>or something similar, even if it just enables the current\n>>>>>>multibyte support.\n>>>>>>\n>>>>>>Bruce, can this be put on the TODO list one way or the other?\n>>>>>>This problem has appeared 4 times in two months or so on the\n>>>>>>JDBC list.\n>>>>>>\n>>>>>>Regards,\n>>>>>>Ren? Pijlman <rene@lab.applinet.nl>\n>>>>>>\n>>>>---------------------------(end of broadcast)---------------------------\n>>>>TIP 6: Have you searched our list archives?\n>>>>\n>>>>http://www.postgresql.org/search.mpl\n>>>>\n>>>>\n>>>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>>\n>>\n> \n\n\n", "msg_date": "Sun, 09 Sep 2001 18:50:15 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Troubles using German Umlauts with JDBC" } ]
[ { "msg_contents": "\tWe're running Postgres 7.0.3.2. We're running into a referential\nintegrity violation that seems to crop up randomly, and only when stress\ntesting the system for a day or so.\n\n\tWe've created some stress test code to fill the tables with about\n500 nodes, then delete them from the top and let the cascade-delete delete\nall the children. (the test code is a script for our own scripting\nlanguage). Each insert and delete has a trigger that simply rearranges each\nnode in the table, like a linked list. That trigger code is only a few lines\nand doesn't look to be the problem, since the problem only crops up randomly\nafter several hours of stressing.\n\n\tThe repeated adding and deleting works fine for quite a few hours\nwith the stress test program, but then randomly it'll error out and give a\nreferential integrity violation in one of the tables. In the stress code\nwe'll do a delete from system where system_index = XX and expect it to\ncascade delete, but a table, like the bts table, will give something like\n\"ERROR: bts_fk_constraint referential integrity violation - key referenced\nfrom bts not found in system\"\n\n\tAre there any known bugs in 7.0.3.2 that might cause something like\nthis to crop up randomly?\n\tAny ideas or things to check would be greatly appreciated.\n\n\n\nHere are the 6 tables. It's a parent-child-grandchild relationship. 
The\ntable below each table, simply references back to the previous one as the\nforeign key, and builds the foreign key from the foreign key of its parent.\n\n\ncreate sequence omc_index_seq;\ncreate TABLE omc (\n omc_index int4 PRIMARY KEY DEFAULT NEXTVAL('omc_index_seq'),\n serial_number varchar(32),\n operator_string varchar(255) DEFAULT 'Value not specified.',\n debug_level int4 DEFAULT 1,\n software_version varchar(32),\n hardware_version varchar(32),\n software_failure_reason int2\n);\n\n\ncreate TABLE system (\n system_index int4,\n display_name varchar(32),\n operator_string varchar(255),\n id varchar(32),\n next_system_index int4,\n \n parent_omc_index int4 NOT NULL,\n \n CONSTRAINT system_fk_constraint FOREIGN KEY (parent_omc_index)\n REFERENCES omc (omc_index) ON DELETE CASCADE,\n CONSTRAINT system_pkey_constraint PRIMARY KEY (parent_omc_index,\n system_index),\n CONSTRAINT system_display_name_unique UNIQUE (display_name)\n); \n\n\n\ncreate TABLE bts (\n bts_index int4,\n display_name varchar(32),\n operator_string varchar(255),\n id varchar(32),\n location varchar(255),\n next_bts_index int4,\n \n parent_omc_index int4 NOT NULL,\n parent_system_index int4 NOT NULL,\n \n CONSTRAINT bts_fk_constraint FOREIGN KEY (parent_omc_index,\n parent_system_index)\n REFERENCES system (parent_omc_index, system_index) ON DELETE\nCASCADE,\n CONSTRAINT bts_pkey_constraint PRIMARY KEY (parent_omc_index,\n parent_system_index,\n bts_index),\n CONSTRAINT bts_display_name_unique UNIQUE (display_name,\n parent_system_index)\n); \n\n\ncreate TABLE cell_area (\n cell_area_index int4,\n display_name varchar(32),\n operator_string varchar(255),\n cluster_orientation varchar(255),\n id varchar(32),\n chan_1_link_channel_num int4,\n chan_2_link_channel_num int4,\n chan_1_coverage_channel_num int4,\n chan_2_coverage_channel_num int4,\n next_cell_area_index int4,\n \n parent_omc_index int4 NOT NULL,\n parent_system_index int4 NOT NULL,\n parent_bts_index int4 NOT NULL,\n \n 
CONSTRAINT cell_area_fk_constraint FOREIGN KEY (parent_omc_index,\n parent_system_index,\n parent_bts_index)\n REFERENCES bts (parent_omc_index, parent_system_index,\n bts_index) ON DELETE CASCADE,\n CONSTRAINT cell_area_pkey_constraint PRIMARY KEY (parent_omc_index,\n\n parent_system_index,\n parent_bts_index,\n cell_area_index),\n CONSTRAINT cell_area_display_name_unique UNIQUE (display_name,\n parent_system_index,\n parent_bts_index)\n);\n \n\ncreate TABLE unit (\n unit_index int4,\n display_name varchar(32),\n operator_string varchar(255),\n ip_address varchar(15) UNIQUE NOT NULL,\n phone_number varchar(32),\n type char(1),\n next_unit_index int4,\n \n parent_omc_index int4 NOT NULL,\n parent_system_index int4 NOT NULL,\n parent_bts_index int4 NOT NULL,\n parent_cell_area_index int4 NOT NULL,\n \n CONSTRAINT unit_fk_constraint FOREIGN KEY (parent_omc_index,\n parent_system_index,\n parent_bts_index,\n parent_cell_area_index)\n REFERENCES cell_area (parent_omc_index, parent_system_index,\n parent_bts_index, cell_area_index)\n ON DELETE CASCADE,\n CONSTRAINT unit_pkey_constraint PRIMARY KEY (parent_omc_index,\n parent_system_index,\n parent_bts_index,\n parent_cell_area_index,\n unit_index),\n CONSTRAINT unit_display_name_unique UNIQUE (display_name,\n parent_system_index,\n parent_bts_index,\n parent_cell_area_index),\n FOREIGN KEY (type) REFERENCES spice_types (type) MATCH FULL\n);\n\n\n\n", "msg_date": "Tue, 4 Sep 2001 12:05:28 -0700 ", "msg_from": "Mike Cianflone <mcianflone@littlefeet-inc.com>", "msg_from_op": true, "msg_subject": "Referential Integrity Stress Problem" } ]
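The cascade chain in the schema above can be exercised in miniature. The sketch below is not the original stress test (that ran against PostgreSQL 7.0.3 with custom triggers); it only demonstrates the ON DELETE CASCADE behaviour the tables rely on, using SQLite through Python's sqlite3 module so it is self-contained. The table and column names are trimmed-down stand-ins for the omc/system pair.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK checks off by default

con.execute("CREATE TABLE omc (omc_index INTEGER PRIMARY KEY)")
con.execute("""
    CREATE TABLE system (
        system_index     INTEGER,
        parent_omc_index INTEGER NOT NULL
            REFERENCES omc (omc_index) ON DELETE CASCADE,
        PRIMARY KEY (parent_omc_index, system_index)
    )""")

con.execute("INSERT INTO omc VALUES (1)")
con.executemany("INSERT INTO system VALUES (?, 1)", [(i,) for i in range(5)])

# Deleting the parent row should take every child row with it.
con.execute("DELETE FROM omc WHERE omc_index = 1")
print(con.execute("SELECT count(*) FROM system").fetchone()[0])  # 0
```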
[ { "msg_contents": "I seem to be unable to access the CVS server:\n\ncvs [update aborted]: connect to cvs.postgresql.org:2401 failed:\n\nMarc, do you know anthing about this?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 05 Sep 2001 18:28:42 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "CVS access" }, { "msg_contents": "Thus spake Tatsuo Ishii\n> I seem to be unable to access the CVS server:\n> \n> cvs [update aborted]: connect to cvs.postgresql.org:2401 failed:\n> \n> Marc, do you know anthing about this?\n\nAnd while you are at it can you get my access working again please. I\nknow that you are busy but I have things to change in PyGreSQL. Putting\nthat into the PostgreSQL CVS tree was supposed to make things easier\nall around, not impossible. I haven't been able to make any changes to\nPyGreSQL for at least a month now.\n\nPlease and thank you.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 6 Sep 2001 08:18:48 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: CVS access" }, { "msg_contents": "\nokay, is this still a problem, and is this with the new password that I\nissued/email'd about a couple of weeks back?\n\nOn Wed, 5 Sep 2001, Tatsuo Ishii wrote:\n\n> I seem to be unable to access the CVS server:\n>\n> cvs [update aborted]: connect to cvs.postgresql.org:2401 failed:\n>\n> Marc, do you know anthing about this?\n> --\n> Tatsuo Ishii\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Thu, 6 Sep 2001 12:25:19 -0400 (EDT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS access" }, { "msg_contents": "Thus spake Marc G. Fournier\n> okay, is this still a problem, and is this with the new password that I\n> issued/email'd about a couple of weeks back?\n\nAre you also addressing my problem connecting?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 7 Sep 2001 11:26:21 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: CVS access" }, { "msg_contents": "\nwith the new passowwrd, your problem should be resolved too ... is it not?\n\nOn Fri, 7 Sep 2001, D'Arcy J.M. Cain wrote:\n\n> Thus spake Marc G. Fournier\n> > okay, is this still a problem, and is this with the new password that I\n> > issued/email'd about a couple of weeks back?\n>\n> Are you also addressing my problem connecting?\n>\n> --\n> D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n>\n\n", "msg_date": "Fri, 7 Sep 2001 14:37:04 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS access" }, { "msg_contents": "Thus spake Marc G. Fournier\n> with the new passowwrd, your problem should be resolved too ... is it not?\n\nWhat new password? Certainly the one I have on the mail machine doesn't work.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 7 Sep 2001 18:33:19 -0400 (EDT)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: CVS access" }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n> Thus spake Marc G. 
Fournier\n> > with the new passowwrd, your problem should be resolved too ... is it not?\n>\n> What new password? Certainly the one I have on the mail machine doesn't work.\n\n If you don't tend to succeed at first, skydiving isn't for\n you (for me it ain't either).\n\n It's not an ssh-login password. It's a :pserver: CVS\n password.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Sat, 8 Sep 2001 03:15:02 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: CVS access" } ]
[ { "msg_contents": "Hello, \n \nI have quite strange problem. Postgres refuses to start. \nThis is 7.1.2. Actually this is Aug 15 snapshot of REL7_1_STABLE branch. \nThis is what should be 7.1.2. Any ideas how to repair data? \nThis is quite urgent. The system is live, and now stopped. \n \nSep 5 08:42:30 mx postgres[5341]: [1] DEBUG: database system shutdown was interrupted at 2001-09-05 08:26:25 EDT \nSep 5 08:42:30 mx postgres[5341]: [2] DEBUG: CheckPoint record at (23, 2431142676) \nSep 5 08:42:30 mx postgres[5341]: [3] DEBUG: Redo record at (23, 2431142676); Undo record at (0, 0); Shutdown TRUE \nSep 5 08:42:30 mx postgres[5341]: [4] DEBUG: NextTransactionId: 13932307; NextOid: 9127687 \nSep 5 08:42:30 mx postgres[5341]: [5] DEBUG: database system was not properly shut down; automatic recovery in progress... \nSep 5 08:42:30 mx postgres[5341]: [6] DEBUG: redo starts at (23, 2431142740) \nSep 5 08:42:30 mx postgres[5341]: [7] DEBUG: ReadRecord: record with zero len at (23, 2432317444) \nSep 5 08:42:30 mx postgres[5341]: [8] DEBUG: redo done at (23, 2432317408) \nSep 5 08:42:30 mx postgres[5341]: [9] FATAL 2: XLogFlush: request is not satisfied \nSep 5 08:44:42 mx postgres[5441]: [1] DEBUG: database system shutdown was interrupted at 2001-09-05 08:42:30 EDT \nSep 5 08:44:42 mx postgres[5441]: [2] DEBUG: CheckPoint record at (23, 2431142676) \n \n-- \nSincerely Yours, \nDenis Perchine \n \n---------------------------------- \nE-Mail: dyp@perchine.com \nHomePage: http://www.perchine.com/dyp/ \nFidoNet: 2:5000/120.5 \n---------------------------------- \n \n", "msg_date": "Wed, 5 Sep 2001 22:47:40 +0700", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": true, "msg_subject": "Problems starting up postgres" }, { "msg_contents": "Denis Perchine <dyp@perchine.com> writes:\n> Sep 5 08:42:30 mx postgres[5341]: [9] FATAL 2: XLogFlush: request is not satisfied \n\nHmm. I think you must be running into some kind of logic bug (boundary\ncondition maybe?) 
in XLogFlush. Could you add some debugging printouts,\nalong the line of\n\n*** src/backend/access/transam/xlog.c~\tWed Sep 5 12:18:07 2001\n--- src/backend/access/transam/xlog.c\tWed Sep 5 12:20:17 2001\n***************\n*** 1266,1272 ****\n \t\t\tXLogWrite(WriteRqst);\n \t\t\tS_UNLOCK(&(XLogCtl->logwrt_lck));\n \t\t\tif (XLByteLT(LogwrtResult.Flush, record))\n! \t\t\t\telog(STOP, \"XLogFlush: request is not satisfied\");\n \t\t\tbreak;\n \t\t}\n \t\tS_LOCK_SLEEP(&(XLogCtl->logwrt_lck), spins++, XLOG_LOCK_TIMEOUT);\n--- 1266,1274 ----\n \t\t\tXLogWrite(WriteRqst);\n \t\t\tS_UNLOCK(&(XLogCtl->logwrt_lck));\n \t\t\tif (XLByteLT(LogwrtResult.Flush, record))\n! \t\t\t\telog(STOP, \"XLogFlush: request (%u, %u) is not satisfied --- flushed to (%u, %u)\",\n! \t\t\t\t\t record.xlogid, record.xrecoff,\n! \t\t\t\t\t LogwrtResult.Flush.xlogid, LogwrtResult.Flush.xrecoff);\n \t\t\tbreak;\n \t\t}\n \t\tS_LOCK_SLEEP(&(XLogCtl->logwrt_lck), spins++, XLOG_LOCK_TIMEOUT);\n\n\n(this patch is for current sources, line numbers are probably different\nin 7.1.*)\n\nBTW, how did you get into this state --- did you have a system crash?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Sep 2001 12:23:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems starting up postgres " }, { "msg_contents": "On Wednesday 05 September 2001 23:23, you wrote: \n> Denis Perchine <dyp@perchine.com> writes: \n \n> Hmm. I think you must be running into some kind of logic bug (boundary \n> condition maybe?) in XLogFlush. 
Could you add some debugging printouts, \n> along the line of \n \nSep 6 02:09:28 mx postgres[13468]: [2] DEBUG: CheckPoint record at (23, 2431142676) \nSep 6 02:09:28 mx postgres[13468]: [3] DEBUG: Redo record at (23, 2431142676); Undo record at (0, 0); Shutdown TRUE \nSep 6 02:09:28 mx postgres[13468]: [4] DEBUG: NextTransactionId: 13932307; NextOid: 9127687 \nSep 6 02:09:28 mx postgres[13468]: [5] DEBUG: database system was not properly shut down; automatic recovery in progress... \nSep 6 02:09:29 mx postgres[13468]: [6] DEBUG: redo starts at (23, 2431142740) \nSep 6 02:09:30 mx postgres[13468]: [7] DEBUG: ReadRecord: record with zero len at (23, 2432317444) \nSep 6 02:09:30 mx postgres[13468]: [8] DEBUG: redo done at (23, 2432317408) \nSep 6 02:09:30 mx postgres[13468]: [9] FATAL 2: XLogFlush: request(1494286336, 786458) is not satisfied -- \nflushed to (23, 2432317444) \n \n> BTW, how did you get into this state --- did you have a system crash? \n \nYes. I was forced to fsck. \n \n-- \nSincerely Yours, \nDenis Perchine \n \n---------------------------------- \nE-Mail: dyp@perchine.com \nHomePage: http://www.perchine.com/dyp/ \nFidoNet: 2:5000/120.5 \n---------------------------------- \n \n", "msg_date": "Thu, 6 Sep 2001 16:11:05 +0700", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": true, "msg_subject": "Re: Problems starting up postgres" }, { "msg_contents": "Denis Perchine <dyp@perchine.com> writes:\n> Sep 6 02:09:30 mx postgres[13468]: [9] FATAL 2: XLogFlush: request(1494286336, 786458) is not satisfied -- \n> flushed to (23, 2432317444) \n\nYeek. Looks like you have a page somewhere in the database with a bogus\nLSN value (xlog pointer) ... and, most likely, other corruption as well.\n \n>> BTW, how did you get into this state --- did you have a system crash? \n \n> Yes. I was forced to fsck. \n\nOkay. As a temporary recovery measure, I'd suggest reducing that\nparticular elog from STOP to DEBUG level. 
That will let you start up\nand run the database. You'll need to look through your tables and try\nto figure out which one(s) have lost data. It might be interesting to\ntry to figure out just which page has the bad LSN value --- that might\ngive us a clue why the WAL did not provide protection against this\nfailure. Unfortunately XLogFlush doesn't have any idea who its caller\nis, so the only way I can think of to check that directly is to set a\nbreakpoint at this error report and look at the call stack.\n\nVadim, what do you think of reducing this elog from STOP to a notice\non a permanent basis? ISTM we saw cases during 7.1 beta where this\nSTOP prevented people from recovering, so I'm thinking it does more\nharm than good to overall system reliability.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 09:49:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems starting up postgres " }, { "msg_contents": "On Thursday 06 September 2001 20:49, Tom Lane wrote: \n> Denis Perchine <dyp@perchine.com> writes: \n> Okay. As a temporary recovery measure, I'd suggest reducing that \n> particular elog from STOP to DEBUG level. That will let you start up \n> and run the database. You'll need to look through your tables and try \n> to figure out which one(s) have lost data. It might be interesting to \n> try to figure out just which page has the bad LSN value --- that might \n> give us a clue why the WAL did not provide protection against this \n> failure. Unfortunately XLogFlush doesn't have any idea who its caller \n> is, so the only way I can think of to check that directly is to set a \n> breakpoint at this error report and look at the call stack. \n \nOK. I will do this tomorrow. I have no space, and I forced to tgz, untgz \ndatabase. 
\n \n-- \nSincerely Yours, \nDenis Perchine \n \n---------------------------------- \nE-Mail: dyp@perchine.com \nHomePage: http://www.perchine.com/dyp/ \nFidoNet: 2:5000/120.5 \n---------------------------------- \n \n\n", "msg_date": "Thu, 6 Sep 2001 21:09:07 +0700", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": true, "msg_subject": "Re: Problems starting up postgres" } ]
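A reading aid that is not part of the exchange above: WAL positions are (xlogid, xrecoff) pairs that xlog.c compares lexicographically (XLByteLT). Plugging the two values from the failure message into that comparison shows why the STOP branch fires, and makes the "bogus LSN" diagnosis visible: the requested position's log id is absurdly far beyond log file 23, the log actually written.

```python
def xl_byte_lt(a, b):
    # XLByteLT compares (xlogid, xrecoff) pairs field by field;
    # Python tuple comparison does exactly that.
    return a < b

flushed = (23, 2432317444)       # "flushed to (23, 2432317444)"
request = (1494286336, 786458)   # "request (1494286336, 786458)"

# The flush point is behind the requested record, so XLogFlush reports
# "request is not satisfied" and, at STOP level, halts startup.
print(xl_byte_lt(flushed, request))  # True
```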
[ { "msg_contents": "Hello,\n\nReading the TO DO list, I found the following item:\n\"Allow cursors to be DECLAREd/OPENed/CLOSEed outside transactions\"\n\nI badly need this functionality to interface postgres in my company\ndatabase abstraction layer. Do you have any idea of when it should be\navailable?\nIf you think it can be of reasonable complexity, give me some hints and\nI can take some time to do it (about one week).\n\nBest regards,\n\nLudovic\n\n\n", "msg_date": "Wed, 5 Sep 2001 19:31:10 +0200", "msg_from": "\"Ludovic Pénet\" <lpenet@cubicsoft.com>", "msg_from_op": true, "msg_subject": "Trans-transactions cursors" }, { "msg_contents": "Hi,\n\nI am currently building a small web based app, with postgres as back end. I \nfound that in ecpg you can declare and use a cursor without declaring a \ntransaction. In several places I have used cursors for selects only. That's \nthe only way I found to make ecpg fetch multiple rows.\n\nAnd in ecpg I have to give an explicit open cursor statement to make fetching \npossible.\n\nI am using 7.1.2.\n\n HTH\n \n Shridhar \n\nOn Wednesday 05 September 2001 23:01, Ludovic Pénet wrote:\n> \"Allow cursors to be DECLAREd/OPENed/CLOSEed outside transactions\"\n> I badly need this functionality to interface postgres in my company\n> database abstraction layer. Do you have any idea of when it should be\n> available?\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 10 Sep 2001 22:49:05 +0530", "msg_from": "Chamanya <chamanya@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Trans-transactions cursors" }, { "msg_contents": "Ludovic Pénet wrote:\n> Hello,\n>\n> Reading the TO DO list, I found the following item:\n> \"Allow cursors to be DECLAREd/OPENed/CLOSEed outside transactions\"\n>\n> I badly need this functionality to interface postgres in my company\n> database abstraction layer. 
Do you have any idea of when it should be\n> available?\n> If you think it can be of reasonable complexity, give me some hints and\n> I can take some time to do it (about one week).\n\n That now depends on your programming skills, how familiar you\n are with the Postgres code and how you define one week - or\n Wieck since it's basically pronounced the same :-) - more\n like \"veek\" - but who cares?\n\n Anyway, the basic problem on cursors spanning multiple\n transactions would be, that currently a cursor in Postgres is\n an executor engine on hold. That means, a completely parsed,\n optimized and prepared execution plan that's opened and ready\n to return result rows on a call to ExecutorRun(). That\n requires that each of the scan nodes inside the execution\n plan (the executor nodes that read from a table and return\n heap tuples according to the passed down scankey) has a valid\n scan snapshot, which in turn requires an existing\n transaction.\n\n Thus, when opening a cursor that should span multiple\n transactions, your backend would have to deal with two\n transactions, one for what you're doing currently, the other\n one for what you do with cursors. And here you're entering\n the area of big trouble, because Postgres has MVCC, so each\n transaction has its own snapshot view of the database. So a\n row you've seen in the Xact of the cursor might have been\n deleted and reinserted multiple times by other transactions\n until you actually decide to deal with it. Is THAT what you\n WANT to do? If so, go ahead, make a proposal and implement\n the FEATURE. I'd call it a BUG because it follows the\n definition of most M$ features, but that's another\n discussion.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Tue, 11 Sep 2001 02:50:19 -0400 (EDT)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Trans-transactions cursors" }, { "msg_contents": "Chamanya wrote:\n\n> I am currently building a small web based app, with postgres as back end. I\n> found that in ecpg you can declare and use cursor without declaring a\n> transaction. In several places I have used cursors for selects only. That's\n> the only way I found to make ecpg fetch multiple rows.\n>\n> And in ecpg I have to give an explicit open cursor statement to make fetching\n> possible.\n\nThat's simply because ecpg starts a new transaction on any SQL statement if no\ntransaction is active.\nI consider this (autocommit on) one of the worst traps you can lay for yourself.\n\nChristof\n\n\n", "msg_date": "Tue, 18 Sep 2001 14:05:03 +0200", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: Trans-transactions cursors" } ]
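A deliberately simplified illustration, not from the thread: the snapshot trouble Jan describes can be sketched with a toy visibility rule (real PostgreSQL visibility also consults transaction commit status, hint bits, and more, so every name below is a stand-in). A cursor pinned to an old snapshot keeps returning the original row even after later transactions have deleted and reinserted it.

```python
def visible(row, snapshot_xid):
    # Toy rule: a row version is visible if its inserting transaction is
    # within the snapshot and its deleting transaction is not.
    inserted = row["xmin"] <= snapshot_xid
    deleted = row["xmax"] is not None and row["xmax"] <= snapshot_xid
    return inserted and not deleted

rows = [
    {"key": 1, "xmin": 10, "xmax": 50},    # original version, deleted by xact 50
    {"key": 1, "xmin": 50, "xmax": None},  # reinserted by xact 50
]

cursor_snapshot = 20  # snapshot of the long-gone transaction that opened the cursor
fresh_snapshot = 60   # snapshot a brand-new transaction would take

print([r["xmin"] for r in rows if visible(r, cursor_snapshot)])  # [10]
print([r["xmin"] for r in rows if visible(r, fresh_snapshot)])   # [50]
```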
[ { "msg_contents": "\tIs there a problem with running vacuum, or vacuum analyze in the\nmiddle of making transactions? If there happens to be a transaction running\nat the time I do a vacuum analyze, the transaction has problems and the\ntrigger doesn't get completed all the way, and the transaction fails.\n\nThanks for any pointers.\n\n", "msg_date": "Wed, 5 Sep 2001 17:29:58 -0700 ", "msg_from": "Mike Cianflone <mcianflone@littlefeet-inc.com>", "msg_from_op": true, "msg_subject": "Is there a problem running vacuum in the middle of a transaction?" }, { "msg_contents": "Mike Cianflone <mcianflone@littlefeet-inc.com> writes:\n\n> \tIs there a problem with running vacuum, or vacuum analyze in the\n> middle of making transactions? If there happens to be a transaction running\n> at the time I do a vacuum analyze, the transaction has problems and the\n> trigger doesn't get completed all the way, and the transaction fails.\n\nHmmm--AFAIK, VACUUM is supposed to grab locks on the tables it\nprocesses, which will block until all open transactions against that\ntable are finished. So either VACUUM or your transactions will have\nto wait, but they shouldn't interfere with each other. \n\nHow about posting some log messages from when the problem occurs?\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n", "msg_date": "05 Sep 2001 23:28:10 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Is there a problem running vacuum in the middle of a transaction?" }, { "msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> Hmmm--AFAIK, VACUUM is supposed to grab locks on the tables it\n> processes, which will block until all open transactions against that\n> table are finished. So either VACUUM or your transactions will have\n> to wait, but they shouldn't interfere with each other. \n\nWell, it's uglier than that. 
Normally, read and write locks are not\nmutually exclusive, so if you have a client that is holding an open\ntransaction and not doing anything, it doesn't matter if it read or\nwrote a table earlier in the transaction. Other clients can proceed\nto read or write that table despite the existence of a lock owned by\nthe open transaction.\n\nBut VACUUM wants an exclusive lock on the table, so it will block\nuntil all clients holding read or write locks commit. Once VACUUM has\nblocked, subsequent read or write requests also block, because they\nqueue up behind the VACUUM exclusive-lock request. (We could allow\nthem to go in front, but that would create the likelihood that VACUUM\ncould *never* get its lock, in the face of a steady stream of read \nor write lockers.)\n\nUpshot: a client holding an open transaction, plus another client trying\nto do VACUUM, can clog up the database for everyone else.\n\nRestarting the whole database is severe overreaction; aborting the\ntransaction of either of the clients at fault would be sufficient to\nclear the logjam.\n\n7.2 will be less prone to this problem, since the default form of VACUUM\nin 7.2 will not require exclusive lock. But you'd still see it if you\nhave some clients that want to acquire exclusive table locks for some\nreason. Bottom line is that dawdling around with an open transaction is\nbad form if you have a heavily concurrent application. Once you've done\nsomething, you should commit or roll back within a reasonably short\ninterval.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 01:03:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is there a problem running vacuum in the middle of a transaction?" 
}, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Doug McNaught <doug@wireboard.com> writes:\n> > Hmmm--AFAIK, VACUUM is supposed to grab locks on the tables it\n> > processes, which will block until all open transactions against that\n> > table are finished. So either VACUUM or your transactions will have\n> > to wait, but they shouldn't interfere with each other. \n> \n> Upshot: a client holding an open transaction, plus another client trying\n> to do VACUUM, can clog up the database for everyone else.\n\nThanks for the clarification. But the original poster's problem, that \nVACUUM caused his transactions to fail, theoretically shouldn't\nhappen--right?\n\n-Doug\n-- \nFree Dmitry Sklyarov! \nhttp://www.freesklyarov.org/ \n\nWe will return to our regularly scheduled signature shortly.\n", "msg_date": "06 Sep 2001 08:39:27 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Is there a problem running vacuum in the middle of a transaction?" }, { "msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> Upshot: a client holding an open transaction, plus another client trying\n>> to do VACUUM, can clog up the database for everyone else.\n\n> Thanks for the clarification. But the original poster's problem, that \n> VACUUM caused his transactions to fail, theoretically shouldn't\n> happen--right?\n\nIt wasn't clear to me that anything was actually failing, as opposed to\njust getting blocked for a long time. Mike?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 11:13:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is there a problem running vacuum in the middle of a transaction?" } ]
[ { "msg_contents": "Hi,\n\nI have a serious problem with hanging backends.\nThis is wat I get from ps ->\npostgres 23507 22794 0 13:33 ? 00:00:09\n/usr/local/pgsql/bin/postgres 1.1.3.123 postgres bts01 SELECT waiting\n\n\nThis blocks all the other clients. (ODBC call failed)\n\nThe situation\n\n- PostgreSQL 6.4.2 on i686-pc-linux-gnu, compiled by gcc 2.7.2.\n(SlackWare)\n- 30 Access 2000 clients connecting through 6.40.00.06 ODBC driver \n\n- ODBC connections error's\n\n -> CONN ERROR: func=SQLGetConnectOption, \t\t\n\tdesc='fOption=30002', errnum=205, errmsg='Unknown connect \n\toption (Get)'\n\n------------------------------------------------------------\n \t henv=144902176, conn=144863832, status=0, \n\tnum_stmts=16\n so ck=144902128, stmts=144902048, lobj_type=-999\n\n----------------SocketInfo-------------------------------\n socket=-1, reverse=0, errornumber=0, errormsg='(NULL)'\n buffer_in=144870304, buffer_out=144874408\n buffer_filled_in=0, buffer_filled_out=0, buffer_read_in=0\n\n CONN ERROR: func=SQLSetConnectOption, \t\t\t\n\tdesc='fOption=30002, vParam=143791720', errnum=205, \n\terrmsg='Unknown connect option (Set)'\n\n------------------------------------------------------------\n \thenv=144902176, conn=144863832, status=0, num_stmts=16\n \tsock=144902128, stmts=144902048, lobj_type=-999\n\n----------------SocketInfo-------------------------------\n socket=-1, reverse=0, errornumber=0, errormsg='(NULL)'\n buffer_in=144870304, buffer_out=144874408\n buffer_filled_in=0, buffer_filled_out=0, \t\t\t\n\tbuffer_read_in=0\n\n- In de server-log I found that the access-application wants to\nconnect with the user Admin then we get \n\tFATAL 1 : user Admin not in pg_shadow\n\n\tBut in the connection properties the user postgres is set.\n \tThis (connection with user Admin) seems to have started with \n\tthe conversion 97 -> 2000\n\n\tI've added the user, this error was gone, but some of the jobs\n\n\tare admin owned other postgres\n\n\t- I had problems before 
with the MaxBackendId\n\tI've rebuild with MaxBackendId set to 128 and started the \t\n\tpostmaster with -B 256 parameter.\n\tThis was because of the message \"cache invalidation \t\t\n\tinitialization failed\"\n \tSo this was solved, or is\n\n- In the loggings the SELECT waiting jobs hang on ProcessQuery and\nthe other InitPostgres fail.\n\n- vacuum verbose analyze doen't seem to report errors\n\n- debug higher then 4 is not usable (2 gig in 10 minutes)\n\n\nIn which direction do I have to search?\n Can there be data corruption without being reported?\n Can it be solved with a pg_dump file, destroydb, createdb, \t\n\tpsql -e file ? (Not done already -> production machine)\n Is our version to old -> should an upgrade help?\n .... ? \n\nThanks,\n\nJeroen Ryckeboer\n\n\n\n\n\n", "msg_date": "Thu, 06 Sep 2001 05:22:07 GMT", "msg_from": "johan27@advalvas.be", "msg_from_op": true, "msg_subject": "I have a serious problem with hanging backends.:SELECT waiting" } ]
[ { "msg_contents": "Anyone know how inherited tables are shown on an ERD... both the table\nand the connector (if any)?\n\nWhat I am looking for is how to draw this in the diagram.\n\nPeter\n\n-- \n+---------------------------\n| Data Architect\n| your data; how you want it\n| http://www.codebydesign.com\n+---------------------------\n", "msg_date": "Wed, 05 Sep 2001 23:49:21 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "Inherited Table" }, { "msg_contents": "Peter Harvey wrote:\n> \n> Anyone know how inherited tables are shown on an ERD... both the table\n> and the connector (if any)?\n> \n> What I am looking for is how to draw this in the diagram.\n\nAn old S-Designor does it so:", "msg_date": "Thu, 06 Sep 2001 12:28:43 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Inherited Table" }, { "msg_contents": "Hannu Krosing wrote:\n> \n> Peter Harvey wrote:\n> >\n> > Anyone know how inherited tables are shown on an ERD... both the table\n> > and the connector (if any)?\n> >\n> > What I am looking for is how to draw this in the diagram.\n> \n> An old S-Designor does it so:\n> \n\nYes. But pgsql does not support multiple inheritance (from what I can\ntell) so the extra symbol along the ref line (\"can be\"/\"must be\") seems\nsilly. I am thinking that I should draw the new table slightly diff in\nsome way just as is done with Views. However; some purests would freak.\n<Hmmm>\n\nThanks\n\nPeter\n", "msg_date": "Thu, 06 Sep 2001 09:08:47 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "Re: Inherited Table" }, { "msg_contents": "Peter Harvey wrote:\n >Yes. 
But pgsql does not support multiple inheritance (from what I can\n >tell)...\n\nIn fact it does:\n\n create table child (col1 text) inherits (parent1, parent2);\n\nIdentically named and typed columns in the parents are merged.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Behold, I stand at the door, and knock; if any man \n hear my voice, and open the door, I will come in to \n him, and will sup with him, and he with me.\" \n Revelation 3:20 \n\n\n", "msg_date": "Thu, 06 Sep 2001 21:43:20 +0100", "msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: Inherited Table " }, { "msg_contents": "If I try to get the columns from pg_attribute using the oid of a child\ntable created with INHERIT I get its columns AND all of its inherited\ncolumns.\n\nHow do I just get the columns added by the child table?\n\nI figure I could check each column to see if they also exist in\npg_attribute under a parent table but I figure there must be an\neasier/faster way? Another pg_* table perhaps? 
Besides; I am not certian\nthis would work if there is a common column between a child and one of\nits parents (if that is even allowed)?\n\nAs usual, any help would be greatly appreciated.\n\nPeter\n\nBTW: I checked pg_dump source and did not see an answer.\n\n-- \n+---------------------------\n| Data Architect\n| your data; how you want it\n| http://www.codebydesign.com\n+---------------------------\n", "msg_date": "Sun, 09 Sep 2001 08:08:56 -0700", "msg_from": "Peter Harvey <pharvey@codebydesign.com>", "msg_from_op": true, "msg_subject": "INHERIT" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> Peter Harvey wrote:\n> >If I try to get the columns from pg_attribute using the oid of a child\n> >table created with INHERIT I get its columns AND all of its inherited\n> >columns.\n> >\n> >How do I just get the columns added by the child table?\n> >\n> >I figure I could check each column to see if they also exist in\n> >pg_attribute under a parent table but I figure there must be an\n> >easier/faster way? Another pg_* table perhaps? 
Besides; I am not certian\n> >this would work if there is a common column between a child and one of\n> >its parents (if that is even allowed)?\n\nhannu=# \\d parent \n Table \"parent\"\n Attribute | Type | Modifier \n-----------+---------+----------\n parid | integer | not null\nIndex: parent_pkey\n\nhannu=# create table badchild (parid text) inherits (parent);\nNOTICE: CREATE TABLE: merging attribute \"parid\" with inherited\ndefinition\nERROR: CREATE TABLE: attribute \"parid\" type conflict (int4 and text)\nhannu=# \n\nAnd anyway, in the current state I would advise you not to put too much \nhope in postgreSQL's OO features, especially inheritance ;)\n\n-------------\nHannu\n", "msg_date": "Sun, 09 Sep 2001 23:23:57 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: INHERIT" }, { "msg_contents": "Peter Harvey wrote:\n >If I try to get the columns from pg_attribute using the oid of a child\n >table created with INHERIT I get its columns AND all of its inherited\n >columns.\n >\n >How do I just get the columns added by the child table?\n >\n >I figure I could check each column to see if they also exist in\n >pg_attribute under a parent table but I figure there must be an\n >easier/faster way? Another pg_* table perhaps? 
Besides; I am not certian\n >this would work if there is a common column between a child and one of\n >its parents (if that is even allowed)?\n >\n >As usual, any help would be greatly appreciated.\n \n\nSELECT a.attname\n FROM pg_attribute AS a,\n pg_class AS c,\n pg_inherits AS i\n WHERE a.attrelid = c.oid AND\n i.inhrelid = c.oid AND\n c.relname = 'child_table'\n AND a.attname NOT IN (\n SELECT a.attname\n FROM pg_attribute AS a,\n pg_class AS c,\n pg_inherits AS i\n WHERE i.inhrelid = c.oid AND\n a.attrelid = i.inhparent AND\n c.relname = 'child_table'\n )\n\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Submit yourselves therefore to God. Resist the devil, \n and he will flee from you.\" James 4:7 \n\n\n", "msg_date": "Sun, 09 Sep 2001 20:16:00 +0100", "msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: INHERIT " } ]
[ { "msg_contents": "Hi,\n\nUpgrading from 7.0.2 to 7.1.3 (great job, btw!), brought us\nsome problems. Watch the following:\n\n| versuch=# create table chkntfy (\n| versuch(# NUMMER integer default 0 not null,\n| versuch(# TEXT varchar(255) default '?' not null\n| versuch(# );\n| CREATE\n| versuch=# \n| versuch=# CREATE RULE ru_u_chkntfy AS ON UPDATE TO chkntfy DO NOTIFY CHKNTFY;\n| CREATE\n| versuch=# insert into chkntfy (nummer,text) values ('1','eins');\n| INSERT 110522 1\n| versuch=# update chkntfy set nummer=10 where nummer = 1;\n| ERROR: Conditional NOTIFY is not implemented\n| versuch=# \n\nMatthew Copeland noted the same in pgsql-admin and was told to use triggers\ninstead of rules. Well, it's a little nuisance, since we don't have statement\ntriggers (you'd get a notify for each row modified, which in our case can\nbe a killer.)\n\nSomehow the notify seems to take up the `where' qualifier of the query\ntriggering the rule (you don't get the error message if you use an unqualified\nquery).\n\nIs this considered a bug? Is a fix in sight?\n\nRegards\n-- tomas\n", "msg_date": "Thu, 6 Sep 2001 15:15:26 +0200", "msg_from": "tomas@fabula.de", "msg_from_op": true, "msg_subject": "Conditional NOTIFY is not implemented" }, { "msg_contents": "tomas@fabula.de writes:\n> versuch=# CREATE RULE ru_u_chkntfy AS ON UPDATE TO chkntfy DO NOTIFY CHKNTFY;\n> CREATE\n> versuch=# update chkntfy set nummer=10 where nummer = 1;\n> ERROR: Conditional NOTIFY is not implemented\n\n> Somehow the notify seems to take up the `where' qualifier of the query\n> triggering the rule (you don't get the error message if you use an\n> unqualified query).\n\n> Is this considered a bug? Is a fix in sight?\n\nBefore 7.1, the system simply failed to account for the condition that\nshould be applied to the notify --- the notify ended up being fired all\nthe time, even if it shouldn't have been. 
In this case, the notify\nshould only occur if there are rows in chkntfy with nummer = 1 --- but\npre-7.1, it'd occur regardless. (We were rather fortunate to avoid a\ncrash, actually, since the code would attach a condition to a NOTIFY\nquerytree that should never have had one ... but then ignore it.)\n\nPresent behavior is to error out if the rewriter tries to attach a\nnonempty condition to a NOTIFY query.\n\nIt'd be a simple code change to revert to the pre-7.1 behavior (notify\nfires unconditionally), but actually making it work *correctly* is a\nlot harder. NOTIFYs don't normally have any plan associated with them\nat all, so there's no way to test a query condition.\n\nSince we've seen several complaints about the new behavior, whereas\nno one ever complained about excess NOTIFYs pre-7.1, perhaps the best\nsolution in the short run is to revert to the old behavior. We could\njust document that NOTIFYs in rules are fired unconditionally.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2001 15:16:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Conditional NOTIFY is not implemented " }, { "msg_contents": "On Thu, Sep 06, 2001 at 03:16:26PM -0400, Tom Lane wrote:\n> tomas@fabula.de writes:\n> > versuch=# CREATE RULE ru_u_chkntfy AS ON UPDATE TO chkntfy DO NOTIFY CHKNTFY;\n> > CREATE\n> > versuch=# update chkntfy set nummer=10 where nummer = 1;\n> > ERROR: Conditional NOTIFY is not implemented\n> \n> > Somehow the notify seems to take up the `where' qualifier of the query\n> > triggering the rule (you don't get the error message if you use an\n> > unqualified query).\n> \n> > Is this considered a bug? Is a fix in sight?\n> \n> Before 7.1, the system simply failed to account for the condition that\n> should be applied to the notify --- the notify ended up being fired all\n> the time, even if it shouldn't have been. 
In this case, the notify\n> should only occur if there are rows in chkntfy with nummer = 1 --- but\n> pre-7.1, it'd occur regardless. (We were rather fortunate to avoid a\n> crash, actually, since the code would attach a condition to a NOTIFY\n> querytree that should never have had one ... but then ignore it.)\n> \n> Present behavior is to error out if the rewriter tries to attach a\n> nonempty condition to a NOTIFY query.\n> \n> It'd be a simple code change to revert to the pre-7.1 behavior (notify\n> fires unconditionally), but actually making it work *correctly* is a\n> lot harder. NOTIFYs don't normally have any plan associated with them\n> at all, so there's no way to test a query condition.\n> \n> Since we've seen several complaints about the new behavior, whereas\n> no one ever complained about excess NOTIFYs pre-7.1, perhaps the best\n> solution in the short run is to revert to the old behavior. We could\n> just document that NOTIFYs in rules are fired unconditionally.\n> \n> Comments?\n\nThanks for the clear explanation. I see now. Well -- I think either behavior\nis a little strange, so I'd go for the one you think best for the moment\nand stick with it, putting on a big red warning sign ;-)\n\nMy pattern of use for ``CREATE RULE... NOTIFY...'' was, up to now, to get\na notice when anything changed on a table and then go look what happened;\na `poor man's statement level trigger' if you wish. Thus, the old behavior\ndidn't bother me that much. 
I don't know how others are using it.\n\nCome to think of it, the CREATE RULE ``triggers'' on statements anyway --\nit looks at the parsed statement and is independent of the actual content\nof the database: so the old behaviour seems a bit more natural to me:\n\n``Look: someone has called an UPDATE on this table: I don't know whether\n it is going to hit any records, but...''\n\nthe CREATE RULE acts then as a kind of `qualifier barrier' and therefore the\nNOTIFY doesn't see it.\n\nWhat do you think?\n\nThanks again for your great work\n\nCheers\n-- tomas\n", "msg_date": "Fri, 7 Sep 2001 06:10:01 +0200", "msg_from": "tomas@fabula.de", "msg_from_op": true, "msg_subject": "Re: Conditional NOTIFY is not implemented" }, { "msg_contents": "tomas@fabula.de writes:\n> My pattern of use for ``CREATE RULE... NOTIFY...'' was, up to now, to get\n> a notice when anything changed on a table and then go look what happened;\n> a `poor man's statement level trigger' if you wish. Thus, the old behavior\n> didn't bother me that much. I don't know how others are using it.\n\nYeah, that is the normal and recommended usage pattern for NOTIFY, so\ngetting a NOTIFY when nothing actually happened is fairly harmless.\n(Undoubtedly that's why no one complained before.)\n\nChanging the rewriter to error out when it couldn't really Do The Right\nThing seemed like a good idea at the time, but now it seems clear that\nthis didn't do anything to enhance the usefulness of the system. Unless\nsomeone objects, I'll change it back for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2001 00:30:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Conditional NOTIFY is not implemented " }, { "msg_contents": "On Fri, Sep 07, 2001 at 12:30:44AM -0400, Tom Lane wrote:\n> tomas@fabula.de writes:\n> > My pattern of use for ``CREATE RULE... 
NOTIFY...'' was [...]\n\n> Yeah, that is the normal and recommended usage pattern for NOTIFY, so\n> getting a NOTIFY when nothing actually happened is fairly harmless.\n> (Undoubtedly that's why no one complained before.)\n> \n> Changing the rewriter to error out when it couldn't really Do The Right\n> Thing seemed like a good idea at the time, but now it seems clear that\n> this didn't do anything to enhance the usefulness of the system. Unless\n> someone objects, I'll change it back for 7.2.\n> \n> \t\t\tregards, tom lane\n\nI won't object ;-)\n\nThank you again\n-- tomas\n", "msg_date": "Fri, 7 Sep 2001 16:33:33 +0200", "msg_from": "tomas@fabula.de", "msg_from_op": true, "msg_subject": "Re: Conditional NOTIFY is not implemented" }, { "msg_contents": "\n\nOn Fri, 7 Sep 2001 tomas@fabula.de wrote:\n\n> On Fri, Sep 07, 2001 at 12:30:44AM -0400, Tom Lane wrote:\n> > tomas@fabula.de writes:\n> > > My pattern of use for ``CREATE RULE... NOTIFY...'' was [...]\n>\n> > Yeah, that is the normal and recommended usage pattern for NOTIFY, so\n> > getting a NOTIFY when nothing actually happened is fairly harmless.\n> > (Undoubtedly that's why no one complained before.)\n> >\n> > Changing the rewriter to error out when it couldn't really Do The Right\n> > Thing seemed like a good idea at the time, but now it seems clear that\n> > this didn't do anything to enhance the usefulness of the system. Unless\n> > someone objects, I'll change it back for 7.2.\n> >\n> > \t\t\tregards, tom lane\n>\n> I won't object ;-)\n>\n> Thank you again\n> -- tomas\n\nI wouldn't really mind that either. My employer wouldn't allow me to\nchange the current set of RULE/NOTIFY's to triggers so that we could\nupgrade to 7.1.3 from 7.0.3, so having it act the same way in 7.2 as it\ndid in 7.0.3 would be nice for me. :)\n\nMatthew M. 
Copeland\n\n", "msg_date": "Fri, 7 Sep 2001 09:40:53 -0500 (CDT)", "msg_from": "<matthew.copeland@honeywell.com>", "msg_from_op": false, "msg_subject": "Re: Conditional NOTIFY is not implemented" }, { "msg_contents": "I said:\n> Changing the rewriter to error out when it couldn't really Do The Right\n> Thing seemed like a good idea at the time, but now it seems clear that\n> this didn't do anything to enhance the usefulness of the system. Unless\n> someone objects, I'll change it back for 7.2.\n\nNot having seen any objections, I've committed the change. If you need\na fix in place before 7.2, it's really a trivial change: just replace\nthe elog calls (there are two) by \"return\". See attached patch against\ncurrent sources.\n\n\t\t\tregards, tom lane\n\n\n*** /home/postgres/pgsql/src/backend/rewrite/rewriteManip.c.orig\tWed Apr 18 16:42:55 2001\n--- /home/postgres/pgsql/src/backend/rewrite/rewriteManip.c\tFri Sep 7 16:52:31 2001\n***************\n*** 592,606 ****\n \n \tif (parsetree->commandType == CMD_UTILITY)\n \t{\n- \n \t\t/*\n! \t\t * Noplace to put the qual on a utility statement.\n \t\t *\n! \t\t * For now, we expect utility stmt to be a NOTIFY, so give a specific\n! \t\t * error message for that case.\n \t\t */\n \t\tif (parsetree->utilityStmt && IsA(parsetree->utilityStmt, NotifyStmt))\n! \t\t\telog(ERROR, \"Conditional NOTIFY is not implemented\");\n \t\telse\n \t\t\telog(ERROR, \"Conditional utility statements are not implemented\");\n \t}\n--- 592,612 ----\n \n \tif (parsetree->commandType == CMD_UTILITY)\n \t{\n \t\t/*\n! \t\t * There's noplace to put the qual on a utility statement.\n! \t\t *\n! \t\t * If it's a NOTIFY, silently ignore the qual; this means that the\n! \t\t * NOTIFY will execute, whether or not there are any qualifying rows.\n! \t\t * While clearly wrong, this is much more useful than refusing to\n! \t\t * execute the rule at all, and extra NOTIFY events are harmless for\n! \t\t * typical uses of NOTIFY.\n \t\t *\n! 
\t\t * If it isn't a NOTIFY, error out, since unconditional execution\n! \t\t * of other utility stmts is unlikely to be wanted. (This case is\n! \t\t * not currently allowed anyway, but keep the test for safety.)\n \t\t */\n \t\tif (parsetree->utilityStmt && IsA(parsetree->utilityStmt, NotifyStmt))\n! \t\t\treturn;\n \t\telse\n \t\t\telog(ERROR, \"Conditional utility statements are not implemented\");\n \t}\n***************\n*** 634,648 ****\n \n \tif (parsetree->commandType == CMD_UTILITY)\n \t{\n- \n \t\t/*\n! \t\t * Noplace to put the qual on a utility statement.\n \t\t *\n! \t\t * For now, we expect utility stmt to be a NOTIFY, so give a specific\n! \t\t * error message for that case.\n \t\t */\n \t\tif (parsetree->utilityStmt && IsA(parsetree->utilityStmt, NotifyStmt))\n! \t\t\telog(ERROR, \"Conditional NOTIFY is not implemented\");\n \t\telse\n \t\t\telog(ERROR, \"Conditional utility statements are not implemented\");\n \t}\n--- 640,652 ----\n \n \tif (parsetree->commandType == CMD_UTILITY)\n \t{\n \t\t/*\n! \t\t * There's noplace to put the qual on a utility statement.\n \t\t *\n! \t\t * See comments in AddQual for motivation.\n \t\t */\n \t\tif (parsetree->utilityStmt && IsA(parsetree->utilityStmt, NotifyStmt))\n! \t\t\treturn;\n \t\telse\n \t\t\telog(ERROR, \"Conditional utility statements are not implemented\");\n \t}\n", "msg_date": "Fri, 07 Sep 2001 16:58:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Conditional NOTIFY is not implemented " } ]
[ { "msg_contents": "Ack, what would be the reasoning for \"moving it into the postgres cvs tree\"?\nDo you mean including DBD::Pg with the postgres distribution? How would this\naffect users of, say, CPAN and whatnot?\n\n--\nalex j. avriette\nperl hacker.\na_avriette@acs.org\n$dbh -> do('unhose');\n\n", "msg_date": "Thu, 6 Sep 2001 11:43:40 -0400 ", "msg_from": "Alex Avriette <a_avriette@acs.org>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] DBD::Pg errstr method doesn't return full" }, { "msg_contents": "> Ack, what would be the reasoning for \"moving it into the postgres cvs tree\"?\n> Do you mean including DBD::Pg with the postgres distribution? How would this\n> affect users of, say, CPAN and whatnot?\n\nWe would still upload to CPAN for any changes. It just allows us to\nship it with the PostgreSQL distibution and lets us accept patches and\napply them directly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Sep 2001 11:53:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] DBD::Pg errstr method doesn't return full error" } ]
[ { "msg_contents": "The src/test/regress/README file is gone, but still referred to by the\nINSTALL file. We need some place to point the user for information when\nthey run the regression test.\n\nUnless someone has a better idea, I would put the 'make check' after 'make\ninstall' in the order of instructions, so that they have the HTML docs\ninstalled when they run the tests, so we just point them there.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n", "msg_date": "Thu, 6 Sep 2001 17:51:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Regression test README" } ]
[ { "msg_contents": "> > Sep 6 02:09:30 mx postgres[13468]: [9] FATAL 2:\n> > XLogFlush: request(1494286336, 786458) is not satisfied -- \n> > flushed to (23, 2432317444) \n\nFirst note that Denis could just restart with wal_debug = 1\nto see bad request, without code change. (We should ask ppl\nto set wal_debug ON in the case of any WAL problem...)\nDenis, could you provide us with debug output?\n\n> Yeek. Looks like you have a page somewhere in the database\n> with a bogus LSN value (xlog pointer) ... and, most likely,\n> other corruption as well.\n\nWe got error during checkpoint, when backend flushes pages\nchanged by REDO (and *only those pages*). So, that page X (with\nbad LSN) was \"recovered\" from log. We didn't see CRC errors,\nso log is Ok, physically. We should know what is the X page\n(by setting breakpoint as suggested by Tom) and than look\ninto debug output to see where we got bad LSN.\nMaybe it comes from restored pages or from checkpoint LSN,\ndue to errors in XLogCtl initialization, but for sure it looks\nlike bug in WAL code.\n\n> Vadim, what do you think of reducing this elog from STOP to a notice\n> on a permanent basis? ISTM we saw cases during 7.1 beta where this\n\nAnd increase probability that ppl will just miss/ignore NOTICE\nand bug in WAL will continue to harm others?\n\n> STOP prevented people from recovering, so I'm thinking it does more\n\nAnd we fixed bug in WAL that time...\n\n> harm than good to overall system reliability.\n\nNo reliability having bugs in WAL code, so I object. But I'd move\ncheck into XLogWrite code to STOP if flush request is beyond write\npoint.\n\nDenis, please help us to fix this bug. Some GDB-ing probably will be\nrequired. If you have not enough time/disk resources but able to\ngive us copy of data-dir, it would be great (I have RedHat 7.? and\nSolaris 2.6 hosts, Tom ?). In any case debug output is the first\nthing I'd like to see. 
If it's big please send it to Tom and me only.\nAnd of course you can contact with me in Russian -:)\n\nVadim\n", "msg_date": "Thu, 6 Sep 2001 10:05:52 -0700 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Problems starting up postgres " } ]