[
{
"msg_contents": "Mark,\n\nThis is why I chose to use the term \"LOCATION\" instead of \"TABLESPACE\".\nA \"LOCATION\" is a directory just like PostgreSQL has today. All the\npatch would add is the ability to put objects under a different \"LOCATION\"\nfor the same database.\n\nJim\n\n\n\n\n> Tom Lane wrote:\n> > \n> > \"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> > > I propose to add a default data location, index and temporary\nlocations\n> > > to the pg_shadow table to allow a DBA to specify locations for\neach\n> > > user when they create databases, tables and indexes or need\ntemporary\n> > > disk storage (either for temporary tables or sort files).\n> > \n> > Have you read any of the previous discussions about tablespaces?\n> > This seems to be tablespaces with an off-the-cuff syntax. I'd\n> > suggest taking a hard look at Oracle's tablespace facility and\n> > seeing how closely we want to duplicate that.\n> \n> Sorry I missed the conversation about tablespaces. One of the reasons\nI think\n> Postgres is so usable is because it does not require the use of\ntablespace\n> files. If by tablespace, you mean to declare a directory on a device\nas a\n> tablespace, then cool. If you want to create tablespace \"files\" a la\nOracle, you\n> are heading toward an administration nightmare. Don't get me wrong,\nthe ability\n> to use a file as a tablespace would be kind of cool, i.e. you can\nprobably use\n> raw devices, but please do not abandon the way Postgres currently\nworks.\n> \n> On our Oracle server, we have run out of space on our tablespace files\nand not\n> known it was coming. I am the system architect, not the DBA, so I\ndon't have\n> (nor want) direct control over the Oracle database operation. Our\nnewbie DBA did\n> not make the tables correctly, so they did not grow. Alas, he was laid\noff, thus\n> we were left trying to figure out what was happening.\n> \n> Postgres is easier to configure and get right. 
IMHO that is one of its\nvery\n> important strengths. It is almost trivial to get a working SQL system\nup and\n> running which performs well.\n> \n> \n\n\n",
"msg_date": "Wed, 7 Nov 2001 16:43:43 -0500",
"msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>",
"msg_from_op": true,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "Jim Buttafuoco wrote:\n> \n> Mark,\n> \n> This is why I chose to use the term \"LOCATION\" instead of \"TABLESPACE\".\n> A \"LOCATION\" is a directory just like PostgreSQL has today. All the\n> patch would add is the ability to put objects under a different \"LOCATION\"\n> for the same database.\n\nThat is an excellent point. While I am not in the circle that makes these\ndecisions, I hope your words are heard.\n\nI understand the desire to stay with \"standards\" and it is impossible to deny\nde facto standards, but I do understand that de facto standards have to be\nchallenged when they don't make sense. A prime example is PostgreSQL's\ninner/outer join syntax. It is incompatible with Oracle, but compatible with\nthe documented SQL standard.\n\nSince \"tablespace\" is not part of the SQL standard, maybe it makes sense to\ndefine a more specific syntax. The term \"location\" makes sense, because it is\nnot a tablespace as Oracle defines it. There is a real danger in trying to\nsupport a different interpretation of an existing \"de facto\" syntax, in that it\nwill behave differently than expected.\n",
"msg_date": "Wed, 07 Nov 2001 23:49:49 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
}
]
[
{
"msg_contents": "07:59\n\n",
"msg_date": "Thu, 8 Nov 2001 07:58:54 +1000 (EST)",
"msg_from": "speedboy <speedboy@nomicrosoft.org>",
"msg_from_op": true,
"msg_subject": "test"
}
]
[
{
"msg_contents": "The parser seems to have changed from 7.1.3->7.2B2 in a bad way:\n\n test=# create table test(timestamp timestamp);\n ERROR: parser: parse error at or near \"timestamp\"\n\nDon't-ever-say-I-didn't-test-B2,\n-Kevin\n\nPS: CC me on the results, and let me know if there is a better place to\n submit bugs. There didn't seem to be anything obvious on the developer\n site.\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Wed, 7 Nov 2001 17:39:27 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "7.2 Beta2 bug report"
},
{
"msg_contents": "Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> The parser seems to have changed from 7.1.3->7.2B2 in a bad way:\n> test=# create table test(timestamp timestamp);\n> ERROR: parser: parse error at or near \"timestamp\"\n\nUnfortunately, this isn't a bug.\n\nUnless we can figure out some way to accept SQL92 timestamp type\ndeclarations without requiring TIMESTAMP to be a reserved word.\nHmm...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 09:04:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 Beta2 bug report "
},
{
"msg_contents": "On Wed, Nov 07, 2001 at 05:39:27PM -0500, Kevin Jacobs wrote:\n> The parser seems to have changed from 7.1.3->7.2B2 in a bad way:\n> \n> test=# create table test(timestamp timestamp);\n> ERROR: parser: parse error at or near \"timestamp\"\n> \n> Don't-ever-say-I-didn't-test-B2,\n\n(or even from 7.2devel -> 7.2b2) however\n\ntest=# create table test(\"timestamp\" timestamp);\nCREATE\n\n=> I presume that timestamp has become reserved whereas before it wasn't?\n\nPatrick\n",
"msg_date": "Thu, 8 Nov 2001 14:08:57 +0000",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 Beta2 bug report"
},
{
"msg_contents": "On Thu, 8 Nov 2001, Tom Lane wrote:\n> Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> > The parser seems to have changed from 7.1.3->7.2B2 in a bad way:\n> > test=# create table test(timestamp timestamp);\n> > ERROR: parser: parse error at or near \"timestamp\"\n>\n> Unfortunately, this isn't a bug.\n>\n> Unless we can figure out some way to accept SQL92 timestamp type\n> declarations without requiring TIMESTAMP to be a reserved word.\n> Hmm...\n\nI'm not very familiar with the grammar, but it seems strange that this works\nwhen timestamp does not:\n\n test=# create table test (date date);\n\n-Kevin\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Thu, 8 Nov 2001 09:11:23 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Re: 7.2 Beta2 bug report "
},
{
"msg_contents": "Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> I'm not very familiar with the grammar, but it seems strange that this works\n> when timestamp does not:\n\n> test=# create table test (date date);\n\nType \"date\" hasn't got all those gnarly SQL92-isms that we support now:\n\n\tTIMESTAMP\n\tTIMESTAMP(precision)\n\tTIMESTAMP WITH TIME ZONE\n\tTIMESTAMP(precision) WITH TIME ZONE\n\tTIMESTAMP WITHOUT TIME ZONE\n\tTIMESTAMP(precision) WITHOUT TIME ZONE\n\nHowever, I'm wondering if something could be done by not having an\nexplicit production for the first of these, only the other ones.\nOff to experiment with bison ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 09:16:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 Beta2 bug report "
},
{
"msg_contents": "On Wed, 7 Nov 2001, Kevin Jacobs wrote:\n\n> The parser seems to have changed from 7.1.3->7.2B2 in a bad way:\n> \n> test=# create table test(timestamp timestamp);\n> ERROR: parser: parse error at or near \"timestamp\"\n\nI believe timestamp is now a reserved word. To name a column timestamp\nyou must double-quote it - during creation and anytime you want to use\nit later - as in a query etc.\n\n\nCheers,\nRod\n-- \n Let Accuracy Triumph Over Victory\n\n Zetetic Institute\n \"David's Sling\"\n Marc Stiegler\n\n",
"msg_date": "Thu, 8 Nov 2001 06:41:32 -0800 (PST)",
"msg_from": "\"Roderick A. Anderson\" <raanders@tincan.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 Beta2 bug report"
},
{
"msg_contents": "\nTimestamp is a reserved word. You are a naughty person for using it\n:). However, I think that you will find that:\n\nCREATE TABLE test (\"timestamp\" timestamp);\n\nWorks just fine.\n\nJason\n\nKevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n\n> The parser seems to have changed from 7.1.3->7.2B2 in a bad way:\n> \n> test=# create table test(timestamp timestamp);\n> ERROR: parser: parse error at or near \"timestamp\"\n> \n> Don't-ever-say-I-didn't-test-B2,\n> -Kevin\n> \n> PS: CC me on the results, and let me know if there is a better place to\n> submit bugs. There didn't seem to be anything obvious on the developer\n> site.\n> \n> --\n> Kevin Jacobs\n> The OPAL Group - Enterprise Systems Architect\n> Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\n> Fax: (216) 986-0714 WWW: http://www.theopalgroup.com\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n",
"msg_date": "08 Nov 2001 07:57:01 -0700",
"msg_from": "Jason Earl <jason.earl@simplot.com>",
"msg_from_op": false,
"msg_subject": "Re: 7.2 Beta2 bug report"
}
]
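The workaround the replies converge on — double-quoting the reserved word — follows PostgreSQL's general identifier-quoting rule: wrap the name in double quotes and double any embedded double quotes. A minimal sketch of that rule in Python; `quote_ident` here is a hypothetical helper for illustration, not the server-side `quote_ident()` function:

```python
def quote_ident(name: str) -> str:
    """Quote an SQL identifier so reserved words like "timestamp"
    can be used as column names: wrap the name in double quotes
    and double any embedded double-quote characters."""
    return '"' + name.replace('"', '""') + '"'

# A column named after a reserved word becomes safe to use in DDL:
ddl = f"CREATE TABLE test ({quote_ident('timestamp')} timestamp);"
print(ddl)  # CREATE TABLE test ("timestamp" timestamp);
```

Note that, as Roderick points out in the thread, a quoted identifier must then be quoted everywhere it is referenced, not just at creation time.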
[
{
"msg_contents": "\nGood evening ...\n\n\tBack on October 25th, 2001, the PostgreSQL Global Development\nGroup quietly released Beta1 of PostgreSQL v7.2, in order to get the first\nround of packaging and testing of our upcoming release in motion.\n\n\tToday, almost two weeks later and with few major bugs reported, we\nare pleased to announce our second Beta for broader testing.\n\n\tv7.2 of PostgreSQL includes over 6 months of development since we\nreleased v7.1 back in April, 2001, and, as with all our releases, contains\nmore improvements, enhancements and bug fixes than one could fit into an\nemail.\n\n\tMajor highlights for this release include:\n\n VACUUM - VACUUM no longer locks tables, allowing normal user\naccess during the VACUUM. A new VACUUM FULL command does old-style\nvacuum by locking the table and shrinking the on-disk copy of the table.\n\n Transactions - There is no longer a problem with installations\nthat exceed four billion transactions.\n\n OIDs - OIDs are now optional. Users can now create tables\nwithout OIDs for cases where OID usage is excessive.\n\n Optimizer - The system now computes histogram column statistics\nduring ANALYZE, allowing much better optimizer choices.\n\n Security - A new MD5 encryption option allows much more secure\nstorage and transfer of passwords. 
A new unix-domain socket\nauthentication option is available on Linux and *BSD systems.\n\n Statistics - Administrators can use the new table access\nstatistics module to get fine-grained information about table and index\nusage.\n\n Internationalization - Error messages can now be displayed in\nseveral languages.\n\n\tA complete list of changes can be found in the HISTORY file.\n\n\tAs well, as with all of our major releases, v7.2 will require a\ncomplete dump and restore when upgrading from previous versions.\n\n\tv7.2b2 is available at ftp://ftp.postgresql.org, as well as\nthrough all of our official mirror sites.\n\n\tBug reports, as always, should be directed to\npgsql-bugs@postgresql.org, and the severity of all bugs reported will\ndetermine whether we move to the release cycle, or do another Beta, so we\nencourage as many administrators as possible to test this current release.\n\n\nMarc G. Fournier\nCoordinator, PGDG\n\n",
"msg_date": "Wed, 7 Nov 2001 18:43:06 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL v7.2b2 Released"
},
{
"msg_contents": "On Mié 07 Nov 2001 20:43, you wrote:\n> Good evening ...\n>\n> \tBack on October 25th, 2001, the PostgreSQL Global Development\n> Group quietly released Beta1 of PostgreSQL v7.2, in order to get the first\n> round of packaging and testing of our upcoming release in motion.\n>\n> \tToday, almost two weeks later and with few major bugs reported, we\n> are pleased to announce our second Beta for broader testing.\n>\n> \tv7.2 of PostgreSQL includes over 6 months of development since we\n> released v7.1 back in April, 2001, and, as with all our releases, contains\n> more improvements, enhancements and bug fixes than one could fit into an\n> email.\n>\n> \tMajor highlights for this release include:\n>\n> VACUUM - VACUUM no longer locks tables, allowing normal user\n> access during the VACUUM. A new VACUUM FULL command does old-style\n> vacuum by locking the table and shrinking the on-disk copy of the table.\n\nWhat does VACUUM do if it doesn't shrink the size of the database?\n\nRegards... :-)\n\n-- \nWhy use just any relational database,\nwhen you can use PostgreSQL?\n-----------------------------------------------------------------\nMartín Marqués | mmarques@unl.edu.ar\nProgrammer, Administrator, DBA | Centro de Telematica\n Universidad Nacional\n del Litoral\n-----------------------------------------------------------------\n",
"msg_date": "Thu, 8 Nov 2001 09:02:11 -0300",
"msg_from": "Martín Marqués <martin@bugs.unl.edu.ar>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v7.2b2 Released"
},
{
"msg_contents": "Asked here in case anyone else is in the same situation.\n\nI was lurking on pgreplication-general@greatbridge.org.\nUnsurprisingly, it appears to be gone.\n\nThe TODO list at http://developer.postgresql.org/todo.php does not\nappear to have any links to the state of the project, beyond the\npage of old hacker's list messages from dec to july.\n\nDarren, or someone, would you please post a link or two? And might\nI humbly suggest that for a project listed at the very top under\nurgent, it would be nice to make it easy to find?\n\nThanks,\n\n-Brad\n",
"msg_date": "Thu, 8 Nov 2001 11:44:08 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "OT?: PGReplication project dead? "
},
{
"msg_contents": "> I was lurking on pgreplication-general@greatbridge.org.\n> Unsurprisingly, it appears to be gone.\n\nThere actually is work being done. However, since the list is offline,\nthere is a lot of communication via email among Darren, Betina, and me\n(and a few others). Once the list comes back, Darren will be posting all\nthat was discussed there. We have the 6.4 version working and are moving\nthe code to 7.2ish to work from that point on. There are a lot of new\nfeatures in 7.2, however, that need to be considered.\n\nSo...\n\nIt is being worked on, but w/o a mailing list up, it's hard to get the\ninfo out. Please mail Darren or myself if you care to know more.\n\n> Darren, or someone, would you please post a link or two? And might\n> I humbly suggest that for a project listed at the very top under\n> urgent, it would be nice to make it easy to find?\n\nThe info is now available on gborg:\n\nhttp://gborg.postgresql.org/project/pgreplication/projdisplay.php\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Thu, 8 Nov 2001 12:23:41 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: OT?: PGReplication project dead? "
},
{
"msg_contents": "* bpalmer (bpalmer@crimelabs.net) [011108 12:28]:\n> \n> It is being worked on, but w/o a mailing list up, it's hard to get the\n> info out. Please mail Darren or myself if you care to know more.\n\nThanks, Brandon; good to hear. \n\nIs replacing the list a work in progress? Is assistance required?\n\n> http://gborg.postgresql.org/project/pgreplication/projdisplay.php\n\nWould I be out of line in suggesting this get grafted into the TODO\nlist? And possibly the projects page? Bruce?\n\n-Brad\n",
"msg_date": "Thu, 8 Nov 2001 12:40:02 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "Re: OT?: PGReplication project dead?"
},
{
"msg_contents": "> * bpalmer (bpalmer@crimelabs.net) [011108 12:28]:\n> > \n> > It is being worked on, but w/o a mailing list up, it's hard to get the\n> > info out. Please mail Darren or myself if you care to know more.\n> \n> Thanks, Brandon; Good to hear. \n> \n> Is replacing the list a work in process? Is assistance required?\n> \n> > http://gborg.postgresql.org/project/pgreplication/projdisplay.php\n> \n> Would I be out of line in suggesting this get grafted into the TODO\n> list? And possibly the projects page? Bruce?\n\nSure. I wonder if I should just put that URL on the TODO list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Nov 2001 13:14:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OT?: PGReplication project dead?"
},
{
"msg_contents": "> Is replacing the list a work in progress? Is assistance required?\n\nThat is a gborg issue. We could set up a list on one of our servers, but\nit would be better just to get the old list back up.\n\n> Would I be out of line in suggesting this get grafted into the TODO\n> list? And possibly the projects page? Bruce?\n\nI think that it would be smarter to hyperlink to the URL than to graft the\ninfo onto the TODO; that seems like overkill. Replication is being\ndeveloped in parallel with the main tree, as was the full-text search stuff,\nand will be merged in at the proper time.\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n\n",
"msg_date": "Thu, 8 Nov 2001 13:17:03 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: OT?: PGReplication project dead?"
},
{
"msg_contents": "\nNot sure what the status is of it being back online ... Chris?\n\n\nOn Thu, 8 Nov 2001, bpalmer wrote:\n\n> > I was lurking on pgreplication-general@greatbridge.org.\n> > Unsurprisingly, it appears to be gone.\n>\n> There actually is work being done. However, since the list of off line,\n> there is a lot of communication via email between Darren and I and Betina\n> (and a few others). Once the list comes back, Darren will be posting all\n> that was discussed there. We have the 6.4 version working and are moving\n> the code to 7.2ish to work from that point on. There are a lot of new\n> features in 7.2, however, that need to be considered.\n>\n> So...\n>\n> It is being worked on, but w/o a mailing list up, it's hard to get the\n> info out. Please mail Darren or myself if you care to know more.\n>\n> > Darren, or someone, would you please post a link or two? And might\n> > I humbly suggest that for a project listed at the very top under\n> > urgent, it would be nice to make it easy to find?\n>\n> The info is now available on gborg:\n>\n> http://gborg.postgresql.org/project/pgreplication/projdisplay.php\n>\n> - Brandon\n>\n>\n> ----------------------------------------------------------------------------\n> c: 646-456-5455 h: 201-798-4983\n> b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Thu, 8 Nov 2001 13:33:15 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: OT?: PGReplication project dead? "
},
{
"msg_contents": "\n> What does VACUUM do if it doesn't shrink the size of the database?\n>\n\nI was wondering the same thing, so I looked at the development docs and it \nappears that regular VACUUM frees the dead tuples so that the space on a page \nmay be reused. This approach doesn't actually reduce the number of pages \nallocated, though; it reduces the chances that more pages will be allocated \n(because the pages have free space to make new tuples in). VACUUM FULL packs all \nthe tuples together and actually reduces the number of allocated pages. You \nshould be able to run a DB 24x7 by issuing only VACUUM without the disk usage \ngrowing out of control.\n\nJeff Davis\n\n> Regards... :-)\n",
"msg_date": "Thu, 8 Nov 2001 11:15:38 -0800",
"msg_from": "Jeff Davis <list-pgsql-general@dynworks.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v7.2b2 Released"
},
{
"msg_contents": "Jeff Davis <list-pgsql-general@dynworks.com> writes:\n> I was wondering the same thing, so I looked at the development docs\n> and it appears that regular VACUUM frees the dead tuples so that the\n> space on a page may be reused. This approach doesn't actually reduce\n> the number of pages allocated though, it reduces the chances that more\n> pages will be allocated (because the pages have free space to make\n> tuples in).\n\nMaybe the docs still need some work on this point. Plain VACUUM will\nstill try to reduce the number of pages in a table, but it does so only\nby removing wholly-empty end pages. (And it won't move tuples across\npages to make end pages empty, which turns out to have been the single\nslowest, most complex action old-style VACUUM performs.) Also, it\ncan't remove any pages unless it can secure a temporary exclusive lock\non the table while it does so --- but unlike old-style VACUUM, it\ndoesn't insist on being able to do so. If there are concurrent\nreaders/writers then it just forgets about truncating the table and\nmoves on.\n\nBottom line is that it's a pretty laid-back approach to reclaiming\ndisk space. I believe that it will work pretty well for maintaining\na steady-state average disk usage of heavily updated tables, but in\ncases such as having just deleted 80% of the tuples in a table (that\nyou're not planning to refill just as fast) a VACUUM FULL might still\nbe appropriate.\n\nI expect we'll be experimenting with the behavior for awhile to come.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 20:24:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL v7.2b2 Released "
},
{
"msg_contents": "--- Frans Thamura <fth4mura@yahoo.com> wrote:\n> Where I can get the Postgre Win binary version..\n> \n> So, I just install and run it\n\nDownload Cygwin (http://cygwin.com/).\n\nBrent\n\n__________________________________________________\nDo You Yahoo!?\nFind a job, post your resume.\nhttp://careers.yahoo.com\n",
"msg_date": "Fri, 9 Nov 2001 06:30:34 -0800 (PST)",
"msg_from": "\"Brent R. Matzelle\" <bmatzelle@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgre for Windows"
},
{
"msg_contents": "Where can I get the Postgre Win binary version?\n\nSo I can just install and run it.\n\nfrans\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Sat, 10 Nov 2001 03:47:45 +0700",
"msg_from": "\"Frans Thamura\" <fth4mura@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Postgre for Windows"
},
{
"msg_contents": "Hello everybody!\n\nI am trying to restore a 2.5 GB backup file made using pg_dumpall, and I get \nthe \"file too big\" error message from postgres.\n\nWhat can I do to solve this? Is this a bug? I have 18 GB free disk space, a \ntwo-PIII-processor machine with 1 GB RAM, running Red Hat 7.1 with \nPostgreSQL 7.1.3 (installed from official rpms).\n\nThanks in advance for your help!\n\ngreetings!\n\nJorge Sarmiento\n",
"msg_date": "Fri, 9 Nov 2001 20:52:58 -0500",
"msg_from": "Jorge Sarmiento <jsarmiento@ccom.org>",
"msg_from_op": false,
"msg_subject": "psql -f backup.out || file too big"
},
{
"msg_contents": "As far as I know the PGReplication project is still alive. Darren E-mails about \nonce a week checking on the mail list status. At this point I have the mailing \nlists semi-functional. Archives are working and the administration of lists \nworks for the most part. I haven't had enough time of late to finish getting \nthe server setup to actually send/receive mail. When I get there I plan on \nsending out a mass mailing to all the lists.\n\nChris Ryan\n\n\nQuoting \"Marc G. Fournier\" <scrappy@hub.org>:\n\n> \n> Not sure what the status is of it being back online ... Chris?\n> \n> \n> On Thu, 8 Nov 2001, bpalmer wrote:\n> \n> > > I was lurking on pgreplication-general@greatbridge.org.\n> > > Unsurprisingly, it appears to be gone.\n> >\n> > There actually is work being done. However, since the list of off\n> line,\n> > there is a lot of communication via email between Darren and I and\n> Betina\n> > (and a few others). Once the list comes back, Darren will be posting\n> all\n> > that was discussed there. We have the 6.4 version working and are\n> moving\n> > the code to 7.2ish to work from that point on. There are a lot of\n> new\n> > features in 7.2, however, that need to be considered.\n> >\n> > So...\n> >\n> > It is being worked on, but w/o a mailing list up, it's hard to get\n> the\n> > info out. Please mail Darren or myself if you care to know more.\n> >\n> > > Darren, or someone, would you please post a link or two? And\n> might\n> > > I humbly suggest that for a project listed at the very top under\n> > > urgent, it would be nice to make it easy to find?\n> >\n> > The info is now available on gborg:\n> >\n> > http://gborg.postgresql.org/project/pgreplication/projdisplay.php\n> >\n> > - Brandon\n> >\n> >\n> >\n> ----------------------------------------------------------------------------\n> > c: 646-456-5455 h:\n> 201-798-4983\n> > b. 
palmer, bpalmer@crimelabs.net \n> pgp:crimelabs.net/bpalmer.pgp5\n> >\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \n",
"msg_date": "Sat, 10 Nov 2001 22:36:14 -0500 (EST)",
"msg_from": "Chris Ryan <ryan@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: OT?: PGReplication project dead?"
},
{
"msg_contents": "On Saturday, 10 November 2001 02:52, Jorge Sarmiento wrote:\n\n> I am trying to restore a 2.5 Gb. backup file made using pg_dumpall,\n> and I get the \"file too big\" error message from postgres\n\nIt's not a bug in Postgres, I guess; your filesystem reports that error. \nYou can use a zip utility like gzip: just pipe your dump to your \nfavorite zip program and zip it on the fly before you write it to \nyour dump file.\n\nIf it's still too big, use 'split' to split your dump over various \nfiles.\n\nIt's documented in Chapter 8, section 8.1.3 \"Large Databases\" of \"Practical \nPostgreSQL\". I think it is not in print yet, but it's available online.\n\nI can send you a copy of this page via PM if you like.\n\nJanning\n\n\n-- \nPlanwerk 6 /websolutions\nHerzogstraße 86\n40215 Düsseldorf\n\nfon 0211-6015919\nfax 0211-6015917\nhttp://www.planwerk6.de\n",
"msg_date": "Mon, 12 Nov 2001 10:08:22 +0100",
"msg_from": "Janning Vygen <vygen@planwerk6.de>",
"msg_from_op": false,
"msg_subject": "Re: psql -f backup.out || file too big"
},
{
"msg_contents": "I solved my problem; it was quite easy, and the error was caused by a \nlimitation of the psql proggie.\n\nThe solution was:\n\npsql < backup.out\n\ninstead of\n\npsql -f backup.out\n\nAccording to the man pages, using \"psql -f\" or \"psql <\" would give us the \nsame result, but the -f parameter will give us \"better messages\"...\n\nIs that psql -f limit documented somewhere?\n\nThanks all for your help!\n\nJorge S.\n\nOn Friday 09 November 2001 08:52 pm, Jorge Sarmiento wrote:\n> Hello everybody!\n>\n> I am trying to restore a 2.5 Gb. backup file made using pg_dumpall, and I\n> get the \"file too big\" error message from postgres\n>\n> what can I do to solve this? is this a bug? I have 18 Gb. free disk space,\n> a two PIII processor machine with 1 Gb. ram, running Red Hat 7.1 with\n> Postgresql 7.1.3 (installed from official rpms)\n>\n> thank in advance for your help!\n>\n> greetings!\n>\n> Jorge Sarmiento\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n",
"msg_date": "Mon, 12 Nov 2001 11:09:01 -0500",
"msg_from": "Jorge Sarmiento <jsarmiento@ccom.org>",
"msg_from_op": false,
"msg_subject": "Re: psql -f backup.out || file too big - SOLVED"
},
{
"msg_contents": "On Mon, 12 Nov 2001, Jorge Sarmiento wrote:\n\n[backup.out > 2GB]\n> the solution was:\n> \n> psql < backup.out\n> \n> instead of\n> \n> psql -f backup.out\n[...]\n> what that psql -f limit documented somewhere?\n\nThis is not a limitation of psql itself, but of \"large file support\" not\nbeing selected at compile time.\n\nIf you recompile psql and add the option \"-D_FILE_OFFSET_BITS=64\", then\npsql -f should work as well.\n\nRegards\n-- \nHelge Bahmann <bahmann@math.tu-freiberg.de> /| \\__\nNetwork admin, systems programmer /_|____\\\n _/\\ | __)\n$ ./configure \\\\ \\|__/__|\nchecking whether build environment is sane... yes \\\\/___/ | \nchecking for AIX... no (we already did this) |\n\n",
"msg_date": "Mon, 12 Nov 2001 18:26:31 +0100 (MET)",
"msg_from": "Helge Bahmann <bahmann@math.tu-freiberg.de>",
"msg_from_op": false,
"msg_subject": "Re: psql -f backup.out || file too big - SOLVED"
}
]
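The large-dump advice in this thread (gzip the dump on the fly, and `split` it if it is still too big) can be sketched as a Python round trip, on the assumption that fixed-size pieces keep every file safely under a 2 GB large-file limit. `split_compressed` and `join_decompressed` are illustrative names, not part of any PostgreSQL tooling:

```python
import gzip

CHUNK = 1024 * 1024  # 1 MiB per piece, for illustration; pick any size
                     # comfortably below the filesystem's 2 GiB limit


def split_compressed(dump: bytes, chunk_size: int = CHUNK) -> list:
    """Compress a dump and cut it into fixed-size pieces -- the same
    idea as `pg_dumpall | gzip | split` suggested in the thread."""
    compressed = gzip.compress(dump)
    return [compressed[i:i + chunk_size]
            for i in range(0, len(compressed), chunk_size)]


def join_decompressed(pieces: list) -> bytes:
    """Reassemble the pieces (`cat pieces* | gunzip`) into the dump."""
    return gzip.decompress(b"".join(pieces))


# Round trip: a repetitive SQL dump compresses well and survives intact.
dump = b"CREATE TABLE t (id int);\n" * 100_000
pieces = split_compressed(dump)
assert join_decompressed(pieces) == dump
```

Compressing before splitting matters here: a text dump of mostly repetitive DDL/COPY data often shrinks enough that no splitting is needed at all.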
[
{
"msg_contents": " \n> Since \"tablespace\" is not part of the SQL standard, maybe it makes\nsense to\n> define a more specific syntax. The term \"location\" makes sense,\nbecause it is\n> not a tablespace as Oracle defines it.\n\nIt *is* an \"OS-managed tablespace\" in terms of IBM DB2.\nMethinks the term \"TABLESPACE\" is perfect for PostgreSQL.\nWhether it is a directory, a file, or even a raw device\ndepends on how you create the tablespace.\n\nThe point is that the syntax for \"create table\" and \"create index\"\ncan be compatible in this case, imho without confusing many. \nNot the \"create tablespace\" syntax, but that is imho not an issue.\n\nAndreas\n",
"msg_date": "Thu, 8 Nov 2001 08:24:31 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
}
]
[
{
"msg_contents": "Hi everyone,\n\nI have noticed a possibly major issue in Plpython that may need to be\naddressed before 7.2 is released:\n\n 1) If Plpython is installed as a trusted language, then, from what little I\n can glean from the documentation, it should not have any filesystem access.\n However, the default behavior of the restricted execution environment\n being used allows read-only filesystem access.\n\n Here is the current behavior (from the Python Library Reference):\n\n r_open(filename[, mode[, bufsize]])\n\n Method called when open() is called in the restricted environment.\n The arguments are identical to those of open(), and a file object (or\n a class instance compatible with file objects) should be returned.\n RExec's default behavior is to allow opening any file for reading, but\n forbidding any attempt to write a file. See the example below for an\n implementation of a less restrictive r_open().\n\n It is fairly easy to override this method to unconditionally raise an\n access exception.\n\nI have some other suggestions that may not be appropriate for the 7.2\nrelease, but that I think should be addressed before too long:\n\n 2) I'm not sure why the TD dictionary exists. Why not create objects\n 'new', 'old' or 'event' in the global namespace when the interpreter is\n called in the appropriate contexts? The current way is unwieldy, not\n very 'Pythonic' (a frequent justification for change in the Python\n world), and not consistent with other PostgreSQL procedural backends.\n It's possible to keep TD for backward compatibility, so there is no\n downside.\n\n 3) 'old' and 'new' should also provide class-like syntax:\n\n e.g. 
old.foo, new.baz (using getitem)\n\n instead of\n old['foo'], new['baz'] (using getattr)\n\n Of course we cannot drop the getattr interface, since many valid column\n names are not valid python identifiers (I think -- I haven't looked at\n the SQL grammar lately, though I'm guessing that is the case).\n\n 4) Plpython does not use the standard Python boolean checks, which is also\n not very Pythonic and somewhat confusing:\n\n e.g.\n\n CREATE OR REPLACE FUNCTION py_true() RETURNS bool AS '\n return \"a\"\n ' LANGUAGE 'plpython';\n\n CREATE OR REPLACE FUNCTION py_false() RETURNS bool AS '\n return {}\n ' LANGUAGE 'plpython';\n\n # select py_true();\n ERROR: Bad boolean external representation 'a'\n select py_false();\n ERROR: Bad boolean external representation '{}'\n\n These should return:\n\n # select py_true(); -- non-empty strings evaluate to true\n bool\n ------\n t\n (1 row)\n select py_false(); -- empty dictionaries evaluate to false\n bool\n ------\n f\n (1 row)\n\n I suggest changing the semantics of boolean return values to use\n PyObject_IsTrue(PyObject *o) to properly test for truth values.\n\n 5) It should be trivial to create an \"untrusted\" version of Plpython that\n bypasses the restricted execution environment. This is worthy of some\n consideration, since it may be very useful and can be implemented with\n relative ease.\n\n 6) [Very low priority] Its not insane to consider a Plpython procedure\n that spawns off a Python thread to do background processing tasks.\n This is obviously something that will only be possible in an untrusted\n version of the interpreter. Also, if the SPI interface is thread-safe,\n then it may be useful to release the Python interpreter lock around\n some of the SPI calls.\n\nOK, so I've got a laundry list of issues. Only the first issue is a real\nshow-stopper in my mind, though I'd like to see at least 1-4 addressed\nbefore 7.2 or 7.2.1, if at all possible. 
After some discussion, I'll even\nby happy to implement most/all of these items, though I'd like more\ncollaboration than just submitting patches blindly for consideration.\n\nThanks,\n-Kevin Jacobs\n\nPS: Oh, I'd like to thank everyone working on PostgreSQL for the wonderful\n job they've done. I'm a _very_ new user and am in the process of\n porting a _very large_ project from MySQL/MSSQL to PostgresQL, for\n obvious reasons. I had expected the process to be painful and tedious,\n but instead it has been a real pleasure and I have enjoyed exploring all\n of the wonderful things you all have given me to play with.\n\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n",
"msg_date": "Thu, 8 Nov 2001 08:39:30 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Possible major bug in PlPython (plus some other ideas)"
},
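The `r_open` override Kevin describes in item 1 belongs to Python 2's `rexec` module, which was later disabled and removed from Python. A minimal modern-Python sketch of the same idea — a guarded `open()` injected into the sandbox's builtins instead of the real one — might look like this (all names here are hypothetical; the real fix subclassed `RExec`):

```python
def make_guarded_open():
    """Return an open() replacement that refuses all filesystem access,
    mirroring the proposed r_open override: instead of RExec's default
    (read-only access allowed), it raises unconditionally."""
    def guarded_open(*args, **kwargs):
        raise IOError("filesystem access is disabled in trusted PL/Python")
    return guarded_open

def run_restricted(source):
    """Execute untrusted source with a builtins table containing the
    guarded open() rather than the real one."""
    env = {"__builtins__": {"open": make_guarded_open(), "len": len}}
    exec(source, env)
    return env
```

With this in place, `run_restricted("open('/etc/passwd')")` raises immediately, while code that stays away from the filesystem runs normally.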
{
"msg_contents": "* Kevin Jacobs (jacobs@penguin.theopalgroup.com) [011109 08:59]:\n> \n> 1) If Plpython is installed as a trusted language, and from what little I\n> can glean from the documentation, it should not have any filesystem access.\n> However, the default behavior of the restricted execution environment\n> being used allows read-only filesystem access.\n>... \n> It is fairly easy to override this method to unconditionally raise an\n> access exception.\n\nAgreed. Just to amplify the point below, there is currently no\ndistinction between trusted and untrusted in PLpython (hmm, need\na doc change to identify this?). lanpltrusted appears nowhere\nin the implementation.\n\n> 2) I'm not sure why the TD dictionary exists. Why not create objects\n> 'new', 'old' or 'event' in the global namespace when the interpreter is\n> called in the appropriate contexts? The current way is unwieldy, not\n> very 'Pythonic' (a frequent justification for change in the Python\n> world), and not consistent with other PostgreSQL procedural backends.\n> Its possible to keep TD for backward compatibility, so there is no\n> downside.\n> \n> 3) 'old' and 'new' should also provide class-like syntax:\n> \n> e.g. old.foo, new.baz (using getitem)\n> \n> instead of\n> old['foo'], new['baz'] (using getattr)\n> \n> Of course we cannot drop the getattr interface, since many valid column\n> names are not valid python identifiers (I think -- I haven't looked at\n> the SQL grammar lately, though I'm guessing that is the case).\n\nAgree on both.\n \n> 4) Plpython does not use the standard Python boolean checks, which is also\n> not very Pythonic and somewhat confusing:\n> ...\n> I suggest changing the semantics of boolean return values to use\n> PyObject_IsTrue(PyObject *o) to properly test for truth values.\n\nAgree. 
Is this the only type that needs special treatment?\n \n> 5) It should be trivial to create an \"untrusted\" version of Plpython that\n> bypasses the restricted execution environment. This is worthy of some\n> consideration, since it may be very useful and can be implemented with\n> relative ease.\n\nStrongly agree. I'd like to see it for your \"7.2.1\" proposal.\n \n> 6) [Very low priority] Its not insane to consider a Plpython procedure\n> that spawns off a Python thread to do background processing tasks.\n> This is obviously something that will only be possible in an untrusted\n> version of the interpreter. Also, if the SPI interface is thread-safe,\n> then it may be useful to release the Python interpreter lock around\n> some of the SPI calls.\n\nThree weeks ago, I really wanted a feature like this, but I've slowly\nbeen convincing myself that it's a bad idea. Consider the effects of\na race condition in user generated threaded python code that results\nin a deadlock within the backend. Extend that to a backend that's\nholding a database lock of some kind. The expertise required to\ndiagnose and debug such a condition would be rather substantial -\nthreading, python threading, sql, postgres internals, ... in one\nperson.\n\nI solved my design issue with the use of an external process and\na NOTIFY, and I'm much happier for it. The threaded python stuff\ncan stay out in another process where the interaction concerns are\nminimized.\n\nI will say that the discrepancy between executing SQL from plpython\nand from python with the pg driver does cause some cognitive\ndisconnect, because despite working in the same language, I have\nto use different APIs to the same functionality. I don't have a\ngood proposal to solve it, though.\n \n> OK, so I've got a laundry list of issues. Only the first issue is a real\n> show-stopper in my mind, though I'd like to see at least 1-4 addressed\n> before 7.2 or 7.2.1, if at all possible. 
After some discussion, I'll even\n> by happy to implement most/all of these items, though I'd like more\n> collaboration than just submitting patches blindly for consideration.\n\nI'll happily help.\n\nOne last issue:\n\n7) Would it be completely impossible to make an extra-global\ndictionary that was shared between multiple back-ends (place\nin shared memory)? Anything I'm likely to place in GD, I'm likely\nto initialize once and want in all backends.\n\n-Brad McLean\n",
"msg_date": "Fri, 9 Nov 2001 10:19:39 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
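The boolean semantics Kevin proposes in item 4 are just Python's standard truth test, which `PyObject_IsTrue` implements at the C level; in pure Python the same rule is the built-in `bool()`. A small sketch of the intended conversion (the function name is invented for illustration):

```python
def to_pg_bool(value):
    """Map an arbitrary Python return value to a Postgres bool literal
    using Python's standard truth test -- the same rule that
    PyObject_IsTrue applies at the C level."""
    return 't' if value else 'f'

# Mirrors the py_true()/py_false() examples from the original message:
cases = [
    ("a", 't'),   # non-empty string -> true
    ({}, 'f'),    # empty dict -> false
    ("", 'f'),    # empty string -> false
    ([0], 't'),   # non-empty list -> true, even if its element is falsy
    (0, 'f'),     # zero -> false
]
```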
{
"msg_contents": "Hi Bradley,\n\nThanks for the response! I'm very relieved to get feedback on my\nsuggestions.\n\nOn Fri, 9 Nov 2001, Bradley McLean wrote:\n\n> * Kevin Jacobs (jacobs@penguin.theopalgroup.com) [011109 08:59]:\n> >\n> > 1) If Plpython is installed as a trusted language, and from what little I\n> > can glean from the documentation, it should not have any filesystem access.\n> > However, the default behavior of the restricted execution environment\n> > being used allows read-only filesystem access.\n> >...\n> > It is fairly easy to override this method to unconditionally raise an\n> > access exception.\n>\n> Agreed. Just to amplify the point below, there is currently no\n> distinction between trusted and untrusted in PLpython (hmm, need\n> a doc change to identify this?). lanpltrusted appears nowhere\n> in the implementation.\n\nActually, I'm mostly unaware of how createlang works, but I assumed that\nplpython was installed as a trusted backend. Can anyone clarify _exactly_\nwhat is meant by a 'trusted'?\n\nTo fix this issue, is it better to create a C-API method override to RExec\nor should we implement a real plpy.py module that does this (and some of the\nrest of my suggestions). I'd prefer the module, since it is _much_ easier\nto do, though it does incur some extra overhead.\n\n> > 4) Plpython does not use the standard Python boolean checks, which is also\n> > not very Pythonic and somewhat confusing:\n> > ...\n> > I suggest changing the semantics of boolean return values to use\n> > PyObject_IsTrue(PyObject *o) to properly test for truth values.\n>\n> Agree. 
Is this the only type that needs special treatment?\n\nVirtually all the other basic types will usually do the right thing.\nWe could add support for array conversions:\n\n Python [1,2,3] => Postgres INTEGER[] {1,2,3}\n\nCan you think of any others?\n\n> > 6) [Very low priority] Its not insane to consider a Plpython procedure\n> > that spawns off a Python thread to do background processing tasks.\n> > This is obviously something that will only be possible in an untrusted\n> > version of the interpreter. Also, if the SPI interface is thread-safe,\n> > then it may be useful to release the Python interpreter lock around\n> > some of the SPI calls.\n>\n> Three weeks ago, I really wanted a feature like this, but I've slowly\n> been convincing myself that it's a bad idea. Consider the effects of\n> a race condition in user generated threaded python code that results\n> in a deadlock within the backend.\n\nI'm not completely sold on this either. I'm going to let this idea\npercolate for a while and wait to see if I can find a need for it.\n\n> I will say that the discrepancy between executing SQL from plpython\n> and from python with the pg driver does cause some cognitive\n> disconnect, because despite working in the same language, I have\n> to use different APIs to the same functionality. I don't have a\n> good proposal to solve it, though.\n\nOther than not having the same modules available, what is annoying you most?\n\n> One last issue:\n>\n> 7) Would it be completely impossible to make an extra-global\n> dictionary that was shared between multiple back-ends (place\n> in shared memory)? Anything I'm likely to place in GD, I'm likely\n> to initialize once and want in all backends.\n\nIt will be very tricky to do in full generality. Anything that is inserted\ninto the \"shared global\" dictionary will have to be copied, re-allocated on\nthe shared heap, and hidden from the garbage collector. 
This is fairly easy\nto do for a simple types {int,float,string}; much harder for more complex\ntypes.\n\n-Kevin\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n",
"msg_date": "Fri, 9 Nov 2001 10:58:17 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
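The array conversion Kevin floats above could start as simply as rendering a flat Python list in Postgres array-literal syntax. A minimal sketch (function name invented; no nesting, no NULL handling, only basic string escaping):

```python
def py_list_to_pg_array(values):
    """Render a flat Python list as a Postgres array literal,
    e.g. [1, 2, 3] -> '{1,2,3}'.  Strings are double-quoted with
    backslash and quote escaping; nested lists and None are not handled."""
    def render(v):
        if isinstance(v, str):
            return '"%s"' % v.replace('\\', '\\\\').replace('"', '\\"')
        return str(v)
    return '{%s}' % ','.join(render(v) for v in values)
```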
{
"msg_contents": "Kevin Jacobs wrote:\n> \n> Hi everyone,\n> \n> I have noticed a possibly major issues in Plpython that may need to be\n> addressed before 7.2 is released:\n> \n> 1) If Plpython is installed as a trusted language, and from what little I\n> can glean from the documentation, it should not have any filesystem access.\n> However, the default behavior of the restricted execution environment\n> being used allows read-only filesystem access.\n\nwe have 'read-only filesystem access anyhow' :\n\npg72b2=# create table hack(row text);\nCREATE\npg72b2=# copy hack from '/home/pg72b2/data/pg_hba.conf' DELIMITERS\n'\\01';\nCOPY\npg72b2=# select * from hack limit 10;\n \nrow \n-------------------------------------------------------------------------------\n # \n # PostgreSQL HOST-BASED ACCESS (HBA) CONTROL FILE\n # \n # \n # This file controls:\n # o which hosts are allowed to connect\n # o how users are authenticated on each host\n # o databases accessible by each host\n # \n # It is read on postmaster startup and when the postmaster receives a\nSIGHUP.\n(10 rows)\n\ndo I can't consider having it in plputhon any bigger security threat.\n\nusing copy xxx to '/file/' we have even read-write access, we just can't \noverwrite 0600 files. And you can do only what the postgres user can do.\n\n> 2) I'm not sure why the TD dictionary exists. Why not create objects\n> 'new', 'old' or 'event' in the global namespace when the interpreter is\n> called in the appropriate contexts? The current way is unwieldy, not\n> very 'Pythonic' (a frequent justification for change in the Python\n> world), and not consistent with other PostgreSQL procedural backends.\n> Its possible to keep TD for backward compatibility, so there is no\n> downside.\n> \n> 3) 'old' and 'new' should also provide class-like syntax:\n> \n> e.g. 
old.foo, new.baz (using getitem)\n> \n> instead of\n> old['foo'], new['baz'] (using getattr)\n> \n> Of course we cannot drop the getattr interface, since many valid column\n> names are not valid python identifiers (I think -- I haven't looked at\n> the SQL grammar lately, though I'm guessing that is the case).\n\nYou can have almost anything in an identifier if it is quoted.\n\n-----------\nHannu\n",
"msg_date": "Fri, 09 Nov 2001 18:25:30 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "* Kevin Jacobs (jacobs@penguin.theopalgroup.com) [011109 10:53]:\n> \n> On Fri, 9 Nov 2001, Bradley McLean wrote:\n> \n> Actually, I'm mostly unaware of how createlang works, but I assumed that\n> plpython was installed as a trusted backend. Can anyone clarify _exactly_\n> what is meant by a 'trusted'?\n\nIANA Expert here. createlang (shell script, just read) defines plpython\nas \"TRUSTED\", but you can install it either way with CREATE LANGUAGE.\n\nLooking at the pl/tcl and pl/perl implementations, plpython.c needs to\nlook at the Form_pg_language->lanpltrusted element to get the flag.\n\nThen, plprocedure_compile needs to be modified to set up either exec or\nr_exec as appropriate. Also looks like PLy_init and it's sub procedures\nneed some rework as well.\n\nNote that I'm just another newcomer who has submitted one more patch\nthan you, but has a keen interest in a strong python PL.\n\n> To fix this issue, is it better to create a C-API method override to RExec\n> or should we implement a real plpy.py module that does this (and some of the\n> rest of my suggestions). I'd prefer the module, since it is _much_ easier\n> to do, though it does incur some extra overhead.\n\nI think we need to continue to work in the C-API, and not at the Python\nlevel. It's not *that* much harder, and I don't see a continuing need\nto work at the python level after this situation is fixed.\n\n> Virtually all the other basic types will usually do the right thing.\n> We could add support for array conversions:\n> \n> Python [1,2,3] => Postgres INTEGER[] {1,2,3}\n> \n> Can you think of any others?\n\nNo, and I couldn't before; just asking the design question.\n\n> > Three weeks ago, I really wanted a feature like this, but I've slowly\n> > been convincing myself that it's a bad idea. Consider the effects of\n> > a race condition in user generated threaded python code that results\n> > in a deadlock within the backend.\n> \n> I'm not completely sold on this either. 
I'm going to let this idea\n> percolate for a while and wait to see if I can find a need for it.\n\nSounds like a plan.\n\n> Other than not having the same modules available, what is annoying you most?\n\nCREATE FUNCTION one() returns int4 AS '\nresult = plpy.execute(\"SELECT 1 as one\")\nreturn result[0][\"one\"]\n' language 'plpython';\n\nvs.\n\nfrom pg import DB\ndef one():\n db = DB(dbname=\"foo\")\n result = db.query(\"SELECT 1 as one\")\n return result.getresult()[0][0]\n\n----- \n\nquery vs execute, getresult() in one case and not the other,\nresults by column name vs column index.\n\nAgain, I'm not sure I have a strong proposal to fix it, since\nthe two APIs run in very different environments, and one is\ninfluenced by the python database api, the other by the postgres\nSPI api. I'm just whining because when they're up in two\nemacs windows, I'm forever putting the wrong one in the wrong\nplace.\n\n> > 7) Would it be completely impossible to make an extra-global\n> > dictionary that was shared between multiple back-ends (place\n> > in shared memory)? Anything I'm likely to place in GD, I'm likely\n> > to initialize once and want in all backends.\n> \n> It will be very tricky to do in full generality. Anything that is inserted\n> into the \"shared global\" dictionary will have to be copied, re-allocated on\n> the shared heap, and hidden from the garbage collector. This is fairly easy\n> to do for a simple types {int,float,string}; much harder for more complex\n> types.\n\nAgreed it would be tricky. But is it useful? Would write-once (constant)\nsemantics help? \n\n-Brad\n",
"msg_date": "Fri, 9 Nov 2001 11:26:31 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
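One small step toward closing the gap Brad describes would be adapting the pg driver's positional results into the name-keyed rows that `plpy.execute` returns. A rough sketch — the class and its constructor arguments are invented here, not part of either API:

```python
class PlpyStyleResult:
    """Adapt a (columns, positional-rows) result -- roughly the shape
    the pg driver hands back -- into the result[i]["colname"] access
    style used in the plpy.execute example above."""
    def __init__(self, columns, rows):
        # Pair each positional row with the column names once, up front.
        self._rows = [dict(zip(columns, row)) for row in rows]
    def __getitem__(self, i):
        return self._rows[i]
    def __len__(self):
        return len(self._rows)
```

With such a wrapper, `PlpyStyleResult(["one"], [(1,)])[0]["one"]` reads the same way in both environments, though it does nothing about the `query`-vs-`execute` naming mismatch.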
{
"msg_contents": "> > 1) If Plpython is installed as a trusted language, and from what little I\n> > can glean from the documentation, it should not have any filesystem access.\n> > However, the default behavior of the restricted execution environment\n> > being used allows read-only filesystem access.\n>\n> we have 'read-only filesystem access anyhow' :\n\nThen I consider this a bug if a non-super-user can do this.\n\n> using copy xxx to '/file/' we have even read-write access, we just can't\n> overwrite 0600 files. And you can do only what the postgres user can do.\n\nThis is an even bigger bug. I didn't think I needed to run PostgreSQL in a\nchroot jail, but its looking more like that may be needed. Any comments\nfrom other developers? Is this really the security model you want?\n\nIf keep telling me things like this, I'll stop using Postgres!\n\n-Kevin\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Fri, 9 Nov 2001 11:32:44 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "Kevin Jacobs wrote:\n> \n> > > 1) If Plpython is installed as a trusted language, and from what little I\n> > > can glean from the documentation, it should not have any filesystem access.\n> > > However, the default behavior of the restricted execution environment\n> > > being used allows read-only filesystem access.\n> >\n> > we have 'read-only filesystem access anyhow' :\n> \n> Then I consider this a bug if a non-super-user can do this.\n\nIt's not that bad - only postgresql superuser can use copy to/from file\n.\n\n-------------\nHannu\n",
"msg_date": "Fri, 09 Nov 2001 19:06:17 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "On Fri, 9 Nov 2001, Hannu Krosing wrote:\n> Kevin Jacobs wrote:\n> >\n> > > > 1) If Plpython is installed as a trusted language, and from what little I\n> > > > can glean from the documentation, it should not have any filesystem access.\n> > > > However, the default behavior of the restricted execution environment\n> > > > being used allows read-only filesystem access.\n> > >\n> > > we have 'read-only filesystem access anyhow' :\n> >\n> > Then I consider this a bug if a non-super-user can do this.\n>\n> It's not that bad - only postgresql superuser can use copy to/from file\n\nAh -- then it still means we should take read-only filesystem access away\nfrom plpython for now. If we want to implemente a trusted mode, then we can\nadd it back in.\n\n-Kevin\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Fri, 9 Nov 2001 13:32:11 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> I have noticed a possibly major issues in Plpython that may need to be\n> addressed before 7.2 is released:\n\n> 1) If Plpython is installed as a trusted language, and from what little I\n> can glean from the documentation, it should not have any filesystem access.\n> However, the default behavior of the restricted execution environment\n> being used allows read-only filesystem access.\n\nI agree, this is not good. If it's easy to patch, please submit a\npatch.\n\nWhat worries me is not so much this particular hole, which is easily\nplugged now that we know about it, as that it suggests that Python's\nidea of a restricted environment is considerably less restricted than\nwe would like. Perhaps there are other facilities that need to be\nturned off as well?\n\nThe alternative we could consider is to mark plpython as untrusted for\n7.2, until someone has time for a more complete review of possible\nsecurity problems.\n\n> I have some other suggestions that may not be appropriate for the 7.2\n> release, but think should be addressed before too long:\n\nThis would all be good stuff to address in 7.3 or further in the future.\nAs far as I'm concerned, all the PL languages except plpgsql are barely\nout of the \"proof of concept\" stage; they all need a lot of work from\ninterested people to bring them to the \"industrial strength\" stage.\nIf you want to be one of those people, step right up!\n\n> 6) [Very low priority] Its not insane to consider a Plpython procedure\n> that spawns off a Python thread to do background processing tasks.\n> This is obviously something that will only be possible in an untrusted\n> version of the interpreter. Also, if the SPI interface is thread-safe,\n> then it may be useful to release the Python interpreter lock around\n> some of the SPI calls.\n\nSPI is not thread-safe; in fact no part of the backend is thread-safe\nor designed for multithreading at all. 
This one I'd view with great\nwariness.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 14:23:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas) "
},
{
"msg_contents": "\"Ross J. Reedstrom\" wrote:\n> \n> On Fri, Nov 09, 2001 at 03:25:04PM -0500, Doug McNaught wrote:\n> > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> >\n> > > What worries me is not so much this particular hole, which is easily\n> > > plugged now that we know about it, as that it suggests that Python's\n> > > idea of a restricted environment is considerably less restricted than\n> > > we would like. Perhaps there are other facilities that need to be\n> > > turned off as well?\n\nPerhaps we need some general guidelines for trusted PLs - lists of\nthings \nto restrict, e.t.c. \n\nPerhaps we even need something like CodeBase Principals from \nNetscape/Javascript ?\n\nOr just more fine-grained PRIVILEGEs ?\n\nThe python way of defining restricted execution is quite flexible in\nwhat \nto allow. In Zope they used to restict even some kinds of loops and list \nconstructors to prevent users from shooting themselves in the foot. \n\nThey are now relaxing some restrictions, as there are too many \nunrestricted ways for doing it to warrant making common operations \ncumbersome.\n\n> > Could be. FWIW, Zope (www.zope.org) allows for Python scripts, created\n> > and managed through the web, that run in a \"sandbox\" with many of the\n> > same restrictions as PG puts on untrusted languages--they actually\n> > disallow regex matching so you can't hang the webserver thread with a\n> > regex that backtracks forever.\n\nAre there any plans to disallow regex matching in postgreSQL as well ???\n\nAFAIK there are simpler ways to hog a DB server as anyone writing \nSQL queries can tell you ;) \n\nA more reliable approach for DB server may be establishing per-user\nmemory/time/cpu quotas and just rolling back queries that exceed them.\n\n> > Might be worthhhile for the plpython folks to take a look at Zope.\n> \n> And it took _forever_ to convince the Zope folks to put it in, for this\n> very reason. 
Those who wanted python scripts (through the web interface,\n> as opposed to through the filesystem) had to jump through all the hoops\n> to make it safe enough.\n\n-----------------\nHannu\n",
"msg_date": "Sat, 10 Nov 2001 00:35:09 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> However, the default behavior of the restricted execution environment\n>> being used allows read-only filesystem access.\n\n> we have 'read-only filesystem access anyhow' :\n\n> pg72b2=# create table hack(row text);\n> CREATE\n> pg72b2=# copy hack from '/home/pg72b2/data/pg_hba.conf' DELIMITERS\n> '\\01';\n\nOnly if you're superuser, which is exactly the point of the trusted\nvs untrusted function restriction. The plpython problem lets\nnon-superusers read any file that the postgres user can read, which\nis not cool.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 14:48:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas) "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Hannu Krosing <hannu@tm.ee> writes:\n> >> However, the default behavior of the restricted execution environment\n> >> being used allows read-only filesystem access.\n> \n> > we have 'read-only filesystem access anyhow' :\n> \n> > pg72b2=# create table hack(row text);\n> > CREATE\n> > pg72b2=# copy hack from '/home/pg72b2/data/pg_hba.conf' DELIMITERS\n> > '\\01';\n> \n> Only if you're superuser, which is exactly the point of the trusted\n> vs untrusted function restriction. The plpython problem lets\n> non-superusers read any file that the postgres user can read, which\n> is not cool.\n\nIf a fix is made, will it be backported to the 7.1 branch so vendors\ncan upgrade their packages if this is necesarry?\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "09 Nov 2001 15:14:13 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> What worries me is not so much this particular hole, which is easily\n> plugged now that we know about it, as that it suggests that Python's\n> idea of a restricted environment is considerably less restricted than\n> we would like. Perhaps there are other facilities that need to be\n> turned off as well?\n\nCould be. FWIW, Zope (www.zope.org) allows for Python scripts, created \nand managed through the web, that run in a \"sandbox\" with many of the\nsame restrictions as PG puts on untrusted languages--they actually\ndisallow regex matching so you can't hang the webserver thread with a\nregex that backtracks forever. Might be worthhhile for the plpython\nfolks to take a look at Zope.\n\n> The alternative we could consider is to mark plpython as untrusted for\n> 7.2, until someone has time for a more complete review of possible\n> security problems.\n\nThis sounds like a good idea to me.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "09 Nov 2001 15:25:04 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n\n> FWIW, Zope (www.zope.org) allows for Python scripts, created \n> and managed through the web, that run in a \"sandbox\" with many of the\n> same restrictions as PG puts on untrusted languages--they actually\n ^^^^^^^^\n\nEr, I meant 'trusted' here, of course.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "09 Nov 2001 15:36:35 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "On Fri, Nov 09, 2001 at 03:25:04PM -0500, Doug McNaught wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > What worries me is not so much this particular hole, which is easily\n> > plugged now that we know about it, as that it suggests that Python's\n> > idea of a restricted environment is considerably less restricted than\n> > we would like. Perhaps there are other facilities that need to be\n> > turned off as well?\n> \n> Could be. FWIW, Zope (www.zope.org) allows for Python scripts, created \n> and managed through the web, that run in a \"sandbox\" with many of the\n> same restrictions as PG puts on untrusted languages--they actually\n> disallow regex matching so you can't hang the webserver thread with a\n> regex that backtracks forever. Might be worthhhile for the plpython\n> folks to take a look at Zope.\n\nAnd it took _forever_ to convince the Zope folks to put it in, for this\nvery reason. Those who wanted python scripts (through the web interface,\nas opposed to through the filesystem) had to jump through all the hoops\nto make it safe enough.\n\nRoss\n",
"msg_date": "Fri, 9 Nov 2001 15:28:45 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "On Fri, 9 Nov 2001, Tom Lane wrote:\n> Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> > I have noticed a possibly major issues in Plpython that may need to be\n> > addressed before 7.2 is released:\n>\n> > 1) If Plpython is installed as a trusted language, and from what little I\n> > can glean from the documentation, it should not have any filesystem access.\n> > However, the default behavior of the restricted execution environment\n> > being used allows read-only filesystem access.\n>\n> I agree, this is not good. If it's easy to patch, please submit a\n> patch.\n\nI'll have something ready by Monday.\n\n> What worries me is not so much this particular hole, which is easily\n> plugged now that we know about it, as that it suggests that Python's\n> idea of a restricted environment is considerably less restricted than\n> we would like. Perhaps there are other facilities that need to be\n> turned off as well?\n\nI'm going to do a very careful review of the code. Upfront, I expect that\nI've found the only major problem. There is already a very good \"restricted\nexecution\" enviornment in place. The read-only filesystem issue slipped\nthrough the cracks because it is the default behavior for the evironment.\nI'll spend the time to go over any nooks and crannies that bear careful\nscrutiny.\n\n> The alternative we could consider is to mark plpython as untrusted for\n> 7.2, until someone has time for a more complete review of possible\n> security problems.\n\nIf I don't feel that the code is 100% then I'll vote for this option too.\n\n-Kevin\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Fri, 9 Nov 2001 16:34:26 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "On 9 Nov 2001, Doug McNaught wrote:\n> Doug McNaught <doug@wireboard.com> writes:\n>\n> > FWIW, Zope (www.zope.org) allows for Python scripts, created\n> > and managed through the web, that run in a \"sandbox\" with many of the\n> > same restrictions as PG puts on untrusted languages--they actually\n>\n> Er, I meant 'trusted' here, of course.\n\nplpython does this too -- the problem I've found is due to a bad default\nsetting in the \"sandbox\".\n\n-Kevin\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Fri, 9 Nov 2001 16:35:11 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Are there any plans to disallow regex matching in postgreSQL as well ???\n> AFAIK there are simpler ways to hog a DB server as anyone writing \n> SQL queries can tell you ;) \n\nThat was my reaction too --- disabling regexes is not appropriate for\nthe Postgres environment. But we do need to prevent access to the\nfilesystem, even if it's just read-only access.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 19:32:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas) "
},
{
"msg_contents": "Trond Eivind Glomsr�d writes:\n\n> If a fix is made, will it be backported to the 7.1 branch so vendors\n> can upgrade their packages if this is necesarry?\n\nProbably not, since there is no PL/Python in 7.1.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 11 Nov 2001 16:38:22 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "\nI would like to have a patch for this into 7.2 because it is a security\nproblem.\n\n\n---------------------------------------------------------------------------\n\n> Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> > I have noticed a possibly major issues in Plpython that may need to be\n> > addressed before 7.2 is released:\n> \n> > 1) If Plpython is installed as a trusted language, and from what little I\n> > can glean from the documentation, it should not have any filesystem access.\n> > However, the default behavior of the restricted execution environment\n> > being used allows read-only filesystem access.\n> \n> I agree, this is not good. If it's easy to patch, please submit a\n> patch.\n> \n> What worries me is not so much this particular hole, which is easily\n> plugged now that we know about it, as that it suggests that Python's\n> idea of a restricted environment is considerably less restricted than\n> we would like. Perhaps there are other facilities that need to be\n> turned off as well?\n> \n> The alternative we could consider is to mark plpython as untrusted for\n> 7.2, until someone has time for a more complete review of possible\n> security problems.\n> \n> > I have some other suggestions that may not be appropriate for the 7.2\n> > release, but think should be addressed before too long:\n> \n> This would all be good stuff to address in 7.3 or further in the future.\n> As far as I'm concerned, all the PL languages except plpgsql are barely\n> out of the \"proof of concept\" stage; they all need a lot of work from\n> interested people to bring them to the \"industrial strength\" stage.\n> If you want to be one of those people, step right up!\n> \n> > 6) [Very low priority] Its not insane to consider a Plpython procedure\n> > that spawns off a Python thread to do background processing tasks.\n> > This is obviously something that will only be possible in an untrusted\n> > version of the interpreter. 
Also, if the SPI interface is thread-safe,\n> > then it may be useful to release the Python interpreter lock around\n> > some of the SPI calls.\n> \n> SPI is not thread-safe; in fact no part of the backend is thread-safe\n> or designed for multithreading at all. This one I'd view with great\n> wariness.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 00:39:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "I'm going to do this today (Kevin, I'm not stepping on your toes, am I?)\n\nI have some other problems with the module to be addressed (there are\nabout a million, or at least a dozen ways to get it to return Null pointers\nto the backend, crash the backend, and cause a general shared mem\ncorruption message from the postmaster).\n\n-Brad\n\n* Bruce Momjian (pgman@candle.pha.pa.us) [011112 00:41]:\n> \n> I would like to have a patch for this into 7.2 because it is a security\n> problem.\n> \n> \n> ---------------------------------------------------------------------------\n> \n> > Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> > > I have noticed a possibly major issues in Plpython that may need to be\n> > > addressed before 7.2 is released:\n> > \n> > > 1) If Plpython is installed as a trusted language, and from what little I\n> > > can glean from the documentation, it should not have any filesystem access.\n> > > However, the default behavior of the restricted execution environment\n> > > being used allows read-only filesystem access.\n> > \n> > I agree, this is not good. If it's easy to patch, please submit a\n> > patch.\n> > \n> > What worries me is not so much this particular hole, which is easily\n> > plugged now that we know about it, as that it suggests that Python's\n> > idea of a restricted environment is considerably less restricted than\n> > we would like. 
Perhaps there are other facilities that need to be\n> > turned off as well?\n> > \n> > The alternative we could consider is to mark plpython as untrusted for\n> > 7.2, until someone has time for a more complete review of possible\n> > security problems.\n> > \n> > > I have some other suggestions that may not be appropriate for the 7.2\n> > > release, but think should be addressed before too long:\n> > \n> > This would all be good stuff to address in 7.3 or further in the future.\n> > As far as I'm concerned, all the PL languages except plpgsql are barely\n> > out of the \"proof of concept\" stage; they all need a lot of work from\n> > interested people to bring them to the \"industrial strength\" stage.\n> > If you want to be one of those people, step right up!\n> > \n> > > 6) [Very low priority] Its not insane to consider a Plpython procedure\n> > > that spawns off a Python thread to do background processing tasks.\n> > > This is obviously something that will only be possible in an untrusted\n> > > version of the interpreter. Also, if the SPI interface is thread-safe,\n> > > then it may be useful to release the Python interpreter lock around\n> > > some of the SPI calls.\n> > \n> > SPI is not thread-safe; in fact no part of the backend is thread-safe\n> > or designed for multithreading at all. This one I'd view with great\n> > wariness.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Mon, 12 Nov 2001 09:10:36 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "\nKevin, I've been using / reviewing your patch and it looks good.\n\nDid you submit it to patches?\n\n(Everyone) Would a patch to add trusted language support be accepted\nfor 7.2, or is it too late?\n\n-Brad\n",
"msg_date": "Tue, 13 Nov 2001 14:13:59 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "On Tue, 13 Nov 2001, Bradley McLean wrote:\n> Kevin, I've been using / reviewing your patch and it looks good.\n\nGreat! I'll have a very slightly updated version ready by tomorrow that\nincorprates some suggestions from Tom Lane.\n\n> Did you submit it to patches?\n\nI sent it to hackers, though the list manager really hates that I'm not\nsubscribed.\n\n> (Everyone) Would a patch to add trusted language support be accepted\n> for 7.2, or is it too late?\n\nI've got a patch about half-done to do this. If this gets the green light,\nthen lets work together on this.\n\n-Kevin\n\n",
"msg_date": "Tue, 13 Nov 2001 14:23:27 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "Bradley McLean <brad@bradm.net> writes:\n> (Everyone) Would a patch to add trusted language support be accepted\n> for 7.2, or is it too late?\n\nI think the code in there already is the trusted case, no? The addition\nwould be an untrusted mode for plpython.\n\ntrusted = language handler prevents security violations, so unprivileged\nusers are allowed to define functions in the language (ie, we trust the\nlanguage itself to prevent security breaches)\n\nuntrusted = language allows user to access things outside database,\nso only Postgres superusers are allowed to define functions in the\nlanguage (ie, we must trust the function author instead of the language)\n\nIn any case, a second security level in plpython would clearly be a new\nfeature, and so I'd say it's too late to consider it for 7.2. All that\nwe want to do at this point is verify Kevin's proposed patch for the\nexisting security level. But certainly a \"plpythonu\" addition would\nbe welcome for 7.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 16:17:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas) "
},
{
"msg_contents": "\nHas this all been addressed? Are there any TODO items here?\n\n---------------------------------------------------------------------------\n\n> Bradley McLean <brad@bradm.net> writes:\n> > (Everyone) Would a patch to add trusted language support be accepted\n> > for 7.2, or is it too late?\n> \n> I think the code in there already is the trusted case, no? The addition\n> would be an untrusted mode for plpython.\n> \n> trusted = language handler prevents security violations, so unprivileged\n> users are allowed to define functions in the language (ie, we trust the\n> language itself to prevent security breaches)\n> \n> untrusted = language allows user to access things outside database,\n> so only Postgres superusers are allowed to define functions in the\n> language (ie, we must trust the function author instead of the language)\n> \n> In any case, a second security level in plpython would clearly be a new\n> feature, and so I'd say it's too late to consider it for 7.2. All that\n> we want to do at this point is verify Kevin's proposed patch for the\n> existing security level. But certainly a \"plpythonu\" addition would\n> be welcome for 7.3.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Nov 2001 14:43:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "On Sat, 17 Nov 2001, Bruce Momjian wrote:\n> Has this all been addressed? Are there any TODO items here?\n\nAll of the security related _problems_ that affect the rest of 7.2 have been\nsolved, to the best of my knowledge. The discussion below pretains to adding\nan additional untrusted mode like plperl has. Since this is a new feature,\nit is on the TODO list for 7.3.\n\nRegards,\n-Kevin Jacobs\n\n>\n> ---------------------------------------------------------------------------\n>\n> > Bradley McLean <brad@bradm.net> writes:\n> > > (Everyone) Would a patch to add trusted language support be accepted\n> > > for 7.2, or is it too late?\n> >\n> > I think the code in there already is the trusted case, no? The addition\n> > would be an untrusted mode for plpython.\n> >\n> > trusted = language handler prevents security violations, so unprivileged\n> > users are allowed to define functions in the language (ie, we trust the\n> > language itself to prevent security breaches)\n> >\n> > untrusted = language allows user to access things outside database,\n> > so only Postgres superusers are allowed to define functions in the\n> > language (ie, we must trust the function author instead of the language)\n> >\n> > In any case, a second security level in plpython would clearly be a new\n> > feature, and so I'd say it's too late to consider it for 7.2. All that\n> > we want to do at this point is verify Kevin's proposed patch for the\n> > existing security level. But certainly a \"plpythonu\" addition would\n> > be welcome for 7.3.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n>\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Sat, 17 Nov 2001 14:47:32 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
},
{
"msg_contents": "> On Sat, 17 Nov 2001, Bruce Momjian wrote:\n> > Has this all been addressed? Are there any TODO items here?\n> \n> All of the security related _problems_ that affect the rest of 7.2 have been\n> solved, to the best of my knowledge. The discussion below pretains to adding\n> an additional untrusted mode like plperl has. Since this is a new feature,\n> it is on the TODO list for 7.3.\n\nOK, added to TODO:\n\n\t* Add untrusted version of plpython\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Nov 2001 14:52:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
}
] |
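The hole discussed in the thread above -- a "trusted" restricted environment whose *default* behavior still permits filesystem access -- can be illustrated with a small sketch. This is a hypothetical modern-Python illustration, not the actual plpython code of the era (which relied on Python's old `rexec` module); it shows why a sandbox must explicitly whitelist what untrusted code may reach rather than trust the environment's defaults:

```python
# Hypothetical sketch (not the plpython/rexec implementation): execute
# untrusted source with a whitelisted builtins table, so that names like
# 'open' -- the read-only filesystem hole discussed above -- simply do
# not resolve inside the sandboxed code.
SAFE_BUILTINS = {"len": len, "range": range, "abs": abs}

def run_restricted(src):
    # Any name not in SAFE_BUILTINS (open, __import__, ...) raises NameError.
    exec(src, {"__builtins__": SAFE_BUILTINS})

run_restricted("assert abs(-2) == 2")        # harmless code still works

try:
    run_restricted("open('/etc/passwd').read()")
    blocked = False
except NameError:                            # 'open' is not defined in the sandbox
    blocked = True
assert blocked
```

By contrast, `rexec`'s approach was to override individual operations, and its default file-open hook permitted read-only access -- exactly the default-behavior trap Kevin's patch addressed.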
[
{
"msg_contents": "Tables without oids wouldn't be able to be\nused inside fk constraints, since some of the checks\nin the trigger did a SELECT oid. Since the oid wasn't\nactually used, I changed this to SELECT 1. My test\ncase with non-oid tables now works and fk regression\nappears to run fine on my machine.",
"msg_date": "Thu, 8 Nov 2001 09:23:01 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Small FK patch to deal with tables without oids"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> \tTables without oids wouldn't be able to be\n> used inside fk constraints, since some of the checks\n> in the trigger did a SELECT oid. Since the oid wasn't\n> actually used, I changed this to SELECT 1.\n\nCan't believe I missed that while looking for OID dependencies :-(\nGood catch!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 23:08:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small FK patch to deal with tables without oids "
},
{
"msg_contents": "\n\tI wanted to know if there was a decision\nto remove the triggered data change violation checks\nfrom trigger.c or to change them to a per statement\ncheck? I'm building a fix for some foreign key\nproblems, and want to cover some of those cases\nin the regression test if we're going to allow it.\n\n\n",
"msg_date": "Sun, 11 Nov 2001 10:54:05 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Triggered Data Change check"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> \tI wanted to know if there was a decision\n> to remove the triggered data change violation checks\n> from trigger.c or to change them to a per statement\n> check?\n\nAFAIK no one is happy with the state of that code, but I'm not sure\nif we've agreed on exactly how to change it. Do you have a proposal?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 14:14:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Triggered Data Change check "
},
{
"msg_contents": "> \n> \tI wanted to know if there was a decision\n> to remove the triggered data change violation checks\n> from trigger.c or to change them to a per statement\n> check? I'm building a fix for some foreign key\n> problems, and want to cover some of those cases\n> in the regression test if we're going to allow it.\n\nI would like to do _something_ about that error message, not sure what.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 11 Nov 2001 14:29:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Triggered Data Change check"
},
{
"msg_contents": "\nOn Sun, 11 Nov 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > \tI wanted to know if there was a decision\n> > to remove the triggered data change violation checks\n> > from trigger.c or to change them to a per statement\n> > check?\n>\n> AFAIK no one is happy with the state of that code, but I'm not sure\n> if we've agreed on exactly how to change it. Do you have a proposal?\n\nWell, I wonder if the check is so weak as to be fairly useless in the\nfirst place really, even if applied to the statement as opposed to the\ntransaction. It prevents cases like (not tested, but gives the idea):\n\ncreate table foo1( a int unique);\ncreate table foo2( a int unique default 2 references foo1(a)\n initially deferred on update set default);\nalter table foo1 add foreign key(a) references foo2(a)\n initially deferred on update cascade);\nbegin;\ninsert into foo1 values (1);\ninsert into foo2 values (1);\nend;\nupdate foo1 set a=3;\n-- I think it would have the following effect:\n-- foo1 has \"a\" set to 3 which sets foo2's \"a\" to 2 which sets\n-- foo1's \"a\" to 2 as well. And so the row in foo1 is changed\n-- twice.\n\nBut, since you could do alot of this same work in your own triggers,\nand doing so doesn't seem to trip the triggered data change check\n(for example an after trigger on a non-fk table that updates the\nsame table), I wonder if we should either defend against neither\ncase or both.\n\nAs such, I'd say we should at least comment out the check and\nerror since it would fix alot of cases that people using the\nsystem more normally run into at the expense of a more edge\ncondition.\n\nOne problem is that it opens up the foreign key stuff to a\nbunch of cases that haven't been tested before and it may be\na little bit late for opening up that can of worms. 
I'm confident\nthat I've fixed related badness in the no action case and on\nthe base check(since my home copy had the check commented out and the\ntests I ran worked in that case), but I haven't touched the referential\nactions because they're a little more complicated.\n\n",
"msg_date": "Sun, 11 Nov 2001 12:06:58 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: Triggered Data Change check "
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Well, I wonder if the check is so weak as to be fairly useless in the\n> first place really, even if applied to the statement as opposed to the\n> transaction.\n\nLooking back at our discussion around 24-Oct, I recall that I was\nleaning to the idea that the correct interpretation of the spec's\n\"triggered data change\" rule is that it prohibits scenarios that are\nimpossible anyway under MVCC, because of the MVCC tuple visibility\nrules. Therefore we don't need any explicit test for triggered data\nchange. But I didn't hear anyone else supporting or disproving\nthat idea.\n\nThe code as-is is certainly wrong, since it prohibits multiple changes\nwithin a transaction, not within a statement as the spec says.\n\nRight at the moment I'd favor ripping the code out entirely ... but\nit'd be good to hear some support for that approach. Comments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 15:52:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Triggered Data Change check "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > Well, I wonder if the check is so weak as to be fairly useless in the\n> > first place really, even if applied to the statement as opposed to the\n> > transaction.\n> \n> Looking back at our discussion around 24-Oct, I recall that I was\n> leaning to the idea that the correct interpretation of the spec's\n> \"triggered data change\" rule is that it prohibits scenarios that are\n> impossible anyway under MVCC, because of the MVCC tuple visibility\n> rules. \n\nStrictly speaking MVCC is only for read-only queries.\nEven under MVCC, update, delete and select .. for update have\nto see the newest tuples. Constraints shouldn't ignore the\nupdate/delete operations in the future from MVCC POV.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 12 Nov 2001 12:06:49 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Triggered Data Change check"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Strictly speaking MVCC is only for read-only queries.\n> Even under MVCC, update, delete and select .. for update have\n> to see the newest tuples.\n\nTrue. But my point is that we already have mechanisms to deal with\nthat set of issues; the trigger code shouldn't concern itself with\nthe problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 22:11:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Triggered Data Change check "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Strictly speaking MVCC is only for read-only queries.\n> > Even under MVCC, update, delete and select .. for update have\n> > to see the newest tuples.\n> \n> True. But my point is that we already have mechanisms to deal with\n> that set of issues; the trigger code shouldn't concern itself with\n> the problem.\n\nYou are saying \n> Therefore we don't need any explicit test for triggered data\n> change. \n\nISTM your point is on the following.\n\n> Functions can run new commands that get new command ID numbers within\n> the current transaction --- but on return from the function, the current\n> command number is restored. I believe rows inserted by such a function\n> would look \"in the future\" to us at the outer command, and would be\n> ignored.\n\nMy point is why we could ignore the (future) changes. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 12 Nov 2001 12:40:59 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Triggered Data Change check"
},
{
"msg_contents": "\nOn Sun, 11 Nov 2001, Tom Lane wrote:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Strictly speaking MVCC is only for read-only queries.\n> > Even under MVCC, update, delete and select .. for update have\n> > to see the newest tuples.\n>\n> True. But my point is that we already have mechanisms to deal with\n> that set of issues; the trigger code shouldn't concern itself with\n> the problem.\n\nThis sequence on my system prints the numbers increasing by 1 which\nI would assume means that the updates are going through:\n\ncreate table foo1(a int);\ncreate function f() returns opaque as 'begin update foo1 set a=a+1; raise\n notice ''%'', NEW.a; return NEW; end;' language 'plpgsql';\ncreate trigger tr after update on foo1 for each row execute\n procedure f();\ninsert into foo1 values(1);\nupdate foo1 set a=1;\n\nI think that if this were an fk trigger, this would technically be illegal\nbehavior as soon as that row in foo1 was modified again during the\nfunction execution from the \"update foo1 set a=1\" statement due to the\nfollowing (sql92, 11.8 General Rules -- I don't have the copy of sql99\non this machine to look at, but I'm guessing there's something similar)\n 7) If any attempt is made within an SQL-statement to update some\n data item to a value that is distinct from the value to which\n that data item was previously updated within the same SQL-\n statement, then an exception condition is raised: triggered\n data change violation.\nGiven this is under the referential constraint definition, I'm guessing\nit's about ri constraints even though the wording seems to say any\nattempt.\n\nBecause its easy to get around with general triggers, I'm not sure the\ncheck is meaningful, and it's alot less likely to occur than the normal\nupdate/delete or update/update cases that currently error out in the\nsystem, so I'm also for ripping out the check, although I think we\nprobably want to think about this for later.\n\n",
"msg_date": "Sun, 11 Nov 2001 19:54:32 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: Triggered Data Change check "
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> My point is why we could ignore the (future) changes. \n\nWe shouldn't. My feeling is that the various places that consider\nHeapTupleSelfUpdated to be an ignorable condition need more thought.\nIn some cases they should be raising a \"data change violation\" error,\ninstead.\n\nIt's still not special to triggers, however. If you read the spec\nclosely, it's talking about any update not only trigger-caused updates:\n\n 7) If any attempt is made within an SQL-statement to update some\n data item to a value that is distinct from the value to which\n that data item was previously updated within the same SQL-\n statement, then an exception condition is raised: triggered\n data change violation.\n\nIt might be that a trigger is the only possible way to make that happen\nwithin SQL92, but we have more ways to make it happen...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 23:05:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Triggered Data Change check "
},
{
"msg_contents": "\nTom, I assume you want this applied?\n\n---------------------------------------------------------------------------\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > \tTables without oids wouldn't be able to be\n> > used inside fk constraints, since some of the checks\n> > in the trigger did a SELECT oid. Since the oid wasn't\n> > actually used, I changed this to SELECT 1.\n> \n> Can't believe I missed that while looking for OID dependencies :-(\n> Good catch!\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 00:32:26 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small FK patch to deal with tables without oids"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, I assume you want this applied?\n\nPlease.\n\n\t\t\tregards, tom lane\n\n\n> ---------------------------------------------------------------------------\n\n>> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Tables without oids wouldn't be able to be\n> used inside fk constraints, since some of the checks\n> in the trigger did a SELECT oid. Since the oid wasn't\n> actually used, I changed this to SELECT 1.\n>> \n>> Can't believe I missed that while looking for OID dependencies :-(\n>> Good catch!\n>> \n>> regards, tom lane\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>> \n\n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 01:00:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small FK patch to deal with tables without oids "
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n> \n> \tTables without oids wouldn't be able to be\n> used inside fk constraints, since some of the checks\n> in the trigger did a SELECT oid. Since the oid wasn't\n> actually used, I changed this to SELECT 1. My test\n> case with non-oid tables now works and fk regression\n> appears to run fine on my machine.\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 01:09:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small FK patch to deal with tables without oids"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom, I assume you want this applied?\n> \n> Please.\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 01:09:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small FK patch to deal with tables without oids"
}
] |
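The fix Stephan describes in the thread above — replacing `SELECT oid` with `SELECT 1` in the foreign-key trigger queries — can be sketched as follows. The table and column names here are illustrative, not taken from the actual `ri_triggers.c` source:

```sql
-- Before the patch: the RI check fetched the referenced row's oid,
-- which fails on tables created WITHOUT OIDS:
--
--   SELECT oid FROM pktable WHERE pkcol = $1 FOR UPDATE OF pktable;
--
-- After the patch: the oid value was never actually used, so selecting
-- a constant gives the same existence-check-and-lock semantics on any
-- table, with or without oids:
SELECT 1 FROM pktable WHERE pkcol = $1 FOR UPDATE OF pktable;
```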
[
{
"msg_contents": "> * Bruce Momjian (pgman@candle.pha.pa.us) [011108 13:10]:\n> > > \n> > > > http://gborg.postgresql.org/project/pgreplication/projdisplay.php\n> > > \n> > Sure. I wonder if I should just put that URL on the TODO list.\n> \n> If I wasn't clear, yes, that's what I think should happen.\n\nAdded to TODO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Nov 2001 13:24:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OT?: PGReplication project dead?"
}
] |
[
{
"msg_contents": "Hello all!\n\nAs postgresql does not have alter table modify column or alter table drop column, is there\nany simpler way to change a column definition??\n\nFor example to change a column varchar(40) to varchar(40)[] here you have the steps I follow:\n\nSuppose this table:\n CREATE TABLE \"proy_foto\" (\n \"numero\" int4 DEFAULT nextval('proy_foto_numero_seq'::text) NOT NULL,\n \"idproy\" int4,\n \"foto\" oid,\n \"nombre\" varchar(40),\n \"descrip\" text,\n PRIMARY KEY (\"numero\")\n );\n\n1. Add the new column def\n alter table proy_foto add nombre2 varchar(40)[];\n alter table proy_foto add descrip2 text[];\n\n\n2. Initialize with a default value.\n\n update proy_foto set nombre2 = '{ \"1\" }', descrip2 = '{\"2\"}';\n\n3.Update the columns with their corresponding values.\n\n UPDATE proy_foto\n SET nombre2[1] = nombre,\n descrip2[1] = descrip\n FROM proy_foto\n WHERE numero = numero;\n\n4. Initialize the obsolete columns\n\n update proy_foto set nombre = '', descrip = '';\n\n5. Rename the obsolete columns\n alter table proy_foto rename column nombre to obsolete1;\n alter table proy_foto rename column descrip to obsolete2;\n\n6. Rename the new columns with the old name.\n alter table proy_foto rename column nombre2 to nombre;\n alter table proy_foto rename column descrip2 to descrip;\n\n\nAny simpler idea?\n\nThanks in advance\n\n\n------------\nEvelio Martínez\n\n\n\n\n\n\n\n\nHello all!\n \nAs postgresql does not have alter table modify \ncolumn or alter table drop column, is there\nany simpler way to change a column \ndefinition??\n \nFor example to change a column varchar(40) to \nvarchar(40)[] here you have the steps I follow:\n \nSuppose this table:\n CREATE TABLE \"proy_foto\" \n( \"numero\" int4 \nDEFAULT nextval('proy_foto_numero_seq'::text) NOT \nNULL, \"idproy\" \nint4, \"foto\" \noid, \"nombre\" \nvarchar(40), \n\"descrip\" text, \nPRIMARY KEY (\"numero\") );\n1. 
Add the new column def\n alter table proy_foto add nombre2 \nvarchar(40)[]; alter table proy_foto add descrip2 \ntext[];\n2. Initialize with a default \nvalue.\n \n update proy_foto set nombre2 = '{ \"1\" \n}', descrip2 = '{\"2\"}';\n \n3.Update the columns with their corresponding \nvalues.\n \n UPDATE \nproy_foto SET nombre2[1] = nombre, \n descrip2[1] = \ndescrip FROM proy_foto \nWHERE numero = numero;\n4. Initialize the obsolete columns\n \n update proy_foto set nombre = \n'', descrip = '';\n \n5. Rename the obsolete columns\n alter table proy_foto rename column \nnombre to obsolete1; alter table proy_foto rename column descrip \nto obsolete2;\n \n6. Rename the new columns with the old \nname.\n alter table proy_foto rename column \nnombre2 to nombre; alter table proy_foto rename column descrip2 \nto descrip;\n \n \nAny simpler idea?\n \nThanks in advance\n \n------------Evelio \nMartínez",
"msg_date": "Thu, 8 Nov 2001 20:52:45 +0100",
"msg_from": "=?iso-8859-1?Q?Evelio_Mart=EDnez?= <evelio.martinez@testanet.com>",
"msg_from_op": true,
"msg_subject": "How to optimize a column type change???"
},
{
"msg_contents": "The simpler solution is to learn C and add this feature to PostgreSQL \ninternals.\n\nAt 20:52 08/11/01 +0100, you wrote:\n>Hello all!\n>\n>As postgresql does not have alter table modify column or alter table drop \n>column, is there\n>any simpler way to change a column definition??\n>\n>For example to change a column varchar(40) to varchar(40)[] here you have \n>the steps I follow:\n>\n>Suppose this table:\n> CREATE TABLE \"proy_foto\" (\n> \"numero\" int4 DEFAULT nextval('proy_foto_numero_seq'::text) \n> NOT NULL,\n> \"idproy\" int4,\n> \"foto\" oid,\n> \"nombre\" varchar(40),\n> \"descrip\" text,\n> PRIMARY KEY (\"numero\")\n> );\n>1. Add the new column def\n> alter table proy_foto add nombre2 varchar(40)[];\n> alter table proy_foto add descrip2 text[];\n>\n>2. Initialize with a default value.\n>\n> update proy_foto set nombre2 = '{ \"1\" }', descrip2 = '{\"2\"}';\n>\n>3.Update the columns with their corresponding values.\n>\n> UPDATE proy_foto\n> SET nombre2[1] = nombre,\n> descrip2[1] = descrip\n> FROM proy_foto\n> WHERE numero = numero;\n>\n>4. Initialize the obsolete columns\n>\n> update proy_foto set nombre = '', descrip = '';\n>\n>5. Rename the obsolete columns\n> alter table proy_foto rename column nombre to obsolete1;\n> alter table proy_foto rename column descrip to obsolete2;\n>\n>6. Rename the new columns with the old name.\n> alter table proy_foto rename column nombre2 to nombre;\n> alter table proy_foto rename column descrip2 to descrip;\n>\n>\n>Any simpler idea?\n>\n>Thanks in advance\n>\n>------------\n>Evelio Mart�nez\n\n",
"msg_date": "Fri, 09 Nov 2001 07:33:23 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: How to optimize a column type change???"
}
] |
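The six manual steps traded in this thread can be condensed into one script. This is a sketch of the same add/copy/rename technique, using the `proy_foto` table from the thread and wrapped in a transaction (PostgreSQL runs DDL transactionally, so a failure rolls the whole change back); the two-step array initialization is kept because assigning an element of a NULL array just yields NULL:

```sql
BEGIN;
-- steps 1: add the replacement columns with the new type
ALTER TABLE proy_foto ADD COLUMN nombre2 varchar(40)[];
ALTER TABLE proy_foto ADD COLUMN descrip2 text[];
-- steps 2-3: initialize, then copy each scalar into element 1
UPDATE proy_foto SET nombre2 = '{""}', descrip2 = '{""}';
UPDATE proy_foto SET nombre2[1] = nombre, descrip2[1] = descrip;
-- steps 4-6: swap the names; old data stays behind in *_obsolete
ALTER TABLE proy_foto RENAME COLUMN nombre TO nombre_obsolete;
ALTER TABLE proy_foto RENAME COLUMN descrip TO descrip_obsolete;
ALTER TABLE proy_foto RENAME COLUMN nombre2 TO nombre;
ALTER TABLE proy_foto RENAME COLUMN descrip2 TO descrip;
COMMIT;
```

One simplification over the original: the `FROM proy_foto WHERE numero = numero` self-join in step 3 is redundant — a bare `UPDATE` already visits every row.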
[
{
"msg_contents": "I am at home now, Tomorrow I will get the exact Oracle syntax and post\nexample commands for CREATE TABLESPACE, CREATE DATABASE, CREATE TABLE\nand CREATE INDEX, CREATE USER. \n\n\n\nJim\n\n\n\n\n\n> \n> > Since \"tablespace\" is not part of the SQL standard, maybe it makes\n> sense to\n> > define a more specific syntax. The term \"location\" makes sense,\n> because it is\n> > not a tablespace as Oracle defines it.\n> \n> It *is* an \"OS managed tablespace\" in terms of IBM DB2.\n> Methinks the term \"TABLESPACE\" is perfect for PostgreSQL.\n> The fact whether it is a directory, a file or even a raw device\n> depends on how you create the tablespace.\n> \n> The point is, that the syntax for \"create table\" and \"create index\"\n> can be compatible in this case, imho without confusing many. \n> Not the \"create tablespace\" syntax, but that is imho not an issue.\n> \n> Andreas\n> \n> \n\n\n",
"msg_date": "Thu, 8 Nov 2001 17:08:00 -0500",
"msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>",
"msg_from_op": true,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
}
] |
[
{
"msg_contents": "Since we've already seen two complaints about \"timestamp\" no longer\nbeing an allowed column name in 7.2, I think it's probably time to\nmake a serious effort at trimming the reserved-word list a little.\n\nThe attached patch de-reserves all these former ColLabels:\n\nABORT\t\tunrestricted\nBIT\t\tcan be ColId, but not function name\nCHAR\t\tcan be ColId, but not function name\nCHARACTER\tcan be ColId, but not function name\nCLUSTER\t\tunrestricted\nCOPY\t\tunrestricted\nDEC\t\tcan be ColId, but not function name\nDECIMAL\t\tcan be ColId, but not function name\nEXPLAIN\t\tunrestricted\nFLOAT\t\tcan be ColId, but not function name\nGLOBAL\t\tunrestricted\nINOUT\t\tunrestricted\nINTERVAL\tcan be ColId, but not function name\nLISTEN\t\tunrestricted\nLOAD\t\tunrestricted\nLOCAL\t\tunrestricted\nLOCK\t\tunrestricted\nMOVE\t\tunrestricted\nNCHAR\t\tcan be ColId, but not function name\nNUMERIC\t\tcan be ColId, but not function name\nOUT\t\tunrestricted\nPRECISION\tunrestricted\nRESET\t\tunrestricted\nSETOF\t\tcan be ColId, but not type or function name\nSHOW\t\tunrestricted\nTIME\t\tcan be ColId, but not function name\nTIMESTAMP\tcan be ColId, but not function name\nTRANSACTION\tunrestricted\nUNKNOWN\t\tunrestricted\nVACUUM\t\tunrestricted\nVARCHAR\t\tcan be ColId, but not function name\n\nThe ones that are now unrestricted were just low-hanging fruit (ie,\nthey probably should never have been in ColLabel in the first place).\nThe rest were fixed by recognizing that just because something couldn't\nbe a function name didn't mean it couldn't be used as a table or column\nname. 
This solves the fundamental shift/reduce conflict posed by cases\nlike \"SELECT TIMESTAMP(3 ...\", without also preventing people from\ncontinuing to name their columns \"timestamp\".\n\nThe keyword classification now looks like:\n\nTypeFuncId:\tIDENT plus all fully-unrestricted keywords\n\nColId:\t\tTypeFuncId plus type-name keywords that might be\n\t\tfollowed by '('; these can't be allowed to be\n\t\tfunction names, but they can be column names.\n\nfunc_name:\tTypeFuncId plus a few special-case ColLabels\n\t\t(this list could probably be extended further)\n\nColLabel:\tColId plus everything else\n\nComments? I'd like to apply this, unless there are objections.\nI suppose Peter might complain about having to redo the keyword\ntables ;-)\n\n\t\t\tregards, tom lane\n\n*** src/backend/parser/gram.y.orig\tMon Nov 5 00:00:14 2001\n--- src/backend/parser/gram.y\tThu Nov 8 19:00:24 2001\n***************\n*** 257,264 ****\n %type <paramno> ParamNo\n \n %type <typnam>\tTypename, SimpleTypename, ConstTypename\n! \t\t\t\tGenericType, Numeric, Geometric, Character, ConstDatetime, ConstInterval, Bit\n! %type <str>\t\tcharacter, datetime, bit\n %type <str>\t\textract_arg\n %type <str>\t\topt_charset, opt_collate\n %type <str>\t\topt_float\n--- 257,264 ----\n %type <paramno> ParamNo\n \n %type <typnam>\tTypename, SimpleTypename, ConstTypename\n! \t\t\t\tGenericType, Numeric, Character, ConstDatetime, ConstInterval, Bit\n! %type <str>\t\tcharacter, bit\n %type <str>\t\textract_arg\n %type <str>\t\topt_charset, opt_collate\n %type <str>\t\topt_float\n***************\n*** 268,274 ****\n %type <ival>\tIconst\n %type <str>\t\tSconst, comment_text\n %type <str>\t\tUserId, opt_boolean, var_value, ColId_or_Sconst\n! %type <str>\t\tColId, ColLabel, TokenId\n %type <node>\tzone_value\n \n %type <node>\tTableConstraint\n--- 268,274 ----\n %type <ival>\tIconst\n %type <str>\t\tSconst, comment_text\n %type <str>\t\tUserId, opt_boolean, var_value, ColId_or_Sconst\n! 
%type <str>\t\tColId, TypeFuncId, ColLabel\n %type <node>\tzone_value\n \n %type <node>\tTableConstraint\n***************\n*** 1007,1017 ****\n \t\t;\n \n \n! constraints_set_namelist:\tIDENT\n \t\t\t\t{\n \t\t\t\t\t$$ = makeList1($1);\n \t\t\t\t}\n! \t\t| constraints_set_namelist ',' IDENT\n \t\t\t\t{\n \t\t\t\t\t$$ = lappend($1, $3);\n \t\t\t\t}\n--- 1007,1017 ----\n \t\t;\n \n \n! constraints_set_namelist:\tColId\n \t\t\t\t{\n \t\t\t\t\t$$ = makeList1($1);\n \t\t\t\t}\n! \t\t| constraints_set_namelist ',' ColId\n \t\t\t\t{\n \t\t\t\t\t$$ = lappend($1, $3);\n \t\t\t\t}\n***************\n*** 2007,2014 ****\n \t\t\t\t}\n \t\t;\n \n def_arg: func_return \t\t\t\t\t{ $$ = (Node *)$1; }\n- \t\t| TokenId\t\t\t\t\t\t{ $$ = (Node *)makeString($1); }\n \t\t| all_Op\t\t\t\t\t\t{ $$ = (Node *)makeString($1); }\n \t\t| NumericOnly\t\t\t\t\t{ $$ = (Node *)$1; }\n \t\t| Sconst\t\t\t\t\t\t{ $$ = (Node *)makeString($1); }\n--- 2007,2014 ----\n \t\t\t\t}\n \t\t;\n \n+ /* Note: any simple identifier will be returned as a type name! */\n def_arg: func_return \t\t\t\t\t{ $$ = (Node *)$1; }\n \t\t| all_Op\t\t\t\t\t\t{ $$ = (Node *)makeString($1); }\n \t\t| NumericOnly\t\t\t\t\t{ $$ = (Node *)$1; }\n \t\t| Sconst\t\t\t\t\t\t{ $$ = (Node *)makeString($1); }\n***************\n*** 2629,2639 ****\n \t\t\t\t}\n \t\t;\n \n func_type:\tTypename\n \t\t\t\t{\n \t\t\t\t\t$$ = $1;\n \t\t\t\t}\n! \t\t| IDENT '.' ColId '%' TYPE_P\n \t\t\t\t{\n \t\t\t\t\t$$ = makeNode(TypeName);\n \t\t\t\t\t$$->name = $1;\n--- 2629,2643 ----\n \t\t\t\t}\n \t\t;\n \n+ /*\n+ * We would like to make the second production here be ColId '.' ColId etc,\n+ * but that causes reduce/reduce conflicts. TypeFuncId is next best choice.\n+ */\n func_type:\tTypename\n \t\t\t\t{\n \t\t\t\t\t$$ = $1;\n \t\t\t\t}\n! \t\t| TypeFuncId '.' 
ColId '%' TYPE_P\n \t\t\t\t{\n \t\t\t\t\t$$ = makeNode(TypeName);\n \t\t\t\t\t$$->name = $1;\n***************\n*** 4064,4076 ****\n \n ConstTypename: GenericType\n \t\t| Numeric\n- \t\t| Geometric\n \t\t| Bit\n \t\t| Character\n \t\t| ConstDatetime\n \t\t;\n \n! GenericType: IDENT\n \t\t\t\t{\n \t\t\t\t\t$$ = makeNode(TypeName);\n \t\t\t\t\t$$->name = xlateSqlType($1);\n--- 4068,4079 ----\n \n ConstTypename: GenericType\n \t\t| Numeric\n \t\t| Bit\n \t\t| Character\n \t\t| ConstDatetime\n \t\t;\n \n! GenericType: TypeFuncId\n \t\t\t\t{\n \t\t\t\t\t$$ = makeNode(TypeName);\n \t\t\t\t\t$$->name = xlateSqlType($1);\n***************\n*** 4086,4092 ****\n Numeric: FLOAT opt_float\n \t\t\t\t{\n \t\t\t\t\t$$ = makeNode(TypeName);\n! \t\t\t\t\t$$->name = xlateSqlType($2);\n \t\t\t\t\t$$->typmod = -1;\n \t\t\t\t}\n \t\t| DOUBLE PRECISION\n--- 4089,4095 ----\n Numeric: FLOAT opt_float\n \t\t\t\t{\n \t\t\t\t\t$$ = makeNode(TypeName);\n! \t\t\t\t\t$$->name = $2; /* already xlated */\n \t\t\t\t\t$$->typmod = -1;\n \t\t\t\t}\n \t\t| DOUBLE PRECISION\n***************\n*** 4115,4128 ****\n \t\t\t\t}\n \t\t;\n \n- Geometric: PATH_P\n- \t\t\t\t{\n- \t\t\t\t\t$$ = makeNode(TypeName);\n- \t\t\t\t\t$$->name = xlateSqlType(\"path\");\n- \t\t\t\t\t$$->typmod = -1;\n- \t\t\t\t}\n- \t\t;\n- \n opt_float: '(' Iconst ')'\n \t\t\t\t{\n \t\t\t\t\tif ($2 < 1)\n--- 4118,4123 ----\n***************\n*** 4299,4311 ****\n \t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NULL; }\n \t\t;\n \n! ConstDatetime: datetime\n! \t\t\t\t{\n! \t\t\t\t\t$$ = makeNode(TypeName);\n! \t\t\t\t\t$$->name = xlateSqlType($1);\n! \t\t\t\t\t$$->typmod = -1;\n! \t\t\t\t}\n! \t\t| TIMESTAMP '(' Iconst ')' opt_timezone_x\n \t\t\t\t{\n \t\t\t\t\t$$ = makeNode(TypeName);\n \t\t\t\t\tif ($5)\n--- 4294,4300 ----\n \t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NULL; }\n \t\t;\n \n! 
ConstDatetime: TIMESTAMP '(' Iconst ')' opt_timezone_x\n \t\t\t\t{\n \t\t\t\t\t$$ = makeNode(TypeName);\n \t\t\t\t\tif ($5)\n***************\n*** 4371,4384 ****\n \t\t\t\t}\n \t\t;\n \n- datetime: YEAR_P\t\t\t\t\t\t\t\t{ $$ = \"year\"; }\n- \t\t| MONTH_P\t\t\t\t\t\t\t\t{ $$ = \"month\"; }\n- \t\t| DAY_P\t\t\t\t\t\t\t\t\t{ $$ = \"day\"; }\n- \t\t| HOUR_P\t\t\t\t\t\t\t\t{ $$ = \"hour\"; }\n- \t\t| MINUTE_P\t\t\t\t\t\t\t\t{ $$ = \"minute\"; }\n- \t\t| SECOND_P\t\t\t\t\t\t\t\t{ $$ = \"second\"; }\n- \t\t;\n- \n /* XXX Make the default be WITH TIME ZONE for 7.2 to help with database upgrades\n * but revert this back to WITHOUT TIME ZONE for 7.3.\n * Do this by simply reverting opt_timezone_x to opt_timezone - thomas 2001-09-06\n--- 4360,4365 ----\n***************\n*** 5270,5278 ****\n * - thomas 2001-04-12\n */\n \n! extract_arg: datetime\t\t\t\t\t\t{ $$ = $1; }\n! \t\t| SCONST\t\t\t\t\t\t\t{ $$ = $1; }\n! \t\t| IDENT\t\t\t\t\t\t\t\t{ $$ = $1; }\n \t\t;\n \n /* position_list uses b_expr not a_expr to avoid conflict with general IN */\n--- 5251,5264 ----\n * - thomas 2001-04-12\n */\n \n! extract_arg: IDENT\t\t\t\t\t\t{ $$ = $1; }\n! \t\t| YEAR_P\t\t\t\t\t\t{ $$ = \"year\"; }\n! \t\t| MONTH_P\t\t\t\t\t\t{ $$ = \"month\"; }\n! \t\t| DAY_P\t\t\t\t\t\t\t{ $$ = \"day\"; }\n! \t\t| HOUR_P\t\t\t\t\t\t{ $$ = \"hour\"; }\n! \t\t| MINUTE_P\t\t\t\t\t\t{ $$ = \"minute\"; }\n! \t\t| SECOND_P\t\t\t\t\t\t{ $$ = \"second\"; }\n! \t\t| SCONST\t\t\t\t\t\t{ $$ = $1; }\n \t\t;\n \n /* position_list uses b_expr not a_expr to avoid conflict with general IN */\n***************\n*** 5555,5586 ****\n attr_name:\t\t\t\tColId\t\t\t{ $$ = $1; };\n class:\t\t\t\t\tColId\t\t\t{ $$ = $1; };\n index_name:\t\t\t\tColId\t\t\t{ $$ = $1; };\n- \n- /* Functions\n- * Include date/time keywords as SQL92 extension.\n- * Include TYPE as a SQL92 unreserved keyword. 
- thomas 1997-10-05\n- * Any tokens which show up as operators will screw up the parsing if\n- * allowed as identifiers, but are acceptable as ColLabels:\n- * BETWEEN, IN, IS, ISNULL, NOTNULL, OVERLAPS\n- * Thanks to Tom Lane for pointing this out. - thomas 2000-03-29\n- * We need OVERLAPS allowed as a function name to enable the implementation\n- * of argument type variations on the underlying implementation. These\n- * variations are done as SQL-language entries in the pg_proc catalog.\n- * Do not include SUBSTRING here since it has explicit productions\n- * in a_expr to support the goofy SQL9x argument syntax.\n- * - thomas 2000-11-28\n- */\n- func_name: ColId\t\t\t\t\t\t{ $$ = xlateSqlFunc($1); }\n- \t\t| BETWEEN\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"between\"); }\n- \t\t| ILIKE\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"ilike\"); }\n- \t\t| IN\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"in\"); }\n- \t\t| IS\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"is\"); }\n- \t\t| ISNULL\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"isnull\"); }\n- \t\t| LIKE\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"like\"); }\n- \t\t| NOTNULL\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"notnull\"); }\n- \t\t| OVERLAPS\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"overlaps\"); }\n- \t\t;\n- \n file_name:\t\t\t\tSconst\t\t\t{ $$ = $1; };\n \n /* Constants\n--- 5541,5546 ----\n***************\n*** 5692,5718 ****\n Sconst: SCONST\t\t\t\t\t\t\t{ $$ = $1; };\n UserId: ColId\t\t\t\t\t\t\t{ $$ = $1; };\n \n! /* Column identifier\n! * Include date/time keywords as SQL92 extension.\n! * Include TYPE as a SQL92 unreserved keyword. - thomas 1997-10-05\n! * Add other keywords. Note that as the syntax expands,\n! * some of these keywords will have to be removed from this\n! * list due to shift/reduce conflicts in yacc. If so, move\n! * down to the ColLabel entity. - thomas 1997-11-06\n! */\n! ColId: IDENT\t\t\t\t\t\t\t{ $$ = $1; }\n! \t\t| datetime\t\t\t\t\t\t{ $$ = $1; }\n! \t\t| TokenId\t\t\t\t\t\t{ $$ = $1; }\n! 
\t\t| NATIONAL\t\t\t\t\t\t{ $$ = \"national\"; }\n \t\t| NONE\t\t\t\t\t\t\t{ $$ = \"none\"; }\n! \t\t| PATH_P\t\t\t\t\t\t{ $$ = \"path\"; }\n \t\t;\n \n! /* Parser tokens to be used as identifiers.\n! * Tokens involving data types should appear in ColId only,\n! * since they will conflict with real TypeName productions.\n */\n! TokenId: ABSOLUTE\t\t\t\t\t\t{ $$ = \"absolute\"; }\n \t\t| ACCESS\t\t\t\t\t\t{ $$ = \"access\"; }\n \t\t| ACTION\t\t\t\t\t\t{ $$ = \"action\"; }\n \t\t| ADD\t\t\t\t\t\t\t{ $$ = \"add\"; }\n--- 5652,5729 ----\n Sconst: SCONST\t\t\t\t\t\t\t{ $$ = $1; };\n UserId: ColId\t\t\t\t\t\t\t{ $$ = $1; };\n \n! /* Column identifier --- names that can be column, table, etc names.\n! *\n! * This contains the TypeFuncId list plus those keywords that conflict\n! * only with typename productions, not with other uses. Note that\n! * most of these keywords will in fact be recognized as type names too;\n! * they just have to have special productions for the purpose.\n! *\n! * Most of these cannot be in TypeFuncId (ie, are not also usable as function\n! * names) because they can be followed by '(' in typename productions, which\n! * looks too much like a function call for a LALR(1) parser.\n! */\n! ColId: TypeFuncId\t\t\t\t\t\t{ $$ = $1; }\n! \t\t| BIT\t\t\t\t\t\t\t{ $$ = \"bit\"; }\n! \t\t| CHAR\t\t\t\t\t\t\t{ $$ = \"char\"; }\n! \t\t| CHARACTER\t\t\t\t\t\t{ $$ = \"character\"; }\n! \t\t| DEC\t\t\t\t\t\t\t{ $$ = \"dec\"; }\n! \t\t| DECIMAL\t\t\t\t\t\t{ $$ = \"decimal\"; }\n! \t\t| FLOAT\t\t\t\t\t\t\t{ $$ = \"float\"; }\n! \t\t| INTERVAL\t\t\t\t\t\t{ $$ = \"interval\"; }\n! \t\t| NCHAR\t\t\t\t\t\t\t{ $$ = \"nchar\"; }\n \t\t| NONE\t\t\t\t\t\t\t{ $$ = \"none\"; }\n! \t\t| NUMERIC\t\t\t\t\t\t{ $$ = \"numeric\"; }\n! \t\t| SETOF\t\t\t\t\t\t\t{ $$ = \"setof\"; }\n! \t\t| TIME\t\t\t\t\t\t\t{ $$ = \"time\"; }\n! \t\t| TIMESTAMP\t\t\t\t\t\t{ $$ = \"timestamp\"; }\n! \t\t| VARCHAR\t\t\t\t\t\t{ $$ = \"varchar\"; }\n! \t\t;\n! \n! 
/* Function identifier --- names that can be function names.\n! *\n! * This contains the TypeFuncId list plus some ColLabel keywords\n! * that are used as operators in expressions; in general such keywords\n! * can't be ColId because they would be ambiguous with variable names,\n! * but they are unambiguous as function identifiers.\n! *\n! * Do not include POSITION, SUBSTRING, etc here since they have explicit\n! * productions in a_expr to support the goofy SQL9x argument syntax.\n! * - thomas 2000-11-28\n! */\n! func_name: TypeFuncId\t\t\t\t\t{ $$ = xlateSqlFunc($1); }\n! \t\t| BETWEEN\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"between\"); }\n! \t\t| BINARY\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"binary\"); }\n! \t\t| CROSS\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"cross\"); }\n! \t\t| FREEZE\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"freeze\"); }\n! \t\t| FULL\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"full\"); }\n! \t\t| ILIKE\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"ilike\"); }\n! \t\t| IN\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"in\"); }\n! \t\t| INNER_P\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"inner\"); }\n! \t\t| IS\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"is\"); }\n! \t\t| ISNULL\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"isnull\"); }\n! \t\t| JOIN\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"join\"); }\n! \t\t| LEFT\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"left\"); }\n! \t\t| LIKE\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"like\"); }\n! \t\t| NATURAL\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"natural\"); }\n! \t\t| NOTNULL\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"notnull\"); }\n! \t\t| OUTER_P\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"outer\"); }\n! \t\t| OVERLAPS\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"overlaps\"); }\n! \t\t| PUBLIC\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"public\"); }\n! \t\t| RIGHT\t\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"right\"); }\n! \t\t| VERBOSE\t\t\t\t\t\t{ $$ = xlateSqlFunc(\"verbose\"); }\n \t\t;\n \n! /* Type/func identifier --- names that can be type and function names\n! * (as well as ColIds --- ie, these are unreserved keywords).\n! *\n! 
* Every new keyword should be added to this list unless\n! * doing so produces a shift/reduce or reduce/reduce conflict.\n! * If so, make it a ColId, or failing that a ColLabel.\n */\n! TypeFuncId: IDENT\t\t\t\t\t\t{ $$ = $1; }\n! \t\t| ABORT_TRANS\t\t\t\t\t{ $$ = \"abort\"; }\n! \t\t| ABSOLUTE\t\t\t\t\t\t{ $$ = \"absolute\"; }\n \t\t| ACCESS\t\t\t\t\t\t{ $$ = \"access\"; }\n \t\t| ACTION\t\t\t\t\t\t{ $$ = \"action\"; }\n \t\t| ADD\t\t\t\t\t\t\t{ $$ = \"add\"; }\n***************\n*** 5731,5746 ****\n--- 5742,5760 ----\n \t\t| CHARACTERISTICS\t\t\t\t{ $$ = \"characteristics\"; }\n \t\t| CHECKPOINT\t\t\t\t\t{ $$ = \"checkpoint\"; }\n \t\t| CLOSE\t\t\t\t\t\t\t{ $$ = \"close\"; }\n+ \t\t| CLUSTER\t\t\t\t\t\t{ $$ = \"cluster\"; }\n \t\t| COMMENT\t\t\t\t\t\t{ $$ = \"comment\"; }\n \t\t| COMMIT\t\t\t\t\t\t{ $$ = \"commit\"; }\n \t\t| COMMITTED\t\t\t\t\t\t{ $$ = \"committed\"; }\n \t\t| CONSTRAINTS\t\t\t\t\t{ $$ = \"constraints\"; }\n+ \t\t| COPY\t\t\t\t\t\t\t{ $$ = \"copy\"; }\n \t\t| CREATE\t\t\t\t\t\t{ $$ = \"create\"; }\n \t\t| CREATEDB\t\t\t\t\t\t{ $$ = \"createdb\"; }\n \t\t| CREATEUSER\t\t\t\t\t{ $$ = \"createuser\"; }\n \t\t| CURSOR\t\t\t\t\t\t{ $$ = \"cursor\"; }\n \t\t| CYCLE\t\t\t\t\t\t\t{ $$ = \"cycle\"; }\n \t\t| DATABASE\t\t\t\t\t\t{ $$ = \"database\"; }\n+ \t\t| DAY_P\t\t\t\t\t\t\t{ $$ = \"day\"; }\n \t\t| DECLARE\t\t\t\t\t\t{ $$ = \"declare\"; }\n \t\t| DEFERRED\t\t\t\t\t\t{ $$ = \"deferred\"; }\n \t\t| DELETE\t\t\t\t\t\t{ $$ = \"delete\"; }\n***************\n*** 5753,5768 ****\n--- 5767,5786 ----\n \t\t| ESCAPE\t\t\t\t\t\t{ $$ = \"escape\"; }\n \t\t| EXCLUSIVE\t\t\t\t\t\t{ $$ = \"exclusive\"; }\n \t\t| EXECUTE\t\t\t\t\t\t{ $$ = \"execute\"; }\n+ \t\t| EXPLAIN\t\t\t\t\t\t{ $$ = \"explain\"; }\n \t\t| FETCH\t\t\t\t\t\t\t{ $$ = \"fetch\"; }\n \t\t| FORCE\t\t\t\t\t\t\t{ $$ = \"force\"; }\n \t\t| FORWARD\t\t\t\t\t\t{ $$ = \"forward\"; }\n \t\t| FUNCTION\t\t\t\t\t\t{ $$ = \"function\"; }\n+ \t\t| GLOBAL\t\t\t\t\t\t{ $$ = \"global\"; }\n \t\t| 
GRANT\t\t\t\t\t\t\t{ $$ = \"grant\"; }\n \t\t| HANDLER\t\t\t\t\t\t{ $$ = \"handler\"; }\n+ \t\t| HOUR_P\t\t\t\t\t\t{ $$ = \"hour\"; }\n \t\t| IMMEDIATE\t\t\t\t\t\t{ $$ = \"immediate\"; }\n \t\t| INCREMENT\t\t\t\t\t\t{ $$ = \"increment\"; }\n \t\t| INDEX\t\t\t\t\t\t\t{ $$ = \"index\"; }\n \t\t| INHERITS\t\t\t\t\t\t{ $$ = \"inherits\"; }\n+ \t\t| INOUT\t\t\t\t\t\t\t{ $$ = \"inout\"; }\n \t\t| INSENSITIVE\t\t\t\t\t{ $$ = \"insensitive\"; }\n \t\t| INSERT\t\t\t\t\t\t{ $$ = \"insert\"; }\n \t\t| INSTEAD\t\t\t\t\t\t{ $$ = \"instead\"; }\n***************\n*** 5771,5782 ****\n--- 5789,5808 ----\n \t\t| LANGUAGE\t\t\t\t\t\t{ $$ = \"language\"; }\n \t\t| LANCOMPILER\t\t\t\t\t{ $$ = \"lancompiler\"; }\n \t\t| LEVEL\t\t\t\t\t\t\t{ $$ = \"level\"; }\n+ \t\t| LISTEN\t\t\t\t\t\t{ $$ = \"listen\"; }\n+ \t\t| LOAD\t\t\t\t\t\t\t{ $$ = \"load\"; }\n+ \t\t| LOCAL\t\t\t\t\t\t\t{ $$ = \"local\"; }\n \t\t| LOCATION\t\t\t\t\t\t{ $$ = \"location\"; }\n+ \t\t| LOCK_P\t\t\t\t\t\t{ $$ = \"lock\"; }\n \t\t| MATCH\t\t\t\t\t\t\t{ $$ = \"match\"; }\n \t\t| MAXVALUE\t\t\t\t\t\t{ $$ = \"maxvalue\"; }\n+ \t\t| MINUTE_P\t\t\t\t\t\t{ $$ = \"minute\"; }\n \t\t| MINVALUE\t\t\t\t\t\t{ $$ = \"minvalue\"; }\n \t\t| MODE\t\t\t\t\t\t\t{ $$ = \"mode\"; }\n+ \t\t| MONTH_P\t\t\t\t\t\t{ $$ = \"month\"; }\n+ \t\t| MOVE\t\t\t\t\t\t\t{ $$ = \"move\"; }\n \t\t| NAMES\t\t\t\t\t\t\t{ $$ = \"names\"; }\n+ \t\t| NATIONAL\t\t\t\t\t\t{ $$ = \"national\"; }\n \t\t| NEXT\t\t\t\t\t\t\t{ $$ = \"next\"; }\n \t\t| NO\t\t\t\t\t\t\t{ $$ = \"no\"; }\n \t\t| NOCREATEDB\t\t\t\t\t{ $$ = \"nocreatedb\"; }\n***************\n*** 5787,5796 ****\n--- 5813,5825 ----\n \t\t| OIDS\t\t\t\t\t\t\t{ $$ = \"oids\"; }\n \t\t| OPERATOR\t\t\t\t\t\t{ $$ = \"operator\"; }\n \t\t| OPTION\t\t\t\t\t\t{ $$ = \"option\"; }\n+ \t\t| OUT\t\t\t\t\t\t\t{ $$ = \"out\"; }\n \t\t| OWNER\t\t\t\t\t\t\t{ $$ = \"owner\"; }\n \t\t| PARTIAL\t\t\t\t\t\t{ $$ = \"partial\"; }\n \t\t| PASSWORD\t\t\t\t\t\t{ $$ = \"password\"; }\n+ \t\t| PATH_P\t\t\t\t\t\t{ $$ = \"path\"; 
}\n \t\t| PENDANT\t\t\t\t\t\t{ $$ = \"pendant\"; }\n+ \t\t| PRECISION\t\t\t\t\t\t{ $$ = \"precision\"; }\n \t\t| PRIOR\t\t\t\t\t\t\t{ $$ = \"prior\"; }\n \t\t| PRIVILEGES\t\t\t\t\t{ $$ = \"privileges\"; }\n \t\t| PROCEDURAL\t\t\t\t\t{ $$ = \"procedural\"; }\n***************\n*** 5800,5805 ****\n--- 5829,5835 ----\n \t\t| RELATIVE\t\t\t\t\t\t{ $$ = \"relative\"; }\n \t\t| RENAME\t\t\t\t\t\t{ $$ = \"rename\"; }\n \t\t| REPLACE\t\t\t\t\t\t{ $$ = \"replace\"; }\n+ \t\t| RESET\t\t\t\t\t\t\t{ $$ = \"reset\"; }\n \t\t| RESTRICT\t\t\t\t\t\t{ $$ = \"restrict\"; }\n \t\t| RETURNS\t\t\t\t\t\t{ $$ = \"returns\"; }\n \t\t| REVOKE\t\t\t\t\t\t{ $$ = \"revoke\"; }\n***************\n*** 5808,5818 ****\n--- 5838,5850 ----\n \t\t| RULE\t\t\t\t\t\t\t{ $$ = \"rule\"; }\n \t\t| SCHEMA\t\t\t\t\t\t{ $$ = \"schema\"; }\n \t\t| SCROLL\t\t\t\t\t\t{ $$ = \"scroll\"; }\n+ \t\t| SECOND_P\t\t\t\t\t\t{ $$ = \"second\"; }\n \t\t| SESSION\t\t\t\t\t\t{ $$ = \"session\"; }\n \t\t| SEQUENCE\t\t\t\t\t\t{ $$ = \"sequence\"; }\n \t\t| SERIALIZABLE\t\t\t\t\t{ $$ = \"serializable\"; }\n \t\t| SET\t\t\t\t\t\t\t{ $$ = \"set\"; }\n \t\t| SHARE\t\t\t\t\t\t\t{ $$ = \"share\"; }\n+ \t\t| SHOW\t\t\t\t\t\t\t{ $$ = \"show\"; }\n \t\t| START\t\t\t\t\t\t\t{ $$ = \"start\"; }\n \t\t| STATEMENT\t\t\t\t\t\t{ $$ = \"statement\"; }\n \t\t| STATISTICS\t\t\t\t\t{ $$ = \"statistics\"; }\n***************\n*** 5823,5836 ****\n--- 5855,5871 ----\n \t\t| TEMPLATE\t\t\t\t\t\t{ $$ = \"template\"; }\n \t\t| TEMPORARY\t\t\t\t\t\t{ $$ = \"temporary\"; }\n \t\t| TOAST\t\t\t\t\t\t\t{ $$ = \"toast\"; }\n+ \t\t| TRANSACTION\t\t\t\t\t{ $$ = \"transaction\"; }\n \t\t| TRIGGER\t\t\t\t\t\t{ $$ = \"trigger\"; }\n \t\t| TRUNCATE\t\t\t\t\t\t{ $$ = \"truncate\"; }\n \t\t| TRUSTED\t\t\t\t\t\t{ $$ = \"trusted\"; }\n \t\t| TYPE_P\t\t\t\t\t\t{ $$ = \"type\"; }\n \t\t| UNENCRYPTED\t\t\t\t\t{ $$ = \"unencrypted\"; }\n+ \t\t| UNKNOWN\t\t\t\t\t\t{ $$ = \"unknown\"; }\n \t\t| UNLISTEN\t\t\t\t\t\t{ $$ = \"unlisten\"; }\n \t\t| UNTIL\t\t\t\t\t\t\t{ $$ = 
\"until\"; }\n \t\t| UPDATE\t\t\t\t\t\t{ $$ = \"update\"; }\n+ \t\t| VACUUM\t\t\t\t\t\t{ $$ = \"vacuum\"; }\n \t\t| VALID\t\t\t\t\t\t\t{ $$ = \"valid\"; }\n \t\t| VALUES\t\t\t\t\t\t{ $$ = \"values\"; }\n \t\t| VARYING\t\t\t\t\t\t{ $$ = \"varying\"; }\n***************\n*** 5839,5859 ****\n \t\t| WITH\t\t\t\t\t\t\t{ $$ = \"with\"; }\n \t\t| WITHOUT\t\t\t\t\t\t{ $$ = \"without\"; }\n \t\t| WORK\t\t\t\t\t\t\t{ $$ = \"work\"; }\n \t\t| ZONE\t\t\t\t\t\t\t{ $$ = \"zone\"; }\n \t\t;\n \n! /* Column label\n! * Allowed labels in \"AS\" clauses.\n! * Include TRUE/FALSE SQL3 reserved words for Postgres backward\n! * compatibility. Cannot allow this for column names since the\n! * syntax would not distinguish between the constant value and\n! * a column name. - thomas 1997-10-24\n! * Add other keywords to this list. Note that they appear here\n! * rather than in ColId if there was a shift/reduce conflict\n! * when used as a full identifier. - thomas 1997-11-06\n */\n ColLabel: ColId\t\t\t\t\t\t{ $$ = $1; }\n- \t\t| ABORT_TRANS\t\t\t\t\t{ $$ = \"abort\"; }\n \t\t| ALL\t\t\t\t\t\t\t{ $$ = \"all\"; }\n \t\t| ANALYSE\t\t\t\t\t\t{ $$ = \"analyse\"; } /* British */\n \t\t| ANALYZE\t\t\t\t\t\t{ $$ = \"analyze\"; }\n--- 5874,5893 ----\n \t\t| WITH\t\t\t\t\t\t\t{ $$ = \"with\"; }\n \t\t| WITHOUT\t\t\t\t\t\t{ $$ = \"without\"; }\n \t\t| WORK\t\t\t\t\t\t\t{ $$ = \"work\"; }\n+ \t\t| YEAR_P\t\t\t\t\t\t{ $$ = \"year\"; }\n \t\t| ZONE\t\t\t\t\t\t\t{ $$ = \"zone\"; }\n \t\t;\n \n! /* Column label --- allowed labels in \"AS\" clauses.\n! *\n! * Keywords should appear here if they could not be distinguished\n! * from variable, type, or function names in some contexts.\n! *\n! * At present, every keyword except \"AS\" itself should appear in\n! * one of ColId, TypeFuncId, or ColLabel. When adding a ColLabel,\n! 
* also consider whether it can be added to func_name.\n */\n ColLabel: ColId\t\t\t\t\t\t{ $$ = $1; }\n \t\t| ALL\t\t\t\t\t\t\t{ $$ = \"all\"; }\n \t\t| ANALYSE\t\t\t\t\t\t{ $$ = \"analyse\"; } /* British */\n \t\t| ANALYZE\t\t\t\t\t\t{ $$ = \"analyze\"; }\n***************\n*** 5862,5887 ****\n \t\t| ASC\t\t\t\t\t\t\t{ $$ = \"asc\"; }\n \t\t| BETWEEN\t\t\t\t\t\t{ $$ = \"between\"; }\n \t\t| BINARY\t\t\t\t\t\t{ $$ = \"binary\"; }\n- \t\t| BIT\t\t\t\t\t\t\t{ $$ = \"bit\"; }\n \t\t| BOTH\t\t\t\t\t\t\t{ $$ = \"both\"; }\n \t\t| CASE\t\t\t\t\t\t\t{ $$ = \"case\"; }\n \t\t| CAST\t\t\t\t\t\t\t{ $$ = \"cast\"; }\n- \t\t| CHAR\t\t\t\t\t\t\t{ $$ = \"char\"; }\n- \t\t| CHARACTER\t\t\t\t\t\t{ $$ = \"character\"; }\n \t\t| CHECK\t\t\t\t\t\t\t{ $$ = \"check\"; }\n- \t\t| CLUSTER\t\t\t\t\t\t{ $$ = \"cluster\"; }\n \t\t| COALESCE\t\t\t\t\t\t{ $$ = \"coalesce\"; }\n \t\t| COLLATE\t\t\t\t\t\t{ $$ = \"collate\"; }\n \t\t| COLUMN\t\t\t\t\t\t{ $$ = \"column\"; }\n \t\t| CONSTRAINT\t\t\t\t\t{ $$ = \"constraint\"; }\n- \t\t| COPY\t\t\t\t\t\t\t{ $$ = \"copy\"; }\n \t\t| CROSS\t\t\t\t\t\t\t{ $$ = \"cross\"; }\n \t\t| CURRENT_DATE\t\t\t\t\t{ $$ = \"current_date\"; }\n \t\t| CURRENT_TIME\t\t\t\t\t{ $$ = \"current_time\"; }\n \t\t| CURRENT_TIMESTAMP\t\t\t\t{ $$ = \"current_timestamp\"; }\n \t\t| CURRENT_USER\t\t\t\t\t{ $$ = \"current_user\"; }\n- \t\t| DEC\t\t\t\t\t\t\t{ $$ = \"dec\"; }\n- \t\t| DECIMAL\t\t\t\t\t\t{ $$ = \"decimal\"; }\n \t\t| DEFAULT\t\t\t\t\t\t{ $$ = \"default\"; }\n \t\t| DEFERRABLE\t\t\t\t\t{ $$ = \"deferrable\"; }\n \t\t| DESC\t\t\t\t\t\t\t{ $$ = \"desc\"; }\n--- 5896,5914 ----\n***************\n*** 5891,5915 ****\n \t\t| END_TRANS\t\t\t\t\t\t{ $$ = \"end\"; }\n \t\t| EXCEPT\t\t\t\t\t\t{ $$ = \"except\"; }\n \t\t| EXISTS\t\t\t\t\t\t{ $$ = \"exists\"; }\n- \t\t| EXPLAIN\t\t\t\t\t\t{ $$ = \"explain\"; }\n \t\t| EXTRACT\t\t\t\t\t\t{ $$ = \"extract\"; }\n \t\t| FALSE_P\t\t\t\t\t\t{ $$ = \"false\"; }\n- \t\t| FLOAT\t\t\t\t\t\t\t{ $$ = \"float\"; }\n \t\t| FOR\t\t\t\t\t\t\t{ $$ = \"for\"; }\n \t\t| FOREIGN\t\t\t\t\t\t{ $$ = \"foreign\"; }\n \t\t| FREEZE\t\t\t\t\t\t{ $$ = \"freeze\"; }\n \t\t| FROM\t\t\t\t\t\t\t{ $$ = \"from\"; }\n \t\t| FULL\t\t\t\t\t\t\t{ $$ = \"full\"; }\n- \t\t| GLOBAL\t\t\t\t\t\t{ $$ = \"global\"; }\n \t\t| GROUP\t\t\t\t\t\t\t{ $$ = \"group\"; }\n \t\t| HAVING\t\t\t\t\t\t{ $$ = \"having\"; }\n \t\t| ILIKE\t\t\t\t\t\t\t{ $$ = \"ilike\"; }\n \t\t| IN\t\t\t\t\t\t\t{ $$ = \"in\"; }\n \t\t| INITIALLY\t\t\t\t\t\t{ $$ = \"initially\"; }\n \t\t| INNER_P\t\t\t\t\t\t{ $$ = \"inner\"; }\n- \t\t| INOUT\t\t\t\t\t\t\t{ $$ = \"inout\"; }\n \t\t| INTERSECT\t\t\t\t\t\t{ $$ = \"intersect\"; }\n- \t\t| INTERVAL\t\t\t\t\t\t{ $$ = \"interval\"; }\n \t\t| INTO\t\t\t\t\t\t\t{ $$ = \"into\"; }\n \t\t| IS\t\t\t\t\t\t\t{ $$ = \"is\"; }\n \t\t| ISNULL\t\t\t\t\t\t{ $$ = \"isnull\"; }\n--- 5918,5937 ----\n***************\n*** 5918,5936 ****\n \t\t| LEFT\t\t\t\t\t\t\t{ $$ = \"left\"; }\n \t\t| LIKE\t\t\t\t\t\t\t{ $$ = \"like\"; }\n \t\t| LIMIT\t\t\t\t\t\t\t{ $$ = \"limit\"; }\n- \t\t| LISTEN\t\t\t\t\t\t{ $$ = \"listen\"; }\n- \t\t| LOAD\t\t\t\t\t\t\t{ $$ = \"load\"; }\n- \t\t| LOCAL\t\t\t\t\t\t\t{ $$ = \"local\"; }\n- \t\t| LOCK_P\t\t\t\t\t\t{ $$ = \"lock\"; }\n- \t\t| MOVE\t\t\t\t\t\t\t{ $$ = \"move\"; }\n \t\t| NATURAL\t\t\t\t\t\t{ $$ = \"natural\"; }\n- \t\t| NCHAR\t\t\t\t\t\t\t{ $$ = \"nchar\"; }\n \t\t| NEW\t\t\t\t\t\t\t{ $$ = \"new\"; }\n \t\t| NOT\t\t\t\t\t\t\t{ $$ = \"not\"; }\n \t\t| NOTNULL\t\t\t\t\t\t{ $$ = \"notnull\"; }\n \t\t| NULLIF\t\t\t\t\t\t{ $$ = \"nullif\"; }\n \t\t| NULL_P\t\t\t\t\t\t{ $$ = \"null\"; }\n- \t\t| NUMERIC\t\t\t\t\t\t{ $$ = \"numeric\"; }\n \t\t| OFF\t\t\t\t\t\t\t{ $$ = \"off\"; }\n \t\t| OFFSET\t\t\t\t\t\t{ $$ = \"offset\"; }\n \t\t| OLD\t\t\t\t\t\t\t{ $$ = \"old\"; }\n--- 5940,5951 ----\n***************\n*** 5938,5975 ****\n \t\t| ONLY\t\t\t\t\t\t\t{ $$ = \"only\"; }\n \t\t| OR\t\t\t\t\t\t\t{ $$ = \"or\"; }\n \t\t| ORDER\t\t\t\t\t\t\t{ $$ = \"order\"; }\n- \t\t| OUT\t\t\t\t\t\t\t{ $$ = \"out\"; }\n \t\t| OUTER_P\t\t\t\t\t\t{ $$ = \"outer\"; }\n \t\t| OVERLAPS\t\t\t\t\t\t{ $$ = \"overlaps\"; }\n \t\t| POSITION\t\t\t\t\t\t{ $$ = \"position\"; }\n- \t\t| PRECISION\t\t\t\t\t\t{ $$ = \"precision\"; }\n \t\t| PRIMARY\t\t\t\t\t\t{ $$ = \"primary\"; }\n \t\t| PUBLIC\t\t\t\t\t\t{ $$ = \"public\"; }\n \t\t| REFERENCES\t\t\t\t\t{ $$ = \"references\"; }\n- \t\t| RESET\t\t\t\t\t\t\t{ $$ = \"reset\"; }\n \t\t| RIGHT\t\t\t\t\t\t\t{ $$ = \"right\"; }\n \t\t| SELECT\t\t\t\t\t\t{ $$ = \"select\"; }\n \t\t| SESSION_USER\t\t\t\t\t{ $$ = \"session_user\"; }\n- \t\t| SETOF\t\t\t\t\t\t\t{ $$ = \"setof\"; }\n- \t\t| SHOW\t\t\t\t\t\t\t{ $$ = \"show\"; }\n \t\t| SOME\t\t\t\t\t\t\t{ $$ = \"some\"; }\n \t\t| SUBSTRING\t\t\t\t\t\t{ $$ = \"substring\"; }\n \t\t| TABLE\t\t\t\t\t\t\t{ $$ = \"table\"; }\n \t\t| THEN\t\t\t\t\t\t\t{ $$ = \"then\"; }\n- \t\t| TIME\t\t\t\t\t\t\t{ $$ = \"time\"; }\n- \t\t| TIMESTAMP\t\t\t\t\t\t{ $$ = \"timestamp\"; }\n \t\t| TO\t\t\t\t\t\t\t{ $$ = \"to\"; }\n \t\t| TRAILING\t\t\t\t\t\t{ $$ = \"trailing\"; }\n- \t\t| TRANSACTION\t\t\t\t\t{ $$ = \"transaction\"; }\n \t\t| TRIM\t\t\t\t\t\t\t{ $$ = \"trim\"; }\n \t\t| TRUE_P\t\t\t\t\t\t{ $$ = \"true\"; }\n \t\t| UNION\t\t\t\t\t\t\t{ $$ = \"union\"; }\n \t\t| UNIQUE\t\t\t\t\t\t{ $$ = \"unique\"; }\n- \t\t| UNKNOWN\t\t\t\t\t\t{ $$ = \"unknown\"; }\n \t\t| USER\t\t\t\t\t\t\t{ $$ = \"user\"; }\n \t\t| USING\t\t\t\t\t\t\t{ $$ = \"using\"; }\n- \t\t| VACUUM\t\t\t\t\t\t{ $$ = \"vacuum\"; }\n- \t\t| VARCHAR\t\t\t\t\t\t{ $$ = \"varchar\"; }\n \t\t| VERBOSE\t\t\t\t\t\t{ $$ = \"verbose\"; }\n \t\t| WHEN\t\t\t\t\t\t\t{ $$ = \"when\"; }\n \t\t| WHERE\t\t\t\t\t\t\t{ $$ = \"where\"; }\n--- 5953,5979 ----",
"msg_date": "Thu, 08 Nov 2001 19:44:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Call for objections: revision of keyword classification"
},
{
"msg_contents": "\nSeems fine to apply. It would be nice if we had a more general system\nfor adding keywords and having them be column label/function name\ncapable. Right now I know I need to add to keyword.c but I have no idea\nif/when I need to add to the keyword list in gram.y. Can we move the\nkeywords out into another file and somehow pull them into gram.y with\nthe proper attributes so they get into all the places they need to be\nwith little fiddling?\n\n---------------------------------------------------------------------------\n\n> Since we've already seen two complaints about \"timestamp\" no longer\n> being an allowed column name in 7.2, I think it's probably time to\n> make a serious effort at trimming the reserved-word list a little.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Nov 2001 21:05:07 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It would be nice if we had a more general system\n> for adding keywords and having them be column label/function name\n> capable. Right now I know I need to add to keyword.c but I have no idea\n> if/when I need to add to the keyword list in gram.y.\n\n*Every* new keyword should be in one of the keyword lists in gram.y.\n\nI tried to clean up the documentation of which list does which and why\nin the proposed patch --- what do you think of it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 21:17:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "> Since we've already seen two complaints about \"timestamp\" no longer\n> being an allowed column name in 7.2, I think it's probably time to\n> make a serious effort at trimming the reserved-word list a little.\n\nCool.\n\nThe only reservation I have (pun not *really* intended ;) is that the\nSQL9x reserved words may continue to impact us into the future, so\nfreeing them up now may just postpone the pain until later. That\nprobably is not a good enough argument (*I* don't even like it) but any\nextra flexibility we put in now is not guaranteed to last forever...\n\nIn either case, having reserved words which are also reserved in the SQL\nstandard will not keep folks from using PostgreSQL, and allowing them\nwill not be a difference maker in adoption either imho.\n\n - Thomas\n",
"msg_date": "Fri, 09 Nov 2001 02:28:32 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> The only reservation I have (pun not *really* intended ;) is that the\n> SQL9x reserved words may continue to impact us into the future, so\n> freeing them up now may just postpone the pain until later. That\n> probably is not a good enough argument (*I* don't even like it) but any\n> extra flexibility we put in now is not guaranteed to last forever...\n\nOf course not, but we might as well do what we can while we can.\n\nOne positive point is that (I think) we are pretty close to SQL9x now\non datatype declaration syntax, so if we can make these words unreserved\nor less-reserved today, it's not unreasonable to think they might be\nable to stay that way indefinitely.\n\n> In either case, having reserved words which are also reserved in the SQL\n> standard will not keep folks from using PostgreSQL, and allowing them\n> will not be a difference maker in adoption either imho.\n\nNo, it won't. I'm mainly doing this to try to minimize the pain of\npeople porting forward from previous Postgres releases, in which\n(some of) these words weren't reserved. That seems a worthwhile\ngoal to me, even if in the long run they end up absorbing the pain\nanyway. Certain pain now vs maybe-or-maybe-not pain later is an\neasy tradeoff ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 21:34:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can we move the keywords out into another file and somehow pull them\n> into gram.y with the proper attributes so they get into all the places\n> they need to be with little fiddling?\n\nThinking about that, it seems like it might be nice to have a master\nkeyword file that contains just keywords and classifications:\n\n\tAS\t\tHard-reserved\n\tCASE\t\tColLabel\n\tABSOLUTE\tTypeFuncId\n\tBIT\t\tColId\n\nand make some scripts that generate both keyword.c and the list\nproductions in gram.y automatically. (Among other things, we could stop\ntrusting manual sorting of the keyword.c entries ...) Peter's\ndocumentation generator would no doubt be a lot happier too --- we\ncould add indications of SQL92 and SQL99 reserved status to this\nmaster file, for example.\n\nHowever, right offhand I don't see any equivalent of #include in the\nBison manual, so I'm not sure how the autogenerated list productions\ncould be included into the hand-maintained part of gram.y. Thoughts?\n\n\t\t\tregards, tom lane\n\nPS: no, I'm *not* suggesting we do this during beta.\n",
"msg_date": "Thu, 08 Nov 2001 21:48:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can we move the keywords out into another file and somehow pull them\n> > into gram.y with the proper attributes so they get into all the places\n> > they need to be with little fiddling?\n> \n> Thinking about that, it seems like it might be nice to have a master\n> keyword file that contains just keywords and classifications:\n> \n> \tAS\t\tHard-reserved\n> \tCASE\t\tColLabel\n> \tABSOLUTE\tTypeFuncId\n> \tBIT\t\tColId\n> \n> and make some scripts that generate both keyword.c and the list\n> productions in gram.y automatically. (Among other things, we could stop\n> trusting manual sorting of the keyword.c entries ...) Peter's\n> documentation generator would no doubt be a lot happier too --- we\n> could add indications of SQL92 and SQL99 reserved status to this\n> master file, for example.\n> \n> However, right offhand I don't see any equivalent of #include in the\n> Bison manual, so I'm not sure how the autogenerated list productions\n> could be included into the hand-maintained part of gram.y. Thoughts?\n\nYes, this is what I was suggesting; a central file that can be pulled\nin to generate the others.\n\nDoesn't bison deal with #include? I guess not. The only other way is\nto make a gram.y.pre, and have Makefile do the inclusions in the proper\nspot, and run that new gram.y through bison. The fact is, you have to\nprocess the central file anyway so may as well just do the gram.y\nreplacements manually too.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Nov 2001 23:04:17 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Doesn't bison deal with #include? I guess not. The only other way is\n> to make a gram.y.pre, and have Makefile do the inclusions in the proper\n> spot, and run that new gram.y through bison.\n\nI was hoping to avoid that sort of kluge ... surely the bison designers\nthought of include, and I'm just not seeing how it's done ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 23:19:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Doesn't bison deal with #include? I guess not. The only other way is\n> > to make a gram.y.pre, and have Makefile do the inclusions in the proper\n> > spot, and run that new gram.y through bison.\n> \n> I was hoping to avoid that sort of kluge ... surely the bison designers\n> thought of include, and I'm just not seeing how it's done ...\n\nWhat does #include do? Doesn't it work?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Nov 2001 23:31:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What does #include do? Doesn't it work?\n\nAFAICT it's only allowed in the C-code sections of gram.y, from which\nit's just transposed into the output .c file (as indeed you'd want;\nyou wouldn't want your header files expanded when bison is run).\n\nI'm not seeing anything that supports inclusion of a file in the\ngrammar-productions portion of gram.y.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 23:35:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
    "msg_contents": "\nHow do some of the other RDBMSs handle this? I've gotten into the habit\nawhile ago of not using 'field types' as 'field names' that not using\nsomething like 'timestamp' as a field name comes naturally ... ignoring\ngoing from old-PgSQL to new-PgSQL ... what about PgSQL->Oracle? I\npersonally like it when I see apps out there that strive to work with\ndifferent DBs, I'd hate to see it be us that makes life more difficult for\nppl to make choices because we 'softened restrictions' on reserved words,\nallowing someone to create an app that works great under us, but is now a\nheadache to change to someone else's RDBMSs as a result ...\n\n... if that makes any sense?\n\nOn Thu, 8 Nov 2001, Tom Lane wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > The only reservation I have (pun not *really* intended ;) is that the\n> > SQL9x reserved words may continue to impact us into the future, so\n> > freeing them up now may just postpone the pain until later. That\n> > probably is not a good enough argument (*I* don't even like it) but any\n> > extra flexibility we put in now is not guaranteed to last forever...\n>\n> Of course not, but we might as well do what we can while we can.\n>\n> One positive point is that (I think) we are pretty close to SQL9x now\n> on datatype declaration syntax, so if we can make these words unreserved\n> or less-reserved today, it's not unreasonable to think they might be\n> able to stay that way indefinitely.\n>\n> > In either case, having reserved words which are also reserved in the SQL\n> > standard will not keep folks from using PostgreSQL, and allowing them\n> > will not be a difference maker in adoption either imho.\n>\n> No, it won't. I'm mainly doing this to try to minimize the pain of\n> people porting forward from previous Postgres releases, in which\n> (some of) these words weren't reserved. That seems a worthwhile\n> goal to me, even if in the long run they end up absorbing the pain\n> anyway. Certain pain now vs maybe-or-maybe-not pain later is an\n> easy tradeoff ;-)\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Thu, 8 Nov 2001 23:46:21 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What does #include do? Doesn't it work?\n> \n> AFAICT it's only allowed in the C-code sections of gram.y, from which\n> it's just transposed into the output .c file (as indeed you'd want;\n> you wouldn't want your header files expanded when bison is run).\n> \n> I'm not seeing anything that supports inclusion of a file in the\n> grammar-productions portion of gram.y.\n\nIt would be very easy to simulate the #include in the action section\nusing a small awk script. I can assist.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Nov 2001 23:51:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> I'd hate to see it be us that makes life more difficult for\n> ppl to make choices because we 'softened restrictions' on reserved words,\n> allowing someone to create an app that works great under us, but is now a\n> headache to change to someone else's RDBMSs as a result ...\n\nWell, I could see making a \"strict SQL\" mode that rejects *all* PG-isms,\nbut in the absence of such a thing I don't see much value to taking a\nhard line just on the point of disallowing keywords as field names.\nThat seems unlikely to be anyone's worst porting headache ...\n\nYour question is valid though: do other RDBMSs take a hard line on\nhow reserved keywords are? I dunno.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 23:52:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "...\n> Thinking about that, it seems like it might be nice to have a master\n> keyword file that contains just keywords and classifications:\n...\n> and make some scripts that generate both keyword.c and the list\n> productions in gram.y automatically. (Among other things, we could stop\n> trusting manual sorting of the keyword.c entries ...) Peter's\n> documentation generator would no doubt be a lot happier too --- we\n> could add indications of SQL92 and SQL99 reserved status to this\n> master file, for example.\n\nistm that we would have a better time using gram.y as the definitive\nsource for this list. Trying to stuff gram.y from some other source file\nmoves the information another step away from bison, which is the\ndefinitive arbiter of correct behavior and syntax. Complaints that\nthings are too hard to figure out won't get better by having more\nindirection in the process, and no matter how we do it one will still\nneed to understand the relationships between tokens and productions.\n\nWe could have a perl script (haven't looked; maybe Peter's utility\nalready does this?) which rummages through gram.y and generates\nkeyword.c. And if we wanted to categorize what we implement wrt SQL9x\ndefinitions, we should do a join from lists in SQL9x against our\nkeywords, rather than trying to maintain that relationship manually. We\ncould even find some database to do it for us ;)\n\n - Thomas\n",
"msg_date": "Fri, 09 Nov 2001 06:21:57 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> Thinking about that, it seems like it might be nice to have a master\n>> keyword file that contains just keywords and classifications:\n\n> istm that we would have a better time using gram.y as the definitive\n> source for this list.\n\nThat's what we're doing now, more or less, and it's got glaring\ndeficiencies. It's nearly unintelligible (cf Bruce's complaint\nearlier in this thread) and it's horribly prone to human error.\nHere are just three depressingly-easy-to-make mistakes against\nwhich we have no mechanical check:\n\n\t* keyword production mismatches token and action, eg\n\n\t\t| FOO\t\t\t{ $$ = \"bar\"; }\n\n\t* failure to add new keyword to any of the appropriate lists;\n\n\t* messing up the perfect sort order required in keyword.c.\n\nWhat's worse is that the consequences of these mistakes are relatively\nsubtle and could escape detection for awhile. I'd like to see mistakes\nof this kind become procedurally impossible.\n\n> We could have a perl script (haven't looked; maybe Peter's utility\n> already does this?) which rummages through gram.y and generates\n> keyword.c.\n\nI believe Peter's already doing some form of this, but gram.y is a\nforbiddingly unfriendly form of storage for this information. It'd\nbe a lot easier and less mistake-prone to start from a *designed*\nkeyword database and generate the appropriate lists in gram.y.\n\nBTW, another thing in the back of my mind is that we should try to\nfigure out some way to unify ecpg's SQL grammar with the backend's.\nMaintaining that thing is an even bigger headache than getting the\nbackend's own parser right.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 12:04:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "> That's what we're doing now, more or less, and it's got glaring\n> deficiencies. It's nearly unintelligible (cf Bruce's complaint\n> earlier in this thread) and it's horribly prone to human error.\n> Here are just three depressingly-easy-to-make mistakes against\n> which we have no mechanical check:\n\nZounds! How could this ever have worked??!! ;)\n\n> What's worse is that the consequences of these mistakes are relatively\n> subtle and could escape detection for awhile. I'd like to see mistakes\n> of this kind become procedurally impossible.\n\nNo disagreement with the goals...\n\n> I believe Peter's already doing some form of this, but gram.y is a\n> forbiddingly unfriendly form of storage for this information. It'd\n> be a lot easier and less mistake-prone to start from a *designed*\n> keyword database and generate the appropriate lists in gram.y.\n\nCertainly gram.y is forbidding to beginners and those who don't spend\nmuch time in the code, but separating blocks of the code into external\nfiles only increases the indirection. One still has to *understand* what\ngram.y is doing, and no amount of reorganization will keep one from the\npossibility of shift/reduce problems with new productions.\n\nOne possibility would be to put better comments into gram.y, and to back\nthose comments up with a validation script that *could* generate\nkeyword.c and other cross references. A bit more structure to the\ncomments and code would enable that I think.\n\n> BTW, another thing in the back of my mind is that we should try to\n> figure out some way to unify ecpg's SQL grammar with the backend's.\n> Maintaining that thing is an even bigger headache than getting the\n> backend's own parser right.\n\nThat would be nice. Unfortunately that would lead to the main parser\nhaving the same machinations used in ecpg, with separate subroutine\ncalls for *every* production. Yuck. I wonder if some other structure\nwould be possible...\n\n - Thomas\n",
"msg_date": "Fri, 09 Nov 2001 18:14:09 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n>> BTW, another thing in the back of my mind is that we should try to\n>> figure out some way to unify ecpg's SQL grammar with the backend's.\n>> Maintaining that thing is an even bigger headache than getting the\n>> backend's own parser right.\n\n> That would be nice. Unfortunately that would lead to the main parser\n> having the same machinations used in ecpg, with separate subroutine\n> calls for *every* production. Yuck.\n\nThe thing is that most of the actions in ecpg's grammar could easily be\ngenerated mechanically. My half-baked idea here is some sort of script\nthat would take the backend grammar, strip out the backend's actions and\nreplace 'em with mechanically-generated actions that reconstruct the\nquery string, and finally merge with a small set of hand-maintained\nrules that reflect ecpg's distinctive features.\n\nYou're quite right that nothing like this will reduce the amount that\nmaintainers have to know. But I think it could reduce the amount of\ntedious, purely mechanical, and error-prone maintenance work that we\nhave to do to keep various files and lists in sync.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 13:25:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "> One possibility would be to put better comments into gram.y, and to back\n> those comments up with a validation script that *could* generate\n> keyword.c and other cross references. A bit more structure to the\n> comments and code would enable that I think.\n\nA validation script is a good intermediate idea, similar to our\nduplicate_oids we have in include/catalog. It would make sure\nkeywords.c was sorted, and make sure each keyword appeared somewhere in\nlists of allowed function/column name productions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Nov 2001 13:49:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> A validation script is a good intermediate idea,\n\nIMHO a validation script would be *far* harder than the alternative\nI'm proposing, because it'd have to parse and interpret gram.y and\nkeyword.c. Building a correct-by-construction set of keyword lists\nseems much easier than checking their rather messy representation\nin those files.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 14:09:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > A validation script is a good intermediate idea,\n> \n> IMHO a validation script would be *far* harder than the alternative\n> I'm proposing, because it'd have to parse and interpret gram.y and\n> keyword.c. Building a correct-by-construction set of keyword lists\n> seems much easier than checking their rather messy representation\n> in those files.\n\nAgreed. It just removed the indirection problem mentioned by Thomas.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Nov 2001 14:14:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Tom Lane writes:\n\n> The keyword classification now looks like:\n>\n> TypeFuncId:\tIDENT plus all fully-unrestricted keywords\n>\n> ColId:\t\tTypeFuncId plus type-name keywords that might be\n> \t\tfollowed by '('; these can't be allowed to be\n> \t\tfunction names, but they can be column names.\n>\n> func_name:\tTypeFuncId plus a few special-case ColLabels\n> \t\t(this list could probably be extended further)\n>\n> ColLabel:\tColId plus everything else\n>\n> Comments? I'd like to apply this, unless there are objections.\n\nIs there any reason why ColLabel does not include func_name? All the\ntokens listed in func_name are also part of ColLabel.\n\n> I suppose Peter might complain about having to redo the keyword\n> tables ;-)\n\nThe question is, do we want to give the user that much detail, or should\nwe just say\n\nTypeFuncId, ColId -> \"non-reserved\"\nfunc_name, ColLabel -> \"reserved\" (along with the explanations in the\ntext)\n\nThe plain reserved/non-reserved scheme makes it easier to match up the\nPostgreSQL column with the SQL9x columns, and hopefully less users will\nnitpick about whatever details or consider the categories a promise for\nall future.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 14 Nov 2001 17:25:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Is there any reason why ColLabel does not include func_name? All the\n> tokens listed in func_name are also part of ColLabel.\n\nCan't do it directly (ie, make func_name one of the alternatives for\nColLabel) because that would result in a ton of shift-reduce conflicts:\nall the keywords in TypeFuncId would have two ways to be reduced to\nColLabel (via ColId or via func_name). We could restructure things\nby adding an auxiliary category:\n\n\tfunc_name: TypeFuncId | func_name_keywords;\n\n\tfunc_name_keywords: BETWEEN | BINARY | ... ;\n\n\tColLabel: ColId | func_name_keywords | ALL | ANALYSE | ... ;\n\nbut I'm not convinced that that's materially cleaner. Comments?\n\n> The question is, do we want to give the user that much detail, or should\n> we just say\n\n> TypeFuncId, ColId -> \"non-reserved\"\n> func_name, ColLabel -> \"reserved\" (along with the explanations in the\n> text)\n\nColId is certainly the most important category for ordinary users, so\nI agree that division would be sufficient for most people's purposes.\nHowever ... seems like the point of having this documentation at all\nis for it to be complete and accurate. I'd vote for telling the whole\ntruth, I think.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Nov 2001 11:33:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "BTW, is there any good reason that AS is not a member of the ColLabel\nset? It wouldn't cause a parse conflict to add it (I just tested that)\nand it seems like that's a special case we could do without.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Nov 2001 11:43:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I found that COALESCE, EXISTS, EXTRACT, NULLIF, POSITION, SUBSTRING, TRIM\n> can be moved from ColLabel to ColId.\n\nReally? I didn't bother experimenting with anything that had special\na_expr productions ... but now that you mention it, it makes sense that\nanything whose special meaning requires a following left paren could\nwork as a ColId.\n\nAre you recommending that we actually make this change now, or leave\nit for a future round of experiments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Nov 2001 11:14:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "Tom Lane writes:\n\n> ColId is certainly the most important category for ordinary users, so\n> I agree that division would be sufficient for most people's purposes.\n> However ... seems like the point of having this documentation at all\n> is for it to be complete and accurate. I'd vote for telling the whole\n> truth, I think.\n\nOkay, here's the new definition of truth then:\n\nTypeFuncId => \"non-reserved\"\nColId => \"non-reserved (cannot be function or type)\"\nfunc_name => \"reserved (can be function)\"\nColId => \"reserved\"\n\nThis can still be matched well against the SQL 9x columns.\n\nBut it gets worse... ;-)\n\nI found that COALESCE, EXISTS, EXTRACT, NULLIF, POSITION, SUBSTRING, TRIM\ncan be moved from ColLabel to ColId. (This makes sense given the new\ndefinition of ColId as above.) However, I *think* it should be possible\nto use these tokens as type names if one were willing to refactor these\nlists further. So there's possibly plenty of fun left in this area. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 15 Nov 2001 17:16:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "Tom Lane writes:\n\n> BTW, is there any good reason that AS is not a member of the ColLabel\n> set? It wouldn't cause a parse conflict to add it (I just tested that)\n> and it seems like that's a special case we could do without.\n\nFine with me. I guess I'll wait with the new table a bit yet. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 15 Nov 2001 17:16:21 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Call for objections: revision of keyword classification "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Fine with me. I guess I'll wait with the new table a bit yet. ;-)\n\nThe coast is clear now ...\n\nI divided the keywords into four mutually exclusive, all inclusive\ncategory lists:\n\tunreserved_keyword\n\tcol_name_keyword\n\tfunc_name_keyword\n\treserved_keyword\nwhich I trust will not be a problem for your document-generating\nscript.\n\necpg has some finer distinctions but I'm happy to leave those\nundocumented.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Nov 2001 23:12:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Call for objections: revision of keyword classification "
}
] |
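The four mutually exclusive keyword lists Tom describes determine where a keyword may appear without quoting. A minimal illustrative sketch of that lookup (this is not PostgreSQL source; the category semantics follow the thread, and the table structure is an assumption for illustration):

```python
# Illustrative sketch of the four-way keyword classification from the
# thread: each category says whether a keyword of that class can appear
# as a bare column name, as a function/type name, or only as a label.
CATEGORIES = {
    "unreserved_keyword": {"column": True, "type_or_func": True},
    "col_name_keyword":   {"column": True, "type_or_func": False},
    "func_name_keyword":  {"column": False, "type_or_func": True},
    "reserved_keyword":   {"column": False, "type_or_func": False},
}

def usable_as_column_name(category):
    """True if a keyword of this category can be used as a bare column name."""
    return CATEGORIES[category]["column"]

def usable_as_type_or_func_name(category):
    """True if a keyword of this category can name a function or type."""
    return CATEGORIES[category]["type_or_func"]
```

Under this scheme, Peter's observation that COALESCE and friends moved from ColLabel to ColId corresponds to moving them into the "usable as column, not as function/type" bucket.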
[
{
"msg_contents": "\tMaybe I'm missing something, but it looks like the linker\nisn't honoring the LD_RUN_PATH environment variable and therefore\nisn't searching /usr/local/lib at runtime of applications. I can set\nLD_LIBRARY_PATH in my environment and everything works, but I'd like\nto distribute some solaris packages and not require LD_LIBRARY_PATH to\nbe set. Any thoughts on the following? I'm wondering if a new shell\nis being spawned that cleans the environment. ?? -sc\n\n> pg_dump -u eman\nld.so.1: pg_dump: fatal: libkrb5.so.3: open failed: No such file or directory\nKilled\n> ldd /usr/local/pgsql/bin/pg_dump \n libpq.so.2 => /usr/local/pgsql/lib/libpq.so.2\n libkrb5.so.3 => (file not found)\n libk5crypto.so.3 => (file not found)\n libcom_err.so.3 => (file not found)\n librt.so.1 => /usr/lib/librt.so.1\n libz.so => /usr/lib/libz.so\n libresolv.so.2 => /usr/lib/libresolv.so.2\n libgen.so.1 => /usr/lib/libgen.so.1\n libnsl.so.1 => /usr/lib/libnsl.so.1\n libsocket.so.1 => /usr/lib/libsocket.so.1\n libdl.so.1 => /usr/lib/libdl.so.1\n libm.so.1 => /usr/lib/libm.so.1\n libc.so.1 => /usr/lib/libc.so.1\n libgcc_s.so.1 => (file not found)\n libkrb5.so.3 => (file not found)\n libk5crypto.so.3 => (file not found)\n libcom_err.so.3 => (file not found)\n libgcc_s.so.1 => (file not found)\n libaio.so.1 => /usr/lib/libaio.so.1\n libmp.so.2 => /usr/lib/libmp.so.2\n /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1\n> setenv LD_LIBRARY_PATH /usr/local/lib\n> ldd /usr/local/pgsql/bin/pg_dump\n libpq.so.2 => /usr/local/pgsql/lib/libpq.so.2\n libkrb5.so.3 => /usr/local/lib/libkrb5.so.3\n libk5crypto.so.3 => /usr/local/lib/libk5crypto.so.3\n libcom_err.so.3 => /usr/local/lib/libcom_err.so.3\n librt.so.1 => /usr/lib/librt.so.1\n libz.so => /usr/local/lib/libz.so\n libresolv.so.2 => /usr/lib/libresolv.so.2\n libgen.so.1 => /usr/lib/libgen.so.1\n libnsl.so.1 => /usr/lib/libnsl.so.1\n libsocket.so.1 => /usr/lib/libsocket.so.1\n libdl.so.1 => /usr/lib/libdl.so.1\n libm.so.1 => 
/usr/lib/libm.so.1\n libc.so.1 => /usr/lib/libc.so.1\n libgcc_s.so.1 => /usr/local/lib/libgcc_s.so.1\n libaio.so.1 => /usr/lib/libaio.so.1\n libmp.so.2 => /usr/lib/libmp.so.2\n /usr/platform/SUNW,Ultra-5_10/lib/libc_psr.so.1\n> pg_dump -u eman\nUser name: \n",
"msg_date": "Thu, 8 Nov 2001 17:37:13 -0800",
"msg_from": "Sean Chittenden <sean@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Linking probs on Sol8..."
}
] |
[
{
"msg_contents": "I find the pg_hba.conf file to be frustrating in its configuration. I don't\nsee anywhere in the document itself that I would be able to use a netmask,\nsay:\n\n192.168/16\n\nwhich I would find to be far easier than the current setup. It doesn't strike\nme as particularly difficult to include, and I think a lot of people would\ntake to it easier than the current setup.\n\nWhere might I propose such a patch? I'd really love to do it myself, except\nthat I'm a perl hacker and don't know the C necessary.\n\nThanks,\nalex\n\n--\nalex j. avriette\nperl hacker.\na_avriette@acs.org\n$dbh -> do('unhose');\n",
"msg_date": "Fri, 9 Nov 2001 09:31:46 -0500 ",
"msg_from": "Alex Avriette <a_avriette@acs.org>",
"msg_from_op": true,
"msg_subject": "Where might I propose a 'feature'?"
},
{
"msg_contents": "> I find the pg_hba.conf file to be frustrating in its configuration. I dont\n> see anywhere in the document itself that I would be able to use a netmask,\n> say:\n> \n> 192.168/16\n> \n> which I would find to be far easier than the current setup. It doesnt strike\n> me as particularly difficult to include, and I think a lot of people would\n> take to it easier than the current setup.\n> \n> Where might I propose such a patch? I'd really love to do it myself, except\n> that I'm a perl hacker and don't know the C necessary.\n\nThere is a clear \"MASK\" column in the pg_hba.conf file. Perhaps look at\nthe 7.2 version of the file as the documentation in it has been\nimproved.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Nov 2001 13:44:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Where might I propose a 'feature'?"
}
] |
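The `192.168/16` shorthand Alex proposes is CIDR notation, and it is exactly equivalent to the explicit address-plus-MASK pair that pg_hba.conf used at the time. A small sketch with Python's stdlib `ipaddress` module shows the equivalence (note that `ipaddress` requires the zero-padded form `192.168.0.0/16` rather than the bare `192.168/16`):

```python
# CIDR prefix vs. explicit netmask: /16 is shorthand for 255.255.0.0,
# so a "192.168/16" rule and a "192.168.0.0 + 255.255.0.0" rule match
# the same set of client addresses.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/16")
print(net.netmask)  # the MASK column equivalent: 255.255.0.0

# A client address matches the rule iff it falls inside the network:
print(ipaddress.ip_address("192.168.4.7") in net)
print(ipaddress.ip_address("10.0.0.1") in net)
```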
[
{
"msg_contents": "I'm going to look into this, but thought I'd share the misery:\n\nCREATE FUNCTION crash() RETURNS varchar AS '\nplpy.execute(\"syntax error\")\n' language 'plpython';\n\nSELECT crash();\n\n-Brad\n",
"msg_date": "Fri, 9 Nov 2001 14:01:30 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "Plpython crashing the backend in one easy step"
},
{
"msg_contents": "I need some expert guidance here. Suppose you have:\n\nCREATE FUNCTION crash() RETURNS varchar AS '\nplpy.execute(\"syntax error\")\n' language 'plpython';\n\nHere are three possible behaviors:\n\n(A)\n\na123=# select crash();\nERROR: parser: parse error at or near \"syntax\"\nERROR: plpython: Call of function `__plpython_procedure_crash_41133' failed.\nplpy.SPIError: Unknown error in PLy_spi_execute_query.\nFATAL 2: elog: error during error recovery, giving up!\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!#\n\n(B)\n\na123=# select crash();\nERROR: parser: parse error at or near \"syntax\"\na123=#\n\n(C)\n\na123=# select crash();\nERROR: parser: parse error at or near \"syntax\"\nERROR: plpython: Call of function `__plpython_procedure_crash_41133' failed.\nplpy.SPIError: Unknown error in PLy_spi_execute_query.\na123=#\n\nOption (A) is the current code.\n\nFixing this happens near line 2290 (could be off a bit, I have some other\npatches in this file), in the first if clause in PLy_spi_execute_query.\n\nThe DECLARE_EXC, SAVE_EXC, TRAP_EXC, RESTORE_EXC, RERAISE_EXC macros\nare wrappers around sigsetjmp and longjmp, and are used to intercept\nthe elog calls occurring when plpython calls spi.\n\nIn Option (A), we return NULL, which causes the next level of code\nto call elog. Elog notices that we're already in an Error state, and\nshuts down the backend.\n\nIn Option (B), I replace the 'return NULL' with a RERAISE_EXC, which\nallows elog to function normally, albeit without any cleanup of the\nplpython environment or useful messages.\n\nIn Option (C), I set the global \"InError\" flag to false, and then\nreturn NULL, causing all of the error messages to come out and\nplpython to clean up gracefully, no backend crash. 
However, this\nseems to be an unprecedented approach, and I could be missing\nsomething big.\n\nThere's probably an Option (D) that I'm overlooking.\n\nHELP! (thanks)\n\n-Brad\n",
"msg_date": "Tue, 13 Nov 2001 14:32:45 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "Re: Plpython crashing the backend in one easy step"
},
{
"msg_contents": "Bradley McLean <brad@bradm.net> writes:\n> In Option (C), I set the global \"InError\" flag to false, and then\n> return NULL, causing all of the error messages to come out and\n> plpython to clean up gracefully, no backend crash. However, this\n> seems to be an unprecedented approach, and I could be missing\n> something big.\n\nYes, as in \"it's totally unsafe\". Suppressing an elog(ERROR) is \na *big* no-no at present, because way too much stuff relies on\npost-abort cleanup to clean up whatever problem is being reported.\nYou cannot allow the transaction to continue after the error, and\nyou most certainly mustn't cavalierly reset the error handling state.\n\nThe only things you should be doing with longjmp trapping are\n(a) doing any cleanup that Python itself has to have before you\nre-propagate the longjmp, or\n(b) issuing elog(NOTICE) to help identify the error location\nbefore you re-propagate the longjmp.\n\nplpgsql contains an example of doing (b).\n\nNot propagating the longjmp, which is what the code seems to be doing\nat present, is not acceptable. I had not looked at this code closely,\nbut as-is it is a huge reliability hazard. I will insist on removing\nplpython from 7.2 entirely if this can't be fixed before release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 16:55:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Plpython crashing the backend in one easy step "
},
{
"msg_contents": "Thanks, I assumed something like that.\n\nI can very quickly provide an implementation that meets\nthese criteria (It's my option \"b\"), but it may be\nless informative in terms of messages.\n\nI'll have a look to see what I can do to supplement\nthe messages.\n\nI also need to check if there's a memory leak after I\nfix it.\n\nI was aware of the example in plpgsql.\n\n-Brad\n\n\n* Tom Lane (tgl@sss.pgh.pa.us) [011113 17:08]:\n> \n> Yes, as in \"it's totally unsafe\". Suppressing an elog(ERROR) is \n> a *big* no-no at present, because way too much stuff relies on\n> post-abort cleanup to clean up whatever problem is being reported.\n> You cannot allow the transaction to continue after the error, and\n> you most certainly mustn't cavalierly reset the error handling state.\n> \n> The only things you should be doing with longjmp trapping are\n> (a) doing any cleanup that Python itself has to have before you\n> re-propagate the longjmp, or\n> (b) issuing elog(NOTICE) to help identify the error location\n> before you re-propagate the longjmp.\n> \n> plpgsql contains an example of doing (b).\n> \n> Not propagating the longjmp, which is what the code seems to be doing\n> at present, is not acceptable. I had not looked at this code closely,\n> but as-is it is a huge reliability hazard. I will insist on removing\n> plpython from 7.2 entirely if this can't be fixed before release.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Tue, 13 Nov 2001 17:28:26 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "Re: Plpython crashing the backend in one easy step"
},
{
"msg_contents": "Okay, the attached patch contains the following:\n\n- Kevin Jacobs's patch to make plpython worthy of a 'trusted' language\nby eliminating the ability to arbitrarily read OS files. He also did\na bunch of C declaration cleanups. (His patch has yet to appear on\neither hackers or patches, and I believe is also a prerequisite for\nplpython in 7.2 final).\n\n- Changed every place that catches elog() longjmps to re-propagate.\n- Added code to keep track of the current plpython function and add\nit to the elog messages\n- #ifdef 0 around some items that appear to be not in use.\n- changes to the expected output of the regression tests, because:\n\nTom's requirement that we always reraise the longjmp is at odds with\nthe original design of plpython, which attempted to allow a plpython\nfunction to use Python's exception handling to recover when an\nembedded SPI call failed. There is no way to do this that I can\nsee without halting the propagation of the longjmp.\n\nThus, the semantics of 'execute' calls from plpython have been changed:\nshould the backend throw an error, the plpython function will be\nsummarily aborted. Previously, it could attempt to catch a python\nexception.\n\nNote that this does create some redundant exception handling paths\nin upper level methods within this module that are no longer used. \nI have not yet removed them, but will happily do so (in a careful\ndeliberate manner) once sure that there is no alternative way to\nrestore the python level exception handling.\n\nLet me know if this isn't what's needed to get this up to snuff\nfor 7.2.\n\n-Brad\n\n* Tom Lane (tgl@sss.pgh.pa.us) [011113 16:49]:\n>\n> Yes, as in \"it's totally unsafe\". 
Suppressing an elog(ERROR) is \n> a *big* no-no at present, because way too much stuff relies on\n> post-abort cleanup to clean up whatever problem is being reported.\n> You cannot allow the transaction to continue after the error, and\n> you most certainly mustn't cavalierly reset the error handling state.\n> \n> The only things you should be doing with longjmp trapping are\n> (a) doing any cleanup that Python itself has to have before you\n> re-propagate the longjmp, or\n> (b) issuing elog(NOTICE) to help identify the error location\n> before you re-propagate the longjmp.\n> \n> plpgsql contains an example of doing (b).\n> \n> Not propagating the longjmp, which is what the code seems to be doing\n> at present, is not acceptable. I had not looked at this code closely,\n> but as-is it is a huge reliability hazard. I will insist on removing\n> plpython from 7.2 entirely if this can't be fixed before release.\n> \n> \t\t\tregards, tom lane",
"msg_date": "Thu, 15 Nov 2001 13:55:47 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "Re: Plpython crashing the backend in one easy step - fix"
},
{
"msg_contents": "Bradley McLean <brad@bradm.net> writes:\n> Okay, the attached patch contains the following:\n\nI have checked over and applied this patch.\n\nIt appeared that you incorporated the earlier version of Kevin's patch\nrather than the later one. I made the further changes indicated by the\nattached diff, which I think are all the differences between Kevin's\ntwo submissions, but I'd appreciate it if both of you would review\nwhat's now in CVS and confirm it's okay. (There'll probably be a beta3\nrelease later today, so you can look at that if it's easier than CVS.)\n\nOne thing I noticed while running the selftest on a LinuxPPC box is\nthat the \"feature\" test output differed:\n\n--- feature.expected\tFri May 25 11:48:33 2001\n+++ feature.output\tFri Nov 16 13:00:15 2001\n@@ -29,7 +29,7 @@\n (1 row)\n \n SELECT import_fail();\n-NOTICE: ('import socket failed -- untrusted dynamic module: _socket',)\n+NOTICE: ('import socket failed -- untrusted dynamic module: socket',)\n import_fail \n --------------------\n failed as expected\n\nI assume you guys both get the \"expected\" output? Perhaps this should\nbe noted as a possible platform discrepancy.\n\n> Tom's requirement that we always reraise the longjmp is at odds with\n> the original design of plpython, which attempted to allow a plpython\n> function to use Python's exception handling to recover when an\n> embedded SPI call failed. There is no way to do this that I can\n> see without halting the propogation of the longjmp.\n> Thus, the semantics of 'execute' calls from plpython have been changed:\n> should the backend throw an error, the plpython function will be\n> summarily aborted. Previously, it could attempt to catch a python\n> exception.\n> Note that this does create some redundant exception handling paths\n> in upper level methods within this module that are no longer used. 
\n> I have not yet removed them, but will happily do so (in a careful\n> deliberate manner) once sure that there is no alternative way to\n> restore the python level exception handling.\n\nI would suggest leaving it as-is for now. Sooner or later we will\nprobably try to reduce the severity of most errors, and at that\npoint it'd become worthwhile to re-enable error trapping in plpython.\n\n\t\t\tregards, tom lane\n\n\n*** plpython.c~\tFri Nov 16 12:19:38 2001\n--- plpython.c\tFri Nov 16 12:24:43 2001\n***************\n*** 292,297 ****\n--- 292,298 ----\n \t\"binascii\",\n \t\"calendar\",\n \t\"cmath\",\n+ \t\"codecs\",\n \t\"errno\",\n \t\"marshal\",\n \t\"math\",\n***************\n*** 325,330 ****\n--- 326,332 ----\n \t\"hexrevision\",\n \t\"maxint\",\n \t\"maxunicode\",\n+ \t\"platform\",\n \t\"version\",\n \t\"version_info\"\n };\n",
"msg_date": "Fri, 16 Nov 2001 13:11:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Plpython crashing the backend in one easy step - fix "
},
{
"msg_contents": "On Fri, 16 Nov 2001, Tom Lane wrote:\n> --- feature.expected\tFri May 25 11:48:33 2001\n> +++ feature.output\tFri Nov 16 13:00:15 2001\n> @@ -29,7 +29,7 @@\n> (1 row)\n>\n> SELECT import_fail();\n> -NOTICE: ('import socket failed -- untrusted dynamic module: _socket',)\n> +NOTICE: ('import socket failed -- untrusted dynamic module: socket',)\n> import_fail\n> --------------------\n> failed as expected\n>\n> I assume you guys both get the \"expected\" output? Perhaps this should\n> be noted as a possible platform discrepancy.\n\nThis diff is most likely a platform issue where the module printed can\nchange depending on the version of Python installed. It's annoying, but\nharmless.\n\n-Kevin\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Fri, 16 Nov 2001 13:21:01 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": false,
"msg_subject": "Re: Plpython crashing the backend in one easy step - fix"
},
{
"msg_contents": "* Kevin Jacobs (jacobs@penguin.theopalgroup.com) [011116 13:15]:\n> On Fri, 16 Nov 2001, Tom Lane wrote:\n> > --- feature.expected\tFri May 25 11:48:33 2001\n> > +++ feature.output\tFri Nov 16 13:00:15 2001\n> > @@ -29,7 +29,7 @@\n> > (1 row)\n> >\n> > SELECT import_fail();\n> > -NOTICE: ('import socket failed -- untrusted dynamic module: _socket',)\n> > +NOTICE: ('import socket failed -- untrusted dynamic module: socket',)\n> > import_fail\n> > --------------------\n> > failed as expected\n> >\n> > I assume you guys both get the \"expected\" output? Perhaps this should\n> > be noted as a possible platform discrepancy.\n> \n> This diff is most likely a platform issue where the module printed can\n> change depending on the version of Python installed. Its annoying, but\n> harmless.\n\nI can confirm that: It occurs on stock python 1.5.2 (default on even the\nlatest RH 7.2 linux). Between 1.5 and 2.0 the low level code was moved\nfrom socket to _socket to make it possible to include a high level python\nwrapper called socket.\n\nI've reviewed the merge from CVS and it appears correct. Thanks!\n\n-Brad\n",
"msg_date": "Fri, 16 Nov 2001 15:00:01 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "Re: Plpython crashing the backend in one easy step - fix"
},
{
"msg_contents": "Bradley McLean <brad@bradm.net> writes:\n> I can confirm that: It occurs on stock python 1.5.2 (default on even the\n> latest RH 7.2 linux).\n\nBingo --- python 1.5.2 is what's installed on that machine. Seems\nto pass the self-test fine other than this one item.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 15:32:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Plpython crashing the backend in one easy step - fix "
},
{
"msg_contents": "\nHas this been addressed?\n\n---------------------------------------------------------------------------\n\n> Bradley McLean <brad@bradm.net> writes:\n> > In Option (C), I set the global \"InError\" flag to false, and then\n> > return NULL, causing all of the error messages to come out and\n> > plpython to clean up gracefully, no backend crash. However, this\n> > seems to be an unprecedented approach, and I could be missing\n> > something big.\n> \n> Yes, as in \"it's totally unsafe\". Suppressing an elog(ERROR) is \n> a *big* no-no at present, because way too much stuff relies on\n> post-abort cleanup to clean up whatever problem is being reported.\n> You cannot allow the transaction to continue after the error, and\n> you most certainly mustn't cavalierly reset the error handling state.\n> \n> The only things you should be doing with longjmp trapping are\n> (a) doing any cleanup that Python itself has to have before you\n> re-propagate the longjmp, or\n> (b) issuing elog(NOTICE) to help identify the error location\n> before you re-propagate the longjmp.\n> \n> plpgsql contains an example of doing (b).\n> \n> Not propagating the longjmp, which is what the code seems to be doing\n> at present, is not acceptable. I had not looked at this code closely,\n> but as-is it is a huge reliability hazard. I will insist on removing\n> plpython from 7.2 entirely if this can't be fixed before release.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 22:48:55 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Plpython crashing the backend in one easy step"
}
] |
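Tom's rule for the longjmp traps — do only the cleanup Python needs, optionally report the error location, then always re-propagate — has a direct analogue in ordinary exception handling. The backend code itself is C built on sigsetjmp/longjmp, but the pattern can be sketched in Python; the helper names here are hypothetical placeholders, not anything from plpython:

```python
# Sketch of the "trap, clean up, then re-propagate" rule from the thread.
# Swallowing the error (Brad's option C) is the unsafe path; the safe
# shape is a handler that always re-raises after its local cleanup.
def release_interpreter_state():
    # option (a): cleanup the embedded interpreter needs (placeholder)
    pass

def log_error_location(func):
    # option (b): elog(NOTICE)-style context before re-raising (placeholder)
    print("error while running", getattr(func, "__name__", "?"))

def call_interpreter_safely(func):
    try:
        return func()
    except Exception:
        release_interpreter_state()
        log_error_location(func)
        raise  # always re-propagate; never suppress the error
```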
[
{
"msg_contents": "I have a database in PG 7.1.3 with the following schema:\n\ndb02=# \\d ellipse\n Table \"ellipse\"\n Attribute | Type | Modifier \n--------------------------+--------------+----------\n subject | text | \n arm | character(1) | \n rep | integer | \n exp_date | date | \n exp_time | time | \n success | integer | \n figure_radius | integer | \n tube_radius | integer | \n cursor_radius | integer | \n direction | integer | \n ellipse_ratio | real | \n exag_ratio | real | \n exag_start | integer | \n exag_end | integer | \n rotation_angle | real | \n min_inter_trial_interval | integer | \nIndex: pkellipse\n\nIf I try the command:\ndb02=# select distinct arm from ellipse where exag_ratio = 1.0;\n arm \n-----\n L\n R\n(2 rows)\n\nwhich is correct.\n\nNow I try the same command with a different 'real' field:\ndb02=# select distinct arm from ellipse where ellipse_ratio = 1.8;\n arm \n-----\n(0 rows)\n\nBUT, if I put the value in quotes (as if it were a string), I get:\n\ndb02=# select distinct arm from ellipse where ellipse_ratio = '1.8';\n arm \n-----\n L\n R\n(2 rows)\n\nwhich is correct.\n\nThis variable ellipse_ratio seems to be the only one of type 'real'\nthat requires me to use quotes (which doesn't really make sense since\nit's not a character or string anyway). exag_ratio and rotation_angle\nbehave as I would expect a real-typed variable to behave.\n\ndb02=# select distinct exag_ratio, ellipse_ratio, rotation_angle from\nellipse;\n exag_ratio | ellipse_ratio | rotation_angle \n------------+---------------+----------------\n 1 | 0.56 | 0\n 1 | 1.8 | 0\n(2 rows)\n\n\n\nHas anyone seen this behavior before? Perhaps, I'm doing something\nwrong here or thinking of this all wrong?\n\nThanks.\n-Tony Reina\n\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? 
for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ndb02=# select version();\n version \n-------------------------------------------------------------\n PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\nPG server is RH Linux 7.1 (Seawolf), PIII 400 MHz\nVacuum verbose analyze performed just prior to the searches listed\njust to be sure.\n",
"msg_date": "9 Nov 2001 11:09:23 -0800",
"msg_from": "reina@nsi.edu (Tony Reina)",
"msg_from_op": true,
"msg_subject": "'real' strange problem in 7.1.3"
},
{
"msg_contents": "reina@nsi.edu (Tony Reina) writes:\n> db02=# select distinct arm from ellipse where ellipse_ratio = 1.8;\n> arm \n> -----\n> (0 rows)\n\nYou realize that floating-point values aren't exact? Probably the \"1.8\"\nin the database is a few bits off in the seventh decimal place, and so\nit's not exactly equal to the \"1.8\" you've given as a constant. In\nfact, seeing that you've actually written the constant as a float8, it's\nalmost certain that the float4 value in the database will not promote to\nexactly that float8. On my machine I get\n\nregression=# select (1.8::float4)::float8;\n float8\n------------------\n 1.79999995231628\n(1 row)\n\nregression=# select 1.8::float4 - 1.8::float8;\n ?column?\n-----------------------\n -4.76837158647214e-08\n(1 row)\n\nregression=# select 1.8::float4 = 1.8::float8;\n ?column?\n----------\n f\n(1 row)\n\nYou *might* find that writing \"where ellipse_ratio = 1.8::float4\"\nselects your database row, or you might not --- if the 1.8 in the\ndatabase was the result of a calculation, and didn't arise directly\nfrom input conversion of the exact string \"1.8\", then the odds are\nit won't match. (Your example with putting single quotes around the\n1.8 is equivalent to this explicit coercion, BTW.)\n\nIn any case, any programming textbook will tell you that doing exact\ncomparisons on floats is folly. Consider something like\n\n\t... where abs(ellipse_ratio - 1.8) < 1.0e-6;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 15:06:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 'real' strange problem in 7.1.3 "
},
{
"msg_contents": ">\n> Now I try the same command with a different 'real' field:\n> db02=# select distinct arm from ellipse where ellipse_ratio = 1.8;\n> arm\n> -----\n> (0 rows)\n>\n> BUT, if I put the value in quotes (as if it were a string), I get:\n>\n> db02=# select distinct arm from ellipse where ellipse_ratio = '1.8';\n> arm\n> -----\n> L\n> R\n> (2 rows)\n>\n> which is correct.\n\nThe reason is that in the first, the 1.8 is treated as double precision\nwhich is slightly different than the 1.8 as real (you can see this with\na select 1.8::real-1.8;). I think the second postpones deciding the type\nand gets converted into a 1.8 as real.\n\n\n",
"msg_date": "Fri, 9 Nov 2001 12:17:19 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: 'real' strange problem in 7.1.3"
},
{
"msg_contents": "Tom Lane wrote:\n\n> You *might* find that writing \"where ellipse_ratio = 1.8::float4\"\n> selects your database row, or you might not --- if the 1.8 in the\n> database was the result of a calculation, and didn't arise directly\n> from input conversion of the exact string \"1.8\", then the odds are\n> it won't match. (Your example with putting single quotes around the\n> 1.8 is equivalent to this explicit coercion, BTW.)\n>\n>\n\nAh, floating point precision errors! Yes, this makes sense now. Plus,\nyou're saying that putting the 1.8 in quotes is interpreted by the parser\nas adding the ::float4 at the end. That's the bit of information that I\nneeded. I thought that perhaps my value was being stored as a string even\nthough PG was telling me that it was a float.\n\nThanks Tom and Stephan.\n\n-Tony\n\n\n",
"msg_date": "Fri, 09 Nov 2001 14:22:04 -0800",
"msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>",
"msg_from_op": false,
"msg_subject": "Re: 'real' strange problem in 7.1.3"
},
{
"msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n> you're saying that putting the 1.8 in quotes is interpreted by the parser\n> as adding the ::float4 at the end. That's the bit of information that I\n> needed. I thought that perhaps my value was being stored as a string even\n> though PG was telling me that it was a float.\n\nMore precisely, when you write\n\tWHERE foo = 'const'\nthe constant is essentially forced to take on the datatype of foo.\n(It's initially treated as a constant of type UNKNOWN, and then the\noperator resolution rules will prefer to select an \"=\" operator with\nfoo's datatype on both sides, and then the unknown constant gets\ncoerced to that type. Messy but it works.)\n\nThere has been some discussion about trying to handle numeric literals\nin a similar fashion, wherein we don't nail down their type immediately,\nbut it's not been done yet. Right now 1.8 will be taken as float8\non sight, and then you end up with a float4-vs-float8 comparison,\nwhich is unlikely to work nicely except with values that are exactly\nrepresentable in float4 (such as small integers).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 18:43:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 'real' strange problem in 7.1.3 "
},
{
"msg_contents": "reina@nsi.edu (Tony Reina) writes:\n\n> db02=# select distinct arm from ellipse where exag_ratio = 1.0;\n \nYou never want to use the = test on floating point numbers. Two\napparently equal numbers may differ in the least significant digit.\nThe behavior will be close to random.\n\nWhen you convert the floating point to a string and round off to a\nspecific number of digits, you can use string compare to get more\npredictable results.\n\nAnother possibility is to do something like \n\n where abs(exag_ratio - 1.0) < 0.000001\n\n(I'm not sure about the SQL function for absolute value, but you get\nthe idea).\n\n-Knut\n\n-- \nThe early worm gets the bird.\n",
"msg_date": "09 Nov 2001 15:55:58 -0800",
"msg_from": "Knut Forkalsrud <kforkalsrud@cj.com>",
"msg_from_op": false,
"msg_subject": "Re: 'real' strange problem in 7.1.3"
}
] |
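The float4-vs-float8 mismatch Tom demonstrates in psql can be reproduced outside the database. Python's stdlib `struct` module round-trips a value through 32-bit IEEE-754 storage, mirroring what a float4 column does before it is promoted to float8 for the comparison:

```python
# Round-trip 1.8 through single-precision storage (as a float4 column
# would store it), then compare against the double-precision literal 1.8.
import struct

def as_float4(x):
    """Store x as an IEEE-754 single and read it back (like a float4 column)."""
    return struct.unpack("f", struct.pack("f", x))[0]

stored = as_float4(1.8)
print(stored)                      # 1.7999999523162842 -- not exactly 1.8
print(stored == 1.8)               # False: the comparison that found 0 rows
print(abs(stored - 1.8) < 1e-6)    # True: the epsilon test Tom suggests
```

Values exactly representable in single precision (small integers, 0.5, and so on) survive the round trip unchanged, which is why only some real columns in Tony's table showed the problem.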
[
{
"msg_contents": "David Ford wrote:\n> \n> Is the best method of reloading pg_hba.conf to SIGHUP the master process?\n\nYou don't need to reload it. It all happens automatically - just edit it\nand\nit will be consulted at next connect.\n\n--------------\nHannu\n",
"msg_date": "Sat, 10 Nov 2001 00:57:00 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: best method of reloading pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Is the best method of reloading pg_hba.conf to SIGHUP the master process?\n> \n> In 7.2, yes, pg_ctl restart or SIGHUP. On 7.1.X pg_hba.conf is reread\n> on every connection request.\n\nWhy was it changed ?\n\n--------\nHannu\n",
"msg_date": "Sat, 10 Nov 2001 01:32:31 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: best method of reloading pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > > Is the best method of reloading pg_hba.conf to SIGHUP the master process?\n> > >\n> > > In 7.2, yes, pg_ctl restart or SIGHUP. On 7.1.X pg_hba.conf is reread\n> > > on every connection request.\n> >\n> > Why was it changed ?\n> \n> Performance. Peter E found that considerable startup time was being\n> wasted reading the file.\n\nBut can't we read it only when needed ?\n\nJust stat'ing it should be much cheaper than reading it each time.\n\n----------------\nHannu\n",
"msg_date": "Sat, 10 Nov 2001 01:43:55 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: best method of reloading pg_hba.conf"
},
{
"msg_contents": "Is the best method of reloading pg_hba.conf to SIGHUP the master process?\n\nDavid\n\n\n",
"msg_date": "Fri, 09 Nov 2001 17:03:58 -0500",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "best method of reloading pg_hba.conf"
},
{
"msg_contents": "> Is the best method of reloading pg_hba.conf to SIGHUP the master process?\n\nIn 7.2, yes, pg_ctl restart or SIGHUP. On 7.1.X pg_hba.conf is reread\non every connection request.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Nov 2001 17:44:26 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: best method of reloading pg_hba.conf"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Is the best method of reloading pg_hba.conf to SIGHUP the master process?\n> > \n> > In 7.2, yes, pg_ctl restart or SIGHUP. On 7.1.X pg_hba.conf is reread\n> > on every connection request.\n> \n> Why was it changed ?\n\nPerformance. Peter E found that considerable startup time was being\nwasted reading the file.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Nov 2001 18:31:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: best method of reloading pg_hba.conf"
},
{
"msg_contents": "> > Performance. Peter E found that considerable startup time was being\n> > wasted reading the file.\n> \n> But can't we read it only when needed ?\n> \n> Just stat'ing it should be much cheaper than reading it each time.\n\nWe thought about that but it seems we could be reading partial writes to\nthe file. This way the administrator controls when the changes become\neffective.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 9 Nov 2001 18:46:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: best method of reloading pg_hba.conf"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n\n> Bruce Momjian wrote:\n> > \n> > > Is the best method of reloading pg_hba.conf to SIGHUP the master process?\n> > \n> > In 7.2, yes, pg_ctl restart or SIGHUP. On 7.1.X pg_hba.conf is reread\n> > on every connection request.\n> \n> Why was it changed ?\n\nI think to give the admin control over when it gets reread. Say it's\nbeing automatically generated by a cron job for some reason, and a\nconnection request comes in while it's being written--the backend\nwould get a corrupted version of the file. (Or if a text editor's in\nthe midst of saving it).\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "09 Nov 2001 18:50:42 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: best method of reloading pg_hba.conf"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Bruce Momjian wrote:\n>> In 7.2, yes, pg_ctl restart or SIGHUP. On 7.1.X pg_hba.conf is reread\n>> on every connection request.\n\n> Why was it changed ?\n\nTo cut a few more percent off connection startup time. (According to\nBruce's measurements, reading pg_hba.conf was a measurable fraction\nof startup.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 18:55:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: best method of reloading pg_hba.conf "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > Performance. Peter E found that considerable startup time was being\n> > > wasted reading the file.\n> >\n> > But can't we read it only when needed ?\n> >\n> > Just stat'ing it should be much cheaper than reading it each time.\n> \n> We thought about that but it seems we could be reading partial writes to\n> the file. This way the administrator controls when the changes become\n> effective.\n> \n\nHannu,\n\nIt is not only much safer (you get a 2nd chance to check what you've\ndone)\nbut is also consistent with the behavior of other Unix daemons.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Fri, 09 Nov 2001 19:21:02 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: best method of reloading pg_hba.conf"
},
{
"msg_contents": ">> But can't we read it only when needed ?\n>> Just stat'ing it should be much cheaper than reading it each time.\n\nWe'd already created a precedent for read-on-HUP for postgresql.conf,\nand no one seemed to be complaining about that. So although this change\nwill doubtless annoy some existing users, I don't see a big problem\nwith it. I'm happy to avoid the stat() call --- every kernel call we\ncan remove from the startup sequence is another small win.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 19:28:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: best method of reloading pg_hba.conf "
},
{
"msg_contents": "Fernando Nasser wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > > > Performance. Peter E found that considerable startup time was being\n> > > > wasted reading the file.\n> > >\n> > > But can't we read it only when needed ?\n> > >\n> > > Just stat'ing it should be much cheaper than reading it each time.\n> >\n> > We thought about that but it seems we could be reading partial writes to\n> > the file. This way the administrator controls when the changes become\n> > effective.\n\nYou could do the writing in a proper way - write to temp file and then\nrename.\n\n> Hannu,\n> \n> It is not only much safer (you get a 2nd chance to check what you've\n> done)\n> but is also consistent with the behavior of other Unix daemons.\n\nWell, that much I already knew ;)\n\nExcept for sendmail where I must run newaliases, that is.\n\n\nSending a HUP is actually ok with me, it just came as a surprise.\n\n----------------------\nHannu\n",
"msg_date": "Sat, 10 Nov 2001 18:13:19 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: best method of reloading pg_hba.conf"
}
] |
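The "write to temp file and then rename" approach Hannu mentions in this thread can be sketched as a few shell commands. This is an editorial illustration, not part of the archived discussion: the path is a local stand-in for a real `pg_hba.conf` under `$PGDATA`, and the `pg_ctl reload` step is shown only as a comment since it needs a running postmaster.

```shell
# Sketch of safely replacing pg_hba.conf: build the new contents in a
# temp file, then rename it into place so no reader (and, in 7.2, no
# postmaster rereading on SIGHUP) ever sees a half-written file.
# The path is illustrative; the real file lives under $PGDATA.
CONF=./pg_hba.conf
printf 'host all all 127.0.0.1/32 trust\n' > "$CONF"

TMP=$(mktemp "$CONF.XXXXXX")
printf 'host all all 127.0.0.1/32 md5\n' > "$TMP"
mv "$TMP" "$CONF"   # rename(2) replaces the file atomically on POSIX filesystems

cat "$CONF"
# Afterwards, tell the postmaster to reread it, e.g.:
#   pg_ctl reload     (sends SIGHUP; not run in this sketch)
```

Because the rename is atomic, the partial-write hazard Bruce and Doug describe cannot occur with this pattern; the explicit SIGHUP still leaves the administrator in control of when the change takes effect.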
[
{
"msg_contents": "Deds Castillo <deds@infiniteinfo.com> writes:\n\n> Good day to you. I've searched the net but I can't seem to find the solution \n> for this. I've found a few more posts which are exactly the same as my error \n> but no solution was given.\n> \n> The problem is when I install the postgresql-tcl rpm and I try to createlang \n> pltcl. The error below occurs:\n> \n> ERROR: Load of file /usr/lib/pgsql/pltcl.so failed: /usr/lib/pgsql/pltcl.so: \n> undefined symbol: Tcl_CreateSlave\n> \n> So instead of executing createlang, I try to isolate the error using this \n> command:\n> \n> testdb=# CREATE FUNCTION pltcl_call_handler () RETURNS OPAQUE AS\n> '/usr/lib/pgsql/pltcl.so' LANGUAGE 'C';\n\nDuring the build process, the tcl shared module is created like this:\n\ngcc -pipe -shared -Wl,-soname,libtcl.so.0 -o pltcl.so pltcl.o -L/usr/lib -ltcl -ldl -lieee -lm -lc\n\nspecifying the soname \"libtcl.so.0\" and at the same time linking to\nlibtcl.so.0 (which is the name of the shared tcl library in RHL 7.2)\nresults in trouble when loading it later. \n\nCreating the module with a different so-name (\"libtcl\" seems like a\nbad idea to use outside the main package) solves the problem and gives\nyou a module which loads\n\ngcc -pipe -shared -Wl,-soname,libpgtcl.so.0 -o pltcl.so pltcl.o -L/usr/lib -ltcl -ldl -lieee -lm -lc\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "09 Nov 2001 16:50:41 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": true,
"msg_subject": "Re: Error on stock postgresql-tcl-7.1.3-2.i386.rpm included in RH7.2"
},
{
"msg_contents": "On Friday 09 November 2001 04:50 pm, Trond Eivind Glomsr�d wrote:\n> Creating the module with a different so-name (\"libtcl\" seems like a\n> bad idea to use outside the main package) solves the problem and gives\n> you a module which loads\n\n> gcc -pipe -shared -Wl,-soname,libpgtcl.so.0 -o pltcl.so pltcl.o -L/usr/lib\n> -ltcl -ldl -lieee -lm -lc\n\nWill this cause any conflicts with the tcl client lib 'libpgtcl.so.x'? \nShould the soname be 'pltcl.so.0' ?\n\nPeterE?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 9 Nov 2001 17:14:06 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Error on stock postgresql-tcl-7.1.3-2.i386.rpm included in RH7.2"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> On Friday 09 November 2001 04:50 pm, Trond Eivind Glomsr�d wrote:\n> > Creating the module with a different so-name (\"libtcl\" seems like a\n> > bad idea to use outside the main package) solves the problem and gives\n> > you a module which loads\n> \n> > gcc -pipe -shared -Wl,-soname,libpgtcl.so.0 -o pltcl.so pltcl.o -L/usr/lib\n> > -ltcl -ldl -lieee -lm -lc\n> \n> Will this cause any conflicts with the tcl client lib 'libpgtcl.so.x'? \n> Should the soname be 'pltcl.so.0' ?\n\n--- postgresql-7.1.3/src/pl/tcl/Makefile.tcldefs.tclsoname\tFri Nov 9 17:12:38 2001\n+++ postgresql-7.1.3/src/pl/tcl/Makefile.tcldefs\tFri Nov 9 17:15:03 2001\n@@ -19,7 +19,7 @@\n TCL_SHLIB_CFLAGS = -fPIC\n TCL_CFLAGS_WARNING = -Wall -Wconversion -Wno-implicit-int\n TCL_EXTRA_CFLAGS = \n-TCL_SHLIB_LD = gcc -pipe -shared -Wl,-soname,libtcl.so.0\n+TCL_SHLIB_LD = gcc -pipe -shared -Wl,-soname,libpltcl.so.0\n TCL_STLIB_LD = ar cr\n TCL_SHLIB_LD_LIBS = ${LIBS}\n TCL_SHLIB_SUFFIX = .so\n\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "09 Nov 2001 17:16:03 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": true,
"msg_subject": "Re: Error on stock postgresql-tcl-7.1.3-2.i386.rpm included in RH7.2"
},
{
"msg_contents": "teg@redhat.com (Trond Eivind Glomsr�d) writes:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> \n> > On Friday 09 November 2001 04:50 pm, Trond Eivind Glomsr�d wrote:\n> > > Creating the module with a different so-name (\"libtcl\" seems like a\n> > > bad idea to use outside the main package) solves the problem and gives\n> > > you a module which loads\n> > \n> > > gcc -pipe -shared -Wl,-soname,libpgtcl.so.0 -o pltcl.so pltcl.o -L/usr/lib\n> > > -ltcl -ldl -lieee -lm -lc\n> > \n> > Will this cause any conflicts with the tcl client lib 'libpgtcl.so.x'? \n> > Should the soname be 'pltcl.so.0' ?\n> \n> --- postgresql-7.1.3/src/pl/tcl/Makefile.tcldefs.tclsoname\tFri Nov 9 17:12:38 2001\n> +++ postgresql-7.1.3/src/pl/tcl/Makefile.tcldefs\tFri Nov 9 17:15:03 2001\n> @@ -19,7 +19,7 @@\n> TCL_SHLIB_CFLAGS = -fPIC\n> TCL_CFLAGS_WARNING = -Wall -Wconversion -Wno-implicit-int\n> TCL_EXTRA_CFLAGS = \n> -TCL_SHLIB_LD = gcc -pipe -shared -Wl,-soname,libtcl.so.0\n> +TCL_SHLIB_LD = gcc -pipe -shared -Wl,-soname,libpltcl.so.0\n> TCL_STLIB_LD = ar cr\n> TCL_SHLIB_LD_LIBS = ${LIBS}\n> TCL_SHLIB_SUFFIX = .so\n\n(scratch that - this file is generated when building) \n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "09 Nov 2001 17:27:01 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": true,
"msg_subject": "Re: Error on stock postgresql-tcl-7.1.3-2.i386.rpm included in RH7.2"
},
{
"msg_contents": "Trond Eivind Glomsr�d writes:\n\n> During the build process, the tcl shared module is created like this:\n>\n> gcc -pipe -shared -Wl,-soname,libtcl.so.0 -o pltcl.so pltcl.o -L/usr/lib -ltcl -ldl -lieee -lm -lc\n>\n> specifying the soname \"libtcl.so.0\" and at the same time linking to\n> libtcl.so.0 (which is the name of the shared tcl library in RHL 7.2)\n> results in trouble when loading it later.\n\nThis must be a bug (feature?) in the Tcl package. I see no such thing\nhappening here (RH 7.0, tcl-8.3.1-46):\n\ngcc -pipe -shared -o pltcl.so pltcl.o -L/usr/lib -ltcl8.3 -ldl -lieee -lm -lc\n\nI don't know whose idea the soname was, but it surely wasn't a good one.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 11 Nov 2001 16:38:48 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Error on stock postgresql-tcl-7.1.3-2.i386.rpm included"
}
] |
[
{
"msg_contents": "I am trying to use the new 'alter table drop constraint' syntax to drop \na primary key constraint (or foreign key constraint) on the 7.2beta2 \ncode. It gives me the following error:\n\nfiles31=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.2b2 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\nfiles31=# alter table XYF_FILES DROP CONSTRAINT XYF_FILES_PK;\nERROR: parser: parse error at or near \";\"\n\n\nWhen I look at the definition on the table through psql \\d I can see \nthat this constraint does indeed exist. What am I doing wrong?\n\nI am assuming that this syntax should work because it is documented in \nthe 7.2 docs.\n\nthanks,\n--Barry\n\nPS I also notice that 'alter table drop constraint' doesn't appear to be \ntested at all in the alter_table regression test. 'alter table add \nconstraint' is tested however.\n\n",
"msg_date": "Fri, 09 Nov 2001 15:10:27 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "Bug?? -- Alter table drop constraint doesn't seem to work on a\n\tprimary key constraint in 7.2beta2"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> files31=# alter table XYF_FILES DROP CONSTRAINT XYF_FILES_PK;\n> ERROR: parser: parse error at or near \";\"\n\nYou forgot the RESTRICT/CASCADE option.\n\n> I am assuming that this syntax should work because it is documented in \n> the 7.2 docs.\n\nIf it's documented without the option then the docs are in error;\nwhere are you looking?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 19:16:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug?? -- Alter table drop constraint doesn't seem to work on a\n\tprimary key constraint in 7.2beta2"
},
{
"msg_contents": "Tom,\n\nI was looking at the 7.2 docs online at developer.postgresql.org. The \nonly example of 'drop constraint' in the text for the 'alter table' \ncommand shows its usage without the RESTRICT/CASCADE option. I also \nnoticed that RESTRICT/CASCADE is not defined in the description of alter \ntable so I am not really sure what each does.\n\nBut I still can't get it to work for me. Consider the following test case:\n\ncreate table test (col_a integer not null, col_b text);\nalter table test add constraint test_pk primary key (col_a);\nalter table test drop constraint test_pk restrict;\nalter table test drop constraint test_pk cascade;\n\nproduces the following output:\n\nfiles31=# \\i test.sql\nCREATE\npsql:test.sql:6: NOTICE: ALTER TABLE / ADD PRIMARY KEY will create \nimplicit index 'test_pk' for table 'test'\nCREATE\npsql:test.sql:8: ERROR: ALTER TABLE / DROP CONSTRAINT: test_pk does not \nexist\npsql:test.sql:10: ERROR: ALTER TABLE / DROP CONSTRAINT does not support \nthe CASCADE keyword\n\n\nNotice that it doesn't seem to be able to drop the primary key \nconstraint that was just created when I use the RESTRICT keyword and it \nclaims not to support the CASCADE keyword at all.\n\nthanks,\n--Barry\n\n\n\nTom Lane wrote:\n\n> Barry Lind <barry@xythos.com> writes:\n> \n>>files31=# alter table XYF_FILES DROP CONSTRAINT XYF_FILES_PK;\n>>ERROR: parser: parse error at or near \";\"\n>>\n> \n> You forgot the RESTRICT/CASCADE option.\n> \n> \n>>I am assuming that this syntax should work because it is documented in \n>>the 7.2 docs.\n>>\n> \n> If it's documented without the option then the docs are in error;\n> where are you looking?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n",
"msg_date": "Fri, 09 Nov 2001 16:40:46 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug?? -- Alter table drop constraint doesn't seem to work on a\n\tprimary key constraint in 7.2beta2"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> I was looking at the 7.2 docs online at developer.postgresql.org. The \n> only example of 'drop constraint' in the text for the 'alter table' \n> command shows its usage without the RESTRICT/CASCADE option. I also \n> noticed that RESTRICT/CASCADE is not defined in the description of alter \n> table so I am not really sure what each does.\n\n> But I still can't get it to work for me.\n\nOkay, looks like we have both some doco and some code issues to fix ...\nthanks for the report.\n\n> Notice that it doesn't seem to be able to drop the primary key \n> constraint that was just created when I use the RESTRICT keyword and it \n> claims not to support the CASCADE keyword at all.\n\nI can believe that CASCADE might be a not-yet-supported option, but\nthe simpler case ought to work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 19:49:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug?? -- Alter table drop constraint doesn't seem to work on a\n\tprimary key constraint in 7.2beta2"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> I was looking at the 7.2 docs online at developer.postgresql.org. The \n> only example of 'drop constraint' in the text for the 'alter table' \n> command shows its usage without the RESTRICT/CASCADE option.\n\nErroneous example fixed.\n\n> But I still can't get it to work for me. Consider the following test case:\n\n> create table test (col_a integer not null, col_b text);\n> alter table test add constraint test_pk primary key (col_a);\n> alter table test drop constraint test_pk restrict;\n\n> psql:test.sql:8: ERROR: ALTER TABLE / DROP CONSTRAINT: test_pk does not \n> exist\n\nLooking at the code, the problem is that DROP CONSTRAINT only works with\nCHECK constraints at the moment. This does seem to be adequately\ndocumented. Improving the functionality will have to wait for some\nfuture development cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 10 Nov 2001 15:16:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug?? -- Alter table drop constraint doesn't seem to work on a\n\tprimary key constraint in 7.2beta2"
},
{
"msg_contents": "> Barry Lind <barry@xythos.com> writes:\n> > I was looking at the 7.2 docs online at developer.postgresql.org. The\n> > only example of 'drop constraint' in the text for the 'alter table'\n> > command shows its usage without the RESTRICT/CASCADE option.\n\nI could have sworn that the documentation patch for this specified that the\nrestrict clause was needed.\n\nHere you go (from the Notes section):\n\nIn DROP CONSTRAINT, the RESTRICT keyword is required, although dependencies\nare not yet checked. The CASCADE option is unsupported. Currently DROP\nCONSTRAINT drops only CHECK constraints. To remove a PRIMARY or UNIQUE\nconstraint, drop the relevant index using the DROP INDEX command. To remove\nFOREIGN KEY constraints you need to recreate and reload the table, using\nother parameters to the CREATE TABLE command.\n\n> Erroneous example fixed.\n>\n> > But I still can't get it to work for me. Consider the\n> following test case:\n>\n> > create table test (col_a integer not null, col_b text);\n> > alter table test add constraint test_pk primary key (col_a);\n> > alter table test drop constraint test_pk restrict;\n>\n> > psql:test.sql:8: ERROR: ALTER TABLE / DROP CONSTRAINT: test_pk\n> does not\n> > exist\n>\n> Looking at the code, the problem is that DROP CONSTRAINT only works with\n> CHECK constraints at the moment. This does seem to be adequately\n> documented. Improving the functionality will have to wait for some\n> future development cycle.\n\nDROP CONSTRAINT can only drop CHECK constraints at the moment - and I could\nalso have sworn that that was documented in my SGML patch!\n\nPerhaps the fall-through message about the constraint not existing is\nsomewhat misleading...\n\nAnyway Barry - you can just go \"DROP INDEX test_pk\". In fact -\n\nChris\n\n",
"msg_date": "Mon, 12 Nov 2001 10:37:29 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Bug?? -- Alter table drop constraint doesn't seem to work on a\n\tprimary key constraint in 7.2beta2"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I could have sworn that the documentation patch for this specified that the\n> restrict clause was needed.\n\nSo it did, but there was an example further down with no RESTRICT.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 21:40:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug?? -- Alter table drop constraint doesn't seem to work on a\n\tprimary key constraint in 7.2beta2"
}
] |
[
{
"msg_contents": "Someone sent this to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nBruce,\n\nThis is just a reminder for someone on the PostgreSQL\nteam to register PostgreSQL for the Java Developer's\nJournal 2002 awards. This will enable readers to vote\nfor PostgreSQL! The page is at:\nhttp://www.sys-con.com/java/readerschoice2002/\n\nRegards,\n\nScott Schneider\n\n__________________________________________________\nDo You Yahoo!?\nFind a job, post your resume.\nhttp://careers.yahoo.com",
"msg_date": "Sat, 10 Nov 2001 00:04:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Remember to register PostgreSQL for JDJ 2002 awards (fwd)"
},
{
"msg_contents": "\ndone ... for both enterprise db and drivers ... if anyone has any\napplications on gborg or sourceforge that would apply, though, they should\nget them online as well ...\n\n\nOn Sat, 10 Nov 2001, Bruce Momjian wrote:\n\n> Someone sent this to me.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Tue, 13 Nov 2001 15:08:02 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Remember to register PostgreSQL for JDJ 2002 awards (fwd)"
},
{
"msg_contents": "Hi everyone,\n\nHas someone taken care of this? Maybe Barry would be the best person?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nBruce Momjian wrote:\n> \n> Someone sent this to me.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ------------------------------------------------------------------------\n> \n> Subject: Remember to register PostgreSQL for JDJ 2002 awards\n> Date: Fri, 9 Nov 2001 19:53:34 -0800 (PST)\n> From: Scott Schneider <zzzkey@yahoo.com>\n> To: pgman@candle.pha.pa.us\n> \n> Bruce,\n> \n> This is just a reminder for someone on the PostgreSQL\n> team to register PostgreSQL for the Java Developer's\n> Journal 2002 awards. This will enable readers to vote\n> for PostgreSQL! The page is at:\n> http://www.sys-con.com/java/readerschoice2002/\n> \n> Regards,\n> \n> Scott Schneider\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Find a job, post your resume.\n> http://careers.yahoo.com\n> \n> ------------------------------------------------------------------------\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 27 Dec 2001 01:40:52 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Remember to register PostgreSQL for JDJ 2002 awards (fwd)"
},
{
"msg_contents": "> Has someone taken care of this? Maybe Barry would be the best person?\n\nIt seems PostgreSQL is mentioned, but the page it links to is broken. Is\nthere another registration step we need, or is it just broken??\n\n - Thomas\n\n> > This is just a reminder for someone on the PostgreSQL\n> > team to register PostgreSQL for the Java Developer's\n> > Journal 2002 awards. This will enable readers to vote\n> > for PostgreSQL! The page is at:\n> > http://www.sys-con.com/java/readerschoice2002/\n",
"msg_date": "Wed, 26 Dec 2001 16:20:24 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [JDBC] Remember to register PostgreSQL for JDJ 2002 awards "
}
] |
[
{
"msg_contents": "Dear Hiroshi,\n\nWe need this Java Unicode Notation as it is the only one accepted by \nJavascript for example. Furthermore, it is ASCII compatible and therefore \nan alternative when an existing database has ASCII encoding or PostgreSQL \nis not compiled with the right extensions (ex: provider).\n\nAt least, I need to be able to convert output to Java Unicode Notation. I \nam learning C and will have a try, but not before a few weeks. This is what \nI say everyday when I wake up: today, I am going to read PostgreSQL \ninternals source code.\n\nBut, unfortunately, I always have to postpone this step...\nThanks for your help.\n\nBest regards,\nJean-Michel POURE\n\nAt 10:13 10/11/01 +0900, you wrote:\n>Hi,\n>\n> > -----Original Message-----\n> > From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]\n> >\n> > Dear Hiroshi,\n> >\n> > Could it be possible to use the Java Unicode Notation to define UTF-8\n> > strings in PostgreSQL 7.2.\n>\n>We are now in 7.2 beta and it seems impossible to add a new feature\n>to 7.2. Am I misunderstanding your point ?\n>\n> > Information can be found on http://czyborra.com/utf/\n> >\n> > Do you think it is hard to implement?\n>\n>It seems difficult to get a consensus about it in PG community\n>in the first place. I asked some developers' opinion in Japan but\n>they(me either) aren't enthusiatic about it. However they seem\n>to have another idea though I don't the details about it now.\n>I would mail you about it if I would get some info.\n>\n>regards,\n>Hiroshi Inoue\n\n",
"msg_date": "Sat, 10 Nov 2001 11:45:12 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Java's Unicode Notation "
}
] |
[
{
"msg_contents": "Rae Stiening (stiening@cannon.astro.umass.edu) reports a bug with a severity of 2\nThe lower the number the more severe it is.\n\nShort Description\nunion all changes char(3) column definition\n\nLong Description\nRae Stiening stiening@cannon.astro.umass.edu 11/10/2001\nThis script demonstrates the incorrect restoration of a\ntable created by a union under postgresql version 7.1.3.\nCHANGE\nDROP\nCREATE\nINSERT 2468415409 1\nINSERT 2468415410 1\nVACUUM\nCHANGE\nDROP\nSELECT\n rd_flg | cntr \n--------+------\n ABC | 1\n DEF | 2\n(2 rows)\n\n rd_flg | cntr \n--------+------\n ABC | 1\n DEF | 2\n ABC | 1\n DEF | 2\n(4 rows)\n\nDROP\nYou are now connected as new user postgres.\nCREATE\n rd_flg | cntr \n--------+------\n A | 1\n D | 2\n A | 1\n D | 2\n(4 rows)\n\nThe column rd_flg has not been restored properly. Note that\nthe column definition has changed from char(3) to character\n\nSample Code\necho Rae Stiening stiening@cannon.astro.umass.edu 11/10/2001\necho This script demonstrates the incorrect restoration of a\necho table created by a union under postgresql version 7.1.3.\npsql -c \"revoke select on xyzzy from public\" wsdb\npsql -c \"drop table xyzzy\" wsdb\npsql -c \"create table xyzzy (rd_flg char(3),cntr integer)\" wsdb\npsql -c \"insert into xyzzy values('ABC',1)\" wsdb\npsql -c \"insert into xyzzy values('DEF',2)\" wsdb\npsql -c \"vacuum analyze xyzzy\" wsdb\npsql -c \"revoke select on zzxxyy from public\" wsdb\npsql -c \"drop table zzxxyy\" wsdb\npsql -c \"create table zzxxyy as\n select * from xyzzy\n union all\n select * from xyzzy\" wsdb\npsql -c \"select * from xyzzy\" wsdb\npsql -c \"select * from zzxxyy\" wsdb\npg_dump -t zzxxyy wsdb > zzxxyy.tbl\npsql -c \"drop table zzxxyy\" wsdb\ncat zzxxyy.tbl | psql wsdb\npsql -c \"select * from zzxxyy\" wsdb\necho The column rd_flg has not been restored properly. Note that\necho the column definition has changed from char'('3')' to character\n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Sat, 10 Nov 2001 11:59:20 -0500 (EST)",
"msg_from": "pgsql-bugs@postgresql.org",
"msg_from_op": true,
"msg_subject": "Bug #513: union all changes char(3) column definition"
},
{
"msg_contents": "Rae Stiening (stiening@cannon.astro.umass.edu) writes:\n> This script demonstrates the incorrect restoration of a\n> table created by a union under postgresql version 7.1.3.\n\nWhat's really going on here is that\n\n1. The CREATE TABLE AS command creates a column with type bpchar and\n typmod -1 (ie, no specific length enforced).\n\n2. pg_dump dumps this column with the type identified as \"character\".\n\n3. On reload, \"character\" is interpreted as \"character(1)\".\n\nWhile each of these behaviors is justifiable to some degree when\nconsidered by itself, their interaction is not good. It is actually\nnot possible for pg_dump to dump this table correctly, because there\nis no CREATE TABLE command it can give to reproduce the type/typmod\ncombination.\n\nI thought a little bit about trying to disallow the creation of such\ntables, but I don't believe that can work in the general case.\nCREATE TABLE AS cannot be expected to be able to extract a suitable\ntypmod from complex expressions. We could think about replacing\n\"bpchar/-1\" with \"text\", but that only fixes the problem for bpchar;\nwe have the exact same issue with numeric, and there is no comparable\nworkaround for numeric.\n\nSo I think what we need to do is rejigger the type display and entry\nrules so that there is a recognized representation for \"bpchar with\nno typmod\", \"numeric with no typmod\", etc, and the parser will not\nbogusly insert default length limits when it sees this representation.\n\nFor char I propose that this representation be\n\t\tbpchar\nie the underlying type name. 
This is a bit ugly, but since the notion\nof char(n) with no particular limit is definitely non-SQL92 anyway,\nusing a non-SQL name seems appropriate.\n\nFor varchar, it already works to write any of\n\t\tvarchar\n\t\tchar varying\n\t\tcharacter varying\nThis does not conflict with SQL92 since the standard doesn't allow the\nlength spec to be omitted in these types, and so there's not an expected\ndefault of 1 as there is for char.\n\nFor numeric, we could say that the representation for typmod -1 is\n\t\t\"numeric\"\n(double quotes required) ... but I really wonder why we have the\nconvention that numeric defaults to numeric(30,6) in the first place.\nWhy shouldn't the default behavior be to use typmod -1 (no limit)?\nThe (30,6) convention cannot be justified on the basis of the SQL spec;\nit says the default precision is implementation-defined and the default\nscale is zero. \"No limit\" looks like a good enough\nimplementation-defined precision value to me, and as for the scale,\ndefaulting to no scale adjustment is less likely to make anyone unhappy\nthan defaulting to scale zero. So I propose that we remove all notion\nof a default precision and scale for numeric, and say that numeric\nwritten without a precision/scale spec means numeric with typmod -1.\n\nFor bit, the SQL spec requires us to interpret unadorned bit as meaning\nbit(1), so there seems little choice but to use\n\t\t\"bit\"\n(quotes required) for the typmod -1 case.\n\nFor varbit, \"bit varying\" already works and need not be messed with;\nsame rationale as for varchar.\n\n\nAs far as implementation goes, on the output side all that's needed is\nsome simple changes in format_type to emit the desired representation.\nIn the parser, we need to remove transformColumnType's diddling of\ntypmod and instead insert the correct default typmod in gram.y. We\ncan't do it later than gram.y since the distinction between \"bit\" and\nbit, etc, is not visible later.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 10 Nov 2001 13:39:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #513: union all changes char(3) column definition "
},
{
"msg_contents": "Tom Lane writes:\n\n> CREATE TABLE AS cannot be expected to be able to extract a suitable\n> typmod from complex expressions.\n\nI don't think that would be entirely unreasonable. The current system\ndrops typmods at first sight when it gets scared of them, but in many\ncases it would make sense for them to propagate much further.\n\nWe've already seen a case where \"no typmod\" means different things in\ndifferent places for lack of a good way to keep the information. If we\never want to allow user-defined data types to have atttypmods a solution\nwould be necessary.\n\nHere's another example where the behaviour is not consistent with other\nplaces:\n\npeter=# create table one (a bit(4));\nCREATE\npeter=# create table two (b bit(6));\nCREATE\npeter=# insert into one values (b'1001');\nINSERT 16570 1\npeter=# insert into two values (b'011110');\nINSERT 16571 1\npeter=# select * from one union select * from two;\n011110\n1001\n\nWhat's the data type of that? The fact is that bit without typmod makes\nno sense, even less so than char without typmod.\n\nA possible solution would be that data types can register a\ntypmod-resolver function, which takes two typmods and returns the typmod\nto make both expressions union-compatible. For varchar(n) and varchar(m)\nis would return max(m,n), for bit(n) and bit(m) it would return an error\nif m<>n. (The behaviour of char() could be either of these two.)\n\nSurely a long-term idea though...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 11 Nov 2001 16:39:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Bug #513: union all changes char(3) column definition "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> CREATE TABLE AS cannot be expected to be able to extract a suitable\n>> typmod from complex expressions.\n\n> I don't think that would be entirely unreasonable.\n\nWell, it might not be completely impossible, but I think it's well on\nthe far side of unreasonable. For *every operator* that produces a\nresult of any of the typmod-using types, we'd have to maintain an\nauxiliary bit of code that can predict the result typmod. That's\na lot of code, and when you start considering user-defined functions\nit gets worse. And all for what? Not to do anything useful, but only\nto *eliminate* functionality. Perhaps char without typmod is\nunnecessary (since it reduces to text), but numeric without typmod seems\nhighly useful to me.\n\nStrikes me as a very large amount of work to go in the wrong\ndirection...\n\n> A possible solution would be that data types can register a\n> typmod-resolver function, which takes two typmods and returns the typmod\n> to make both expressions union-compatible.\n\nThis only handles the UNION and CASE merge scenarios.\n\nIt'd probably be reasonable for UNION/CASE to copy the input typmod\nif the alternatives all agree on the type and typmod. But solving\nthe general problem would be a lot of work of dubious value.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 11:49:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #513: union all changes char(3) column definition "
},
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> CREATE TABLE AS cannot be expected to be able to extract a suitable\n> >> typmod from complex expressions.\n> \n> > I don't think that would be entirely unreasonable.\n> \n> Well, it might not be completely impossible, but I think it's well on\n> the far side of unreasonable. For *every operator* that produces a\n> result of any of the typmod-using types, we'd have to maintain an\n> auxiliary bit of code that can predict the result typmod. That's\n> a lot of code, and when you start considering user-defined functions\n> it gets worse. And all for what? Not to do anything useful, but only\n> to *eliminate* functionality. Perhaps char without typmod is\n> unnecessary (since it reduces to text), but numeric without typmod seems\n> highly useful to me.\n> \n> Strikes me as a very large amount of work to go in the wrong\n> direction...\n\nAdded to TODO:\n\n\t* CREATE TABLE AS can not determine column lengths from expressions\n\n\nSeems it should be documented. Do we throw an error in these cases?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 22:10:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #513: union all changes char(3) column definition"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Added to TODO:\n> \t* CREATE TABLE AS can not determine column lengths from expressions\n> Seems it should be documented. Do we throw an error in these cases?\n\nNo. What we do right now is to generate non-length-constrained column\ntypes for the created table.\n\nYour TODO item is too pessimistic: we *do* determine the column length\nin simple cases. For example:\n\nregression=# create table foo (f1 char(3));\nCREATE\nregression=# create table bar as select * from foo;\nSELECT\nregression=# \\d bar\n Table \"bar\"\n Column | Type | Modifiers\n--------+--------------+-----------\n f1 | character(3) |\n\nHowever, in more complex cases we don't know the column length:\n\nregression=# create table baz as select f1 || 'z' as f1 from foo;\nSELECT\nregression=# \\d baz\n Table \"baz\"\n Column | Type | Modifiers\n--------+--------+-----------\n f1 | bpchar |\n\nThe argument here is about how much intelligence it's reasonable to\nexpect the system to have. It's very clearly not feasible to derive\na length limit automagically in every case. How hard should we try?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Nov 2001 22:51:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #513: union all changes char(3) column definition "
},
{
"msg_contents": "> regression=# create table baz as select f1 || 'z' as f1 from foo;\n> SELECT\n> regression=# \\d baz\n> Table \"baz\"\n> Column | Type | Modifiers\n> --------+--------+-----------\n> f1 | bpchar |\n> \n> The argument here is about how much intelligence it's reasonable to\n> expect the system to have. It's very clearly not feasible to derive\n> a length limit automagically in every case. How hard should we try?\n\nI don't think we can try in this case, especially because our functions\nare all burried down in adt/. However, I don't think creating a bpchar\nwith no length is a proper solution. Should we just punt to text in\nthese cases? Seems cleaner, perhaps even throw an elog(NOTICE)\nmentioning the promotion to text.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 22:53:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #513: union all changes char(3) column definition"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> However, I don't think creating a bpchar\n> with no length is a proper solution. Should we just punt to text in\n> these cases?\n\nHow many special cases like that do you want to put into the allegedly\ndatatype-independent CREATE TABLE code?\n\nIf I thought this were the only case then I'd not object ... but it\nlooks like a slippery slope from here.\n\nAnd --- it's not like replacing \"bpchar\" with \"text\" actually buys us\nany useful new functionality. AFAICS it's just a cosmetic thing.\n\n\t\t\tregards, tom lane\n\nPS: On the other hand, we might consider attacking the problem from\nthe reverse direction, ie *removing* code. For example, if there\nweren't redundant || operators for char and varchar, then every ||\noperation would yield text, and the example we're looking at would\nwork the way you want for free. I've thought for awhile that we\ncould use a pass through pg_proc and pg_operator to remove some\nentries we don't really need.\n",
"msg_date": "Wed, 21 Nov 2001 23:08:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #513: union all changes char(3) column definition "
},
{
"msg_contents": "> How many special cases like that do you want to put into the allegedly\n> datatype-independent CREATE TABLE code?\n> \n> If I thought this were the only case then I'd not object ... but it\n> looks like a slippery slope from here.\n> \n> And --- it's not like replacing \"bpchar\" with \"text\" actually buys us\n> any useful new functionality. AFAICS it's just a cosmetic thing.\n> \n> \t\t\tregards, tom lane\n> \n> PS: On the other hand, we might consider attacking the problem from\n> the reverse direction, ie *removing* code. For example, if there\n> weren't redundant || operators for char and varchar, then every ||\n> operation would yield text, and the example we're looking at would\n> work the way you want for free. I've thought for awhile that we\n> could use a pass through pg_proc and pg_operator to remove some\n> entries we don't really need.\n\nCan we convert bpchar to text in create table if no typmod is supplied?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 23:13:28 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug #513: union all changes char(3) column definition"
},
{
"msg_contents": "Tom Lane writes:\n\n> The argument here is about how much intelligence it's reasonable to\n> expect the system to have. It's very clearly not feasible to derive\n> a length limit automagically in every case. How hard should we try?\n\nI would like to know what Proprietary database #1 does with\n\nCREATE TABLE one ( a bit(6) );\nINSERT INTO one VALUES ( b'101101' );\nCREATE TABLE two ( b bit(4) );\nINSERT INTO two VALUES ( b'0110' );\nCREATE TABLE three AS SELECT a FROM one UNION SELECT b FROM two;\n\nAccording to SQL92, clause 9.3, the result type of the union is bit(6).\nHowever, it's not possible to store a bit(4) value into a bit(6) field.\nOur current solution, \"bit(<nothing>)\" is even worse because it has no\nreal semantics at all (but you can store bit(<anything>) in it,\ninterestingly).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 22 Nov 2001 18:21:17 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #513: union all changes char(3) column definition "
},
{
"msg_contents": "\nThread added to TODO.detail.\n\n> Tom Lane writes:\n> \n> > The argument here is about how much intelligence it's reasonable to\n> > expect the system to have. It's very clearly not feasible to derive\n> > a length limit automagically in every case. How hard should we try?\n> \n> I would like to know what Proprietary database #1 does with\n> \n> CREATE TABLE one ( a bit(6) );\n> INSERT INTO one VALUES ( b'101101' );\n> CREATE TABLE two ( b bit(4) );\n> INSERT INTO two VALUES ( b'0110' );\n> CREATE TABLE three AS SELECT a FROM one UNION SELECT b FROM two;\n> \n> According to SQL92, clause 9.3, the result type of the union is bit(6).\n> However, it's not possible to store a bit(4) value into a bit(6) field.\n> Our current solution, \"bit(<nothing>)\" is even worse because it has no\n> real semantics at all (but you can store bit(<anything>) in it,\n> interestingly).\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Dec 2001 16:04:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Bug #513: union all changes char(3) column definition"
}
]
[
{
"msg_contents": "Dear all,\n\nI need to store text documents in PosgreSQL with revision management.\nChora PHP cvs library can be used for this purpose.\n\nDid anyone think of integrating diff/patch within PostgreSQL?\nIf it were the case, we could build the first SQL CVS clone...\n\nBest regards, Jean-Michel POURE\n",
"msg_date": "Sat, 10 Nov 2001 18:29:56 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Diff/Patch integration -> SQL cvs clone"
},
{
"msg_contents": "Jean-Michel POURE writes:\n\n> Did anyone think of integrating diff/patch within PostgreSQL?\n\nCREATE OR REPLACE FUNCTION diff(text, text) RETURNS text AS '\n#!/bin/sh\n echo \"$1\" > /tmp/$$-one\n echo \"$2\" > /tmp/$$-two\n diff -c /tmp/$$-one /tmp/$$-two\n echo \"\"\n rm -f /tmp/$$-one /tmp/$$-two\n' LANGUAGE plsh;\n\npeter=> \\t\\a\npeter=> select diff('one\\ntwo\\nthree\\n', 'one\\nfive\\nthree\\n');\n\n*** /tmp/17580-one Sun Nov 11 16:09:08 2001\n--- /tmp/17580-two Sun Nov 11 16:09:08 2001\n***************\n*** 1,4 ****\n one\n! two\n three\n\n--- 1,4 ----\n one\n! five\n three\n\npatch() is left as an exercise. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 11 Nov 2001 16:40:26 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Diff/Patch integration -> SQL cvs clone"
},
{
"msg_contents": "Hello Peter,\n\nFantastic. It is possible to provide wrappers around most utilities.\nI am stuck down on my chair. Cannot believe it...\n\nCheers,\nJean-Michel POURE\n\nAt 16:40 11/11/01 +0100, you wrote:\n>Jean-Michel POURE writes:\n>\n> > Did anyone think of integrating diff/patch within PostgreSQL?\n>\n>CREATE OR REPLACE FUNCTION diff(text, text) RETURNS text AS '\n>#!/bin/sh\n> echo \"$1\" > /tmp/$$-one\n> echo \"$2\" > /tmp/$$-two\n> diff -c /tmp/$$-one /tmp/$$-two\n> echo \"\"\n> rm -f /tmp/$$-one /tmp/$$-two\n>' LANGUAGE plsh;\n>\n>peter=> \\t\\a\n>peter=> select diff('one\\ntwo\\nthree\\n', 'one\\nfive\\nthree\\n');\n>\n>*** /tmp/17580-one Sun Nov 11 16:09:08 2001\n>--- /tmp/17580-two Sun Nov 11 16:09:08 2001\n>***************\n>*** 1,4 ****\n> one\n>! two\n> three\n>\n>--- 1,4 ----\n> one\n>! five\n> three\n>\n>patch() is left as an exercise. ;-)\n>\n>--\n>Peter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 11 Nov 2001 16:41:04 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Diff/Patch integration -> SQL cvs clone"
},
{
"msg_contents": "----- Original Message ----- \nFrom: Jean-Michel POURE <jm.poure@freesurf.fr>\nSent: Sunday, November 11, 2001 10:41 AM\n\n> Fantastic. It is possible to provide wrappers around most utilities.\n\nWhich is not always portable and/or inconsistent, unfortunately;\ndepending on whether an OS you're running PG on has such utilities\nand they behave all in the same way or not...\n\n-s\n\n> >Jean-Michel POURE writes:\n> >\n> > > Did anyone think of integrating diff/patch within PostgreSQL?\n> >\n> >CREATE OR REPLACE FUNCTION diff(text, text) RETURNS text AS '\n> >#!/bin/sh\n> > echo \"$1\" > /tmp/$$-one\n> > echo \"$2\" > /tmp/$$-two\n> > diff -c /tmp/$$-one /tmp/$$-two\n> > echo \"\"\n> > rm -f /tmp/$$-one /tmp/$$-two\n> >' LANGUAGE plsh;\n> >\n> >peter=> \\t\\a\n> >peter=> select diff('one\\ntwo\\nthree\\n', 'one\\nfive\\nthree\\n');\n> >\n> >*** /tmp/17580-one Sun Nov 11 16:09:08 2001\n> >--- /tmp/17580-two Sun Nov 11 16:09:08 2001\n> >***************\n> >*** 1,4 ****\n> > one\n> >! two\n> > three\n> >\n> >--- 1,4 ----\n> > one\n> >! five\n> > three\n> >\n> >patch() is left as an exercise. ;-)\n> >\n> >--\n> >Peter Eisentraut peter_e@gmx.net\n\n\n",
"msg_date": "Sun, 11 Nov 2001 17:20:12 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: Diff/Patch integration -> SQL cvs clone"
},
{
"msg_contents": "\nPeter, should plsh be added to our supplied server-side programming\nlanguages? Seems like a major feature to me and to others as well.\n\n---------------------------------------------------------------------------\n\n> Jean-Michel POURE writes:\n> \n> > Did anyone think of integrating diff/patch within PostgreSQL?\n> \n> CREATE OR REPLACE FUNCTION diff(text, text) RETURNS text AS '\n> #!/bin/sh\n> echo \"$1\" > /tmp/$$-one\n> echo \"$2\" > /tmp/$$-two\n> diff -c /tmp/$$-one /tmp/$$-two\n> echo \"\"\n> rm -f /tmp/$$-one /tmp/$$-two\n> ' LANGUAGE plsh;\n> \n> peter=> \\t\\a\n> peter=> select diff('one\\ntwo\\nthree\\n', 'one\\nfive\\nthree\\n');\n> \n> *** /tmp/17580-one Sun Nov 11 16:09:08 2001\n> --- /tmp/17580-two Sun Nov 11 16:09:08 2001\n> ***************\n> *** 1,4 ****\n> one\n> ! two\n> three\n> \n> --- 1,4 ----\n> one\n> ! five\n> three\n> \n> patch() is left as an exercise. ;-)\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 22:11:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Diff/Patch integration -> SQL cvs clone"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Peter, should plsh be added to our supplied server-side programming\n> languages? Seems like a major feature to me and to others as well.\n\nWhile plsh is surely a cool hack, I've got considerable unease about\nmaking it into an officially supported feature. The only reasons I can\nsee for wanting to use it (over plpgsql, pltcl, plperl, etc) are\ninherently violations of transaction semantics. Who's going to roll\nback your sendmail call when the calling transaction later aborts?\nWhat's going to ensure that the order of external effects has something\nto do with the serialization order that the database assigns to several\nconcurrent transactions?\n\nIMHO plsh is a great tool for shooting yourself in the foot (with a\nlarge-gauge firearm, in fact). People who know what they're doing are\nwelcome to use it ... but those sorts of people can find and install\nit for themselves.\n\nOf course the same worries apply to the untrusted variants of pltcl,\nplperl, etc. Perhaps we need to document the risks they carry. But\nplsh hasn't even got the possibility of a trusted variant :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Nov 2001 22:36:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Diff/Patch integration -> SQL cvs clone "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Peter, should plsh be added to our supplied server-side programming\n> > languages? Seems like a major feature to me and to others as well.\n> \n> While plsh is surely a cool hack, I've got considerable unease about\n> making it into an officially supported feature. The only reasons I can\n> see for wanting to use it (over plpgsql, pltcl, plperl, etc) are\n> inherently violations of transaction semantics. Who's going to roll\n> back your sendmail call when the calling transaction later aborts?\n> What's going to ensure that the order of external effects has something\n> to do with the serialization order that the database assigns to several\n> concurrent transactions?\n> \n> IMHO plsh is a great tool for shooting yourself in the foot (with a\n> large-gauge firearm, in fact). People who know what they're doing are\n> welcome to use it ... but those sorts of people can find and install\n> it for themselves.\n> \n> Of course the same worries apply to the untrusted variants of pltcl,\n> plperl, etc. Perhaps we need to document the risks they carry. But\n> plsh hasn't even got the possibility of a trusted variant :-(\n\nYes, I see your point, but if you need to do something in the operating\nsystem, like send mail, you are stuck with leaving transactions\nsemantics and security anyway. If they have to leave those anyway, why\nnot give them simple shell scripts?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 22:44:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Diff/Patch integration -> SQL cvs clone"
},
{
"msg_contents": "Dear friends\n\nI was surprised to notice that INTERBASE already has more than a dozen \ngraphical administration interfaces:\n- http://www.interbase2000.org/tools_dbman.htm,\n- http://delphree.clexpert.com.\n\nIn my dreams, I would welcome ALTER OR REPLACE VIEW + ALTER OR REPLACE \nTRIGGER (sorry, this is on my whish list again). Without these two features, \nthis is not easily ***possible*** to build a graphical admin & IDE interface \nfor PostgreSQL.\n\nKDE3/QT3 makes it possible to build cross-platform database administration \ntools. We don't need a dozen of them, just one good IDE with migration \nfeatures from MySQL, Oracle, MS SQL, Interbase...\n\nThe community is waiting for ALTER OR REPLACE, my friends... We also need \nALTER TABLE DROP FIELD, but this is probably more tricky.\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Thu, 22 Nov 2001 12:01:54 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "ALTER OR REPLACE feature"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> \n> Dear friends\n> \n> I was surprised to notice that INTERBASE already has more than a dozen\n> graphical administration interfaces:\n> - http://www.interbase2000.org/tools_dbman.htm,\n> - http://delphree.clexpert.com.\n> \n> In my dreams, I would welcome ALTER OR REPLACE VIEW + ALTER OR REPLACE\n> TRIGGER (sorry, this is on my whish list again). Without these two features,\n> this is not easily ***possible*** to build a graphical admin & IDE interface\n> for PostgreSQL.\n> \n> KDE3/QT3 makes it possible to build cross-platform database administration\n> tools. We don't need a dozen of them, just one good IDE with migration\n> features from MySQL, Oracle, MS SQL, Interbase...\n\nCheck out TOra - this is currently for Oracle only, but teher is no\ninherent \nreason why you can't alter it for others.\n\n> The community is waiting for ALTER OR REPLACE, my friends... We also need\n> ALTER TABLE DROP FIELD, but this is probably more tricky.\n> \n> Cheers,\n> Jean-Michel POURE\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Thu, 22 Nov 2001 13:44:01 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] ALTER OR REPLACE feature"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Peter, should plsh be added to our supplied server-side programming\n> > languages? Seems like a major feature to me and to others as well.\n> \n> I get mail from people that are using it for some pretty ugly\n> applications. Not the sort of stuff we want to let loose on newbies. I\n> should probably stick a warning in there somewhere.\n\nYes, if we outline the cases where they should/shouldn't be use plsh, it\nseems fine to me.\n\nAdded to TODO:\n\n\t* Add plsh server-side shell language (Peter E)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Nov 2001 12:18:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Diff/Patch integration -> SQL cvs clone"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Peter, should plsh be added to our supplied server-side programming\n> languages? Seems like a major feature to me and to others as well.\n\nI get mail from people that are using it for some pretty ugly\napplications. Not the sort of stuff we want to let loose on newbies. I\nshould probably stick a warning in there somewhere.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 22 Nov 2001 18:21:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Diff/Patch integration -> SQL cvs clone"
}
]
[
{
"msg_contents": "Marc,\n\ncould you please explain what's happens with routing to\nfts.postgresql.org ? It's down for about a week !\nI already asked you in several messages but didn't get\nany clear explanation. Does postgresql community really\nneeds fts.postgresql.org ? Probably somebody could host\nfts.postgresql.org ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 12 Nov 2001 01:05:52 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "fts.postgresql.org problem ! still no routing"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> could you please explain what's happens with routing to\n> fts.postgresql.org ? It's down for about a week !\n\nThe DNS entry still points at 151.net; perhaps it needs to be\nrepointed to one of the rackspace machines?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 21:43:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fts.postgresql.org problem ! still no routing "
},
{
"msg_contents": "\nAck, okay, hadn't realized the problem was as bad as it was ... in\ncleaning up the old server(s), I had accidentally removed the IP that\nfts.postgresql.org is/was pointin gat ... fixed now, please let me know if\nits okay ...\n\n\nOn Mon, 12 Nov 2001, Oleg Bartunov wrote:\n\n> Marc,\n>\n> could you please explain what's happens with routing to\n> fts.postgresql.org ? It's down for about a week !\n> I already asked you in several messages but didn't get\n> any clear explanation. Does postgresql community really\n> needs fts.postgresql.org ? Probably somebody could host\n> fts.postgresql.org ?\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n",
"msg_date": "Tue, 13 Nov 2001 12:41:04 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: fts.postgresql.org problem ! still no routing"
},
{
"msg_contents": "\nright now, we haven't moved fts.postgresql.org over to the new server(s)\n... I had inadvertantly removed the IP from the old server, though, while\ndoing a major clean up of IPs ... fixed now ...\n\nOn Sun, 11 Nov 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > could you please explain what's happens with routing to\n> > fts.postgresql.org ? It's down for about a week !\n>\n> The DNS entry still points at 151.net; perhaps it needs to be\n> repointed to one of the rackspace machines?\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Tue, 13 Nov 2001 12:41:57 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: fts.postgresql.org problem ! still no routing "
},
{
"msg_contents": "Marc,\n\nglad to hearing from you ! http://fts.postgresql.org points to\nhttp://www.hub.org/ !!!!!\n\nOoh, I had to restart httpd servers. It works now. Thanks\n\n\tRegards,\n\n\t\tOleg\nOn Tue, 13 Nov 2001, Marc G. Fournier wrote:\n\n>\n> Ack, okay, hadn't realized the problem was as bad as it was ... in\n> cleaning up the old server(s), I had accidentally removed the IP that\n> fts.postgresql.org is/was pointin gat ... fixed now, please let me know if\n> its okay ...\n>\n>\n> On Mon, 12 Nov 2001, Oleg Bartunov wrote:\n>\n> > Marc,\n> >\n> > could you please explain what's happens with routing to\n> > fts.postgresql.org ? It's down for about a week !\n> > I already asked you in several messages but didn't get\n> > any clear explanation. Does postgresql community really\n> > needs fts.postgresql.org ? Probably somebody could host\n> > fts.postgresql.org ?\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 13 Nov 2001 21:10:31 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: fts.postgresql.org problem ! still no routing"
}
]
[
{
"msg_contents": "I tried to submit a regression test:\n\nWarning: PostgreSQL query failed: ERROR: parser: parse error at or near \"t\"\nin /usr/local/www/developer/regress/regress.php on line 258\n\nWarning:\nfopen(\"/home/projects/pgsql/developers/vev/public_html/regress/regress/10055\n31913.failure\",\"w\") - No such file or directory in\n/usr/local/www/developer/regress/regress.php on line 265\nDatabase write failed.\n\nChris\n\n",
"msg_date": "Mon, 12 Nov 2001 10:25:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "regression test database borked"
}
]
[
{
"msg_contents": "\nYou can see that we have very few open items and beta2 has generated no\nserious problems since its release on Wednesday. Should we consider a\ndate for RC1?\n\nItems postponed for 7.3 are at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 2 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\n* ALL ITEMS ARE COMPLETED *\n\nSource Code Changes\n-------------------\nCompile in syslog feature by default? (Peter, Tom)\nAIX compile (Tatsuo)\necpg patches from Christof (Michael)\n\nDocumentation Changes\n---------------------\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 01:22:40 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Open items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> You can see that we have very few open items\n\nNow that we have a test case for Warren Volz' problem (which I think is\nthe same thing that Barry Lind reported a week ago), I consider it a\n\"must fix\" for 7.2. No time estimate yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Nov 2001 10:23:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open items "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > You can see that we have very few open items\n> \n> Now that we have a test case for Warren Volz' problem (which I think is\n> the same thing that Barry Lind reported a week ago), I consider it a\n> \"must fix\" for 7.2. No time estimate yet.\n\nAgreed. I was just giving everyone a _heads-up_ that we may be close to\nRC1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 11:36:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "\nEarliest RC1 will be is Dec 1st ...\n\nOn Mon, 12 Nov 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > You can see that we have very few open items\n> >\n> > Now that we have a test case for Warren Volz' problem (which I think is\n> > the same thing that Barry Lind reported a week ago), I consider it a\n> > \"must fix\" for 7.2. No time estimate yet.\n>\n> Agreed. I was just giving everyone a _heads-up_ that we may be close to\n> RC1.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n",
"msg_date": "Tue, 13 Nov 2001 12:43:44 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> \n> Earliest RC1 will be is Dec 1st ...\n\nAnd you get that date from where? We haven't even discussed it.\n\nI am not saying it is a bad date, but it seems we should poll people\nfirst to see when they want it.\n\nWhen do people want RC1? Let's hear from you.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 12:50:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "\nNot sure which part you misunderstood ... I said \"Earliest RC1\", not \"RC1\nwill be\" ... there will be a Beta3 before RC1, today is the 13th, and Tom\nLane doesn't have a fix ready for what he's working on ...\n\nGiving Tom a couple of days, then figuring at least a week for Beta3 to be\nout the door, we're looking at the earliest possible RC1 being Dec 1st ...\n\nIt's not a matter of when ppl want RC1 ... it's a matter of when RC1 is\ndeemed ready ... it's not something to vote or poll ppl about, but thanks\nfor the attempt ...\n\nOn Tue, 13 Nov 2001, Bruce Momjian wrote:\n\n> >\n> > Earliest RC1 will be is Dec 1st ...\n>\n> And you get that date from where? We haven't even discussed it.\n>\n> I am not saying it is a bad date, but it seems we should poll people\n> first to see when they want it.\n>\n> When do people want RC1? Let's hear from you.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Tue, 13 Nov 2001 12:57:04 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> \n> Not sure which part you mis-understod ... I said \"Earliest RC1\", not \"RC1\n> will be\" ... there will be a Beta3 before RC1, today is the 13th, and Tom\n> Lane doesn't have a fix ready for what he's workin on ...\n> \n> Giving Tom a couple of days, then figuring at least a week for Beta3 to be\n> out the door, we're looking at the earliest possible RC1 being Dec 1st ...\n> \n> Its not a matter of when ppl want RC1 ... its a matter of whe RC1 is\n> deemed ready ... its not something to vote or poll ppl about, but thanks\n> for the attempt ...\n\nI personally think RC1 can happen before December 1. That was my point.\n\nI also am not sure if we need a beta3 seeing how few problems there were\nwith beta2. But again, I am interested in what others say.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 13:01:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "On Tue, 13 Nov 2001, Bruce Momjian wrote:\n\n> >\n> > Earliest RC1 will be is Dec 1st ...\n>\n> And you get that date from where? We haven't even discussed it.\n>\n> I am not saying it is a bad date, but it seems we should poll people\n> first to see when they want it.\n>\n> When do people want RC1? Let's hear from you.\n\nI think he's trying to tell you he won't be ready to do one till at least\nthen, not that that's the date.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 13 Nov 2001 13:10:04 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "On Tue, 13 Nov 2001, Bruce Momjian wrote:\n\n> >\n> > Not sure which part you mis-understod ... I said \"Earliest RC1\", not \"RC1\n> > will be\" ... there will be a Beta3 before RC1, today is the 13th, and Tom\n> > Lane doesn't have a fix ready for what he's workin on ...\n> >\n> > Giving Tom a couple of days, then figuring at least a week for Beta3 to be\n> > out the door, we're looking at the earliest possible RC1 being Dec 1st ...\n> >\n> > Its not a matter of when ppl want RC1 ... its a matter of whe RC1 is\n> > deemed ready ... its not something to vote or poll ppl about, but thanks\n> > for the attempt ...\n>\n> I personally think RC1 can happen before December 1. That was my point.\n>\n> I also am not sure if we need a beta3 seeing how few problems there were\n> with beta2. But again, I am interested in what others say.\n\nLooking at the number of commits that went on in the past couple of days,\nI won't put out an RC1 without a Beta3 to make sure that all potential\nbugs are covered over ... looking at a calendar, though ... December 1st\n*might* be doable ...\n\nfigure Beta3 the end of this week, depending on Tom's luck with that bug\nhe's working on ... all goes smooth with Beta3, you may be right and we\ncould get an RC1 packaged for the end of next week ... I gotta start\nlooking at calendars more often ...\n\nA lot depends on feedback on Beta3 ... and considering that there has been\nmore feedback so far with Beta2 than there was with Beta1, I still\nwouldn't hold my breath for Dec 1st, but it is conceivable ...\n\n\n",
"msg_date": "Tue, 13 Nov 2001 13:14:32 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> Looking at the number of commits that went on in the past couple of days,\n> I won't put out an RC1 without a Beta3 to make sure that all potential\n> bugs are covered over ... looking at a calender, though ... December 1st\n> *might* be doable ...\n\nSorry. I didn't mean to push you. If you are very busy now and\nafter December 1 is better for you, that is fine.\n\n> figure Beta3 the end of this week, depending on Tom's luck with that bug\n> he's working on ... all goes smooth with Beta3, you may be right and we\n\nActually, I thought he had that fixed, or is it a different one. If it\nis the one I am thinking of, he sent a bug fix to the user, the user\nconfirmed it was fixed, and he committed the fix to CVS.\n\n> could get a Rc1 packaged for the end of next week ... I gotta start\n> looking at calender's more often ...\n\nLet's see what Tom says. Maybe he or others want more time as you\nsuggested.\n\n> Alot depends on feedback on Beta3 ... and considering that there has been\n> more feedback so far with Beta2 then there was with Beta1, I still\n> wouldn't hold my breath for Dec 1st, but it is conceivable ...\n\nSure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 13:18:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > You can see that we have very few open items\n> \n> Now that we have a test case for Warren Volz' problem (which I think is\n> the same thing that Barry Lind reported a week ago), I consider it a\n> \"must fix\" for 7.2. No time estimate yet.\n\nWoohoo, I used the recently fixed fts.postgresql.org and found the fix\nTom has made:\n\nPatch:\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1061120\n\nUser confirmed fix:\n\n\thttp://fts.postgresql.org/db/mw/msg.html?mid=1090491\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 13:22:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Not sure which part you mis-understod ... I said \"Earliest RC1\", not \"RC1\n> will be\" ... there will be a Beta3 before RC1, today is the 13th, and Tom\n> Lane doesn't have a fix ready for what he's workin on ...\n\n?? If you're thinking about that EvalPlanQual-crash issue I was worried\nabout yesterday, that was yesterday ;-). It's fixed.\n\nWe do need a beta3, but I'd offer that we could put that out late this\nweek and plan for RC1 right after Thanksgiving holiday (ie, around 26\nNov, for the non-Americans on the list).\n\nSo far this has been a *very* quiet beta cycle, so I don't see a reason\nnot to be aggressive on the schedule. We can always slip if problems\ncome up --- but if we're not seeing any problems, why wait?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 13:26:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open items "
},
{
"msg_contents": "On Tue, 13 Nov 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Not sure which part you mis-understod ... I said \"Earliest RC1\", not \"RC1\n> > will be\" ... there will be a Beta3 before RC1, today is the 13th, and Tom\n> > Lane doesn't have a fix ready for what he's workin on ...\n>\n> ?? If you're thinking about that EvalPlanQual-crash issue I was worried\n> about yesterday, that was yesterday ;-). It's fixed.\n\nthat was the one ...\n\n> We do need a beta3, but I'd offer that we could put that out late this\n> week and plan for RC1 right after Thanksgiving holiday (ie, around 26\n> Nov, for the non-Americans on the list).\n\nya, after looking at a calendar a bit more closely, I was starting to see\nwhere Bruce's thoughts were coming from ...\n\nLet's go for beta3 on Friday, and try for RC1 by the following Friday if\nbeta3 goes quiet ...\n\n\n",
"msg_date": "Tue, 13 Nov 2001 13:33:51 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items "
},
{
"msg_contents": "On Tue, 13 Nov 2001, Bruce Momjian wrote:\n\n> > Looking at the number of commits that went on in the past couple of days,\n> > I won't put out an RC1 without a Beta3 to make sure that all potential\n> > bugs are covered over ... looking at a calender, though ... December 1st\n> > *might* be doable ...\n>\n> Sorry. I didn't mean to push you. If you are very busy now and\n> after December 1 is better for you, that is fine.\n\nNope, for some reason I thought Dec 1st was a lot closer than it actually\nis ...\n\n\n",
"msg_date": "Tue, 13 Nov 2001 13:34:51 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> > We do need a beta3, but I'd offer that we could put that out late this\n> > week and plan for RC1 right after Thanksgiving holiday (ie, around 26\n> > Nov, for the non-Americans on the list).\n> \n> ya, after looking at a calender a bit more closely, I was starting to see\n> where Bruce's thoughts were coming from ...\n> \n> Let's go for beta3 on Friday, and try for RC1 by the following Friday if\n> beta3 goes quiet ...\n\nSounds great!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 13:36:12 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Source Code Changes\n> -------------------\n> Compile in syslog feature by default? (Peter, Tom)\n\nI consider this dead/postponed.\n\n> AIX compile (Tatsuo)\n> ecpg patches from Christof (Michael)\n\n* The last message translations should be in before RC1. Stuff that\ndoesn't compile at that time will be disabled. I suggest that from now on\nno more gratuitous \"word smithing\" in the C code, for the benefit of\ntranslators -- most of our messages stink anyway, and they ain't getting\nbetter with two more spaces in them. ;-)\n\n* Something needs to be done about the expected file for the geometry\ntest. The standard file used to work on my system ever since I can\nremember, but now something's changed (not my system).\n\nThe thing is that the expected file was last changed without the input\nfile changing. This must not happen, IMNSHO.\n\n\n> Documentation Changes\n> ---------------------\n\n* Re-make key words table (tomorrow at the latest)\n\n* Make man pages (I'll look at that over the weekend. RC1 should be\nconsidered reference page freeze so I can finalize them.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 14 Nov 2001 17:24:50 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Source Code Changes\n> > -------------------\n> > Compile in syslog feature by default? (Peter, Tom)\n> \n> I consider this dead/postponed.\n\nOK, I will add it to TODO.\n\n> \n> > AIX compile (Tatsuo)\n\n> > ecpg patches from Christof (Michael)\n\nThe ecpg was just applied so that is done.\n\n> \n> * The last message translations should be in before RC1. Stuff that\n> doesn't compile at that time will be disabled. I suggest that from now on\n> no more gratuitous \"word smithing\" in the C code, for the benefit of\n> translators -- most of our messages stink anyway, and they ain't getting\n> better with two more spaces in them. ;-)\n\nAgreed.\n\n> * Something needs to be done about the expected file for the geometry\n> test. The standard file used to work on my system ever since I can\n> remember, but now something's changed (not my system).\n>\n> The things is that the expected file was last changed without the input\n> file changing. This must not happen, IMNSHO.\n> \n\nAdded to open items list.\n\n> \n> > Documentation Changes\n> > ---------------------\n> \n> * Re-make key words table (tomorrow at the latest)\n> \n> * Make man pages (I'll look at that over the weekend. RC1 should be\n> considered reference page freeze so I can finalize them.)\n\nCurrent open items list attached.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n P O S T G R E S Q L\n\n 7 . 2 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nCompile in syslog feature by default? (Peter, Tom)\nAIX compile (Tatsuo)\nFix geometry expected files\nComplete timestamp/current changes\n\nDocumentation Changes\n---------------------\nUpdate keywords table\nMake manual pages",
"msg_date": "Wed, 14 Nov 2001 11:34:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> * Something needs to be done about the expected file for the geometry\n> test. The standard file used to work on my system ever since I can\n> remember, but now something's changed (not my system).\n\nI'll guess that the reference system has changed. I can't freeze my OS\nat some 1996 (or 2000) vintage version to guarantee that results never\nchange. I went the last year or two with that geometry test failing for\nme. I'm not sure what results I'd get with the latest glibc, but once I\nupgrade we'll find out.\n\nWhat system are you running for which you would expect an exact match of\ntest results in the transcendental functions?\n\n> The things is that the expected file was last changed without the input\n> file changing. This must not happen, IMNSHO.\n\nI had carefully inspected the differences (over and over and over) and\nthey were all trivial last-decimal-place kinds of things afaicr.\nActually, I'm responding here like I'm the one who made the change, but\nI haven't gone back to the cvs logs to confirm that.\n\n - Thomas\n",
"msg_date": "Wed, 14 Nov 2001 16:42:47 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> * Something needs to be done about the expected file for the geometry\n> test. The standard file used to work on my system ever since I can\n> remember, but now something's changed (not my system).\n> The things is that the expected file was last changed without the input\n> file changing. This must not happen, IMNSHO.\n\nLockhart has always taken the position that the regression reference\nplatform is whatever he's using ;-). IIRC, he updated from Mandrake 7\nto Mandrake 8, or something like that, over the summer, and voila the\nreference geometry results changed.\n\nYou need to see if any of the existing variant files for geometry\nmatch your platform, and if not make a new variant; in any case add\nan entry to resultmap.\n\nOf course the long-term answer here is to arrange to suppress a few\nlow-order digits when displaying the geometry results, but that's\nnot happening for 7.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Nov 2001 12:07:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open items "
},
{
"msg_contents": "----- Original Message ----- \nFrom: Peter Eisentraut <peter_e@gmx.net>\nSent: Wednesday, November 14, 2001 11:24 AM\n\n> * The last message translations should be in before RC1.\n\nI'll try to finish up Russian translations of pg_dump\nand fix the rest by Dec. 1.\n\n--\nSerguei A. Mokhov\n\n",
"msg_date": "Wed, 14 Nov 2001 15:38:09 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "On Wed, 14 Nov 2001, Thomas Lockhart wrote:\n\n> What system are you running for which you would expect an exact match of\n> test results in the transcendental functions?\n\nI think the general point is that it'd be nice if regression tests didn't\nreport failure due to numerical noise. :-)\n\n> > The things is that the expected file was last changed without the input\n> > file changing. This must not happen, IMNSHO.\n>\n> I had carefully inspected the differences (over and over and over) and\n> they were all trivial last-decimal-place kinds of things afaicr.\n> Actually, I'm responding here like I'm the one who made the change, but\n> I haven't gone back to the cvs logs to confirm that.\n\nIs there some way we can make the tests smart enough to only printout the\nsignificant digits, so that if there is a difference, it is important?\n\nTake care,\n\nBill\n\n",
"msg_date": "Wed, 14 Nov 2001 18:40:52 -0800 (PST)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> Is there some way we can make the tests smart enough to only printout the\n> significant digits, so that if there is a difference, it is important?\n\nWell, what we've avoided doing so far, on the assumption that we might\nmask some subtle but important problem, is to run the select outputs\nthrough a formatting function which strips off a few (in)significant\ndigits.\n\nI'm not sure if to_char() can do the job (it seems to be oriented to\ndoing fixed-length fields) but if it can then we've got something usable\nalready.\n\n - Thomas\n",
"msg_date": "Thu, 15 Nov 2001 03:02:40 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "> On Wed, 14 Nov 2001, Thomas Lockhart wrote:\n> \n> > What system are you running for which you would expect an exact match of\n> > test results in the transcendental functions?\n> \n> I think the general point is that it'd be nice if regression tests didn't\n> report failure due to numerical noise. :-)\n> \n\nAdded to TODO:\n\n* Modify regression tests to prevent failures due to minor numeric\n rounding\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Nov 2001 22:11:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> I'll guess that the reference system has changed. I can't freeze my OS\n> at some 1996 (or 2000) vintage version to guarantee that results never\n> change. I went the last year or two with that geometry test failing for\n> me. I'm not sure what results I'd get with the latest glibc, but once I\n> upgrade we'll find out.\n\nI don't mind which system is the \"reference\" and which ones are\nresultmap-enabled, since this is really only an implementation detail.\nHowever, by changing the expected results of a test without the test input\nchanging, you implicitly deprecate all systems for which this test used to\npass, and there were plenty of them, otherwise we wouldn't have all those\nLinux systems in the supported list.\n\nNevertheless, Mandrake is just about the last OS I would trust to be a\n\"reference\" for floating point results. I will point out that for me the\ngeometry test did not change when I updated from Red Hat 5.2 to 7.0, and\nvarious other Linux distributions apparently accepted the previous results\nas well. Shades of -ffast-math come to mind...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 15 Nov 2001 17:15:04 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "Bill Studenmund writes:\n\n> I think the general point is that it'd be nice if regression tests didn't\n> report failure due to numerical noise. :-)\n\nActually, I'm not convinced all of these are strictly numerical noise.\nDifferences in the 5th out of 10 decimal places or positive vs. negative\nzero look more like \"incorrect optimization\" or \"incomplete floating point\nimplementation\". It could be interesting to do these calculations\nsymbolically and run the final result to plenty of decimal places to see\nwho's right.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 15 Nov 2001 17:15:35 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "On Thu, 15 Nov 2001, Peter Eisentraut wrote:\n\n> Bill Studenmund writes:\n>\n> > I think the general point is that it'd be nice if regression tests didn't\n> > report failure due to numerical noise. :-)\n>\n> Actually, I'm not convinced all of these are strictly numerical noise.\n> Differences in the 5th out of 10 decimal places or positive vs. negative\n> zero look more like \"incorrect optimization\" or \"incomplete floating point\n> implementation\". It could be interesting to do these calculations\n> symbolically and run the final result to plenty of decimal places to see\n> who's right.\n\nI think it would depend on where the result came from. I think the\nproblem the regression tests hit is that we subtract two nearly-equal\nnumbers. Like ones which were the same for say 7 of 15 digits. So the\nanswer we get is only significant to 8 digits, but the display code wants\nto print 15. So we get seven digits of questionable lineage. I'm not sure\nwhat the standards say should go into them.\n\nDoing the calculation to \"plenty\" of decimal places would be really\ninteresting if we could then map the answer back to the number of bits in\nthe mantissa in the original number. But I'm not sure how to do that easily\nin the regression test.\n\nTake care,\n\nBill\n\n",
"msg_date": "Thu, 15 Nov 2001 19:33:25 -0800 (PST)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
}
] |
[
{
"msg_contents": "Hi,\n\nJust wondering - is there any problem with say, exclusively using Portals\nand cursors instead of regular selects in a system?\n\nIs there any reason _not_ to use portals, or vice versa?\n\nChris\n\nps. Has the regression test database been fixed yet?\n\n",
"msg_date": "Mon, 12 Nov 2001 15:07:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Portals vs. Selects"
}
] |
[
{
"msg_contents": "I think there should be a way to specify usernames directly in\npg_hba.conf. We do have the secondary password files, but putting the\nusernames right in the file should be possible. I will put it on the\nTODO list and discuss it later for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 12 Nov 2001 02:09:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Improve pg_hba.conf"
}
] |
[
{
"msg_contents": "I just noticed that you cannot concat a fixed char column and a varchar\ncolumn in 7.1.3. Has this been fixed in 7.2? (I can't check it here...)\n\ntest=# \\d ADOXYZ\n Table \"adoxyz\"\n Attribute | Type | Modifier\n-----------+-------------------+----------\n id | integer |\n firstname | character(24) |\n lastname | character varying |\n created | date |\n\ntest=# select firstname || lastname from ADOXYZ;\nERROR: Unable to identify an operator '||' for types 'bpchar' and 'varchar'\n You will have to retype this query using an explicit cast\n\n",
"msg_date": "Mon, 12 Nov 2001 16:03:41 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Problem concating bpchar and varchar in 7.1.3"
}
] |
[
{
"msg_contents": "Hi!\n\nI just want to know how I can replicate a \"sequence\" using rserv.\nGot any ideas?\n\nThanks.\n\nSherwin\n\n\n",
"msg_date": "Mon, 12 Nov 2001 17:32:28 +0800",
"msg_from": "\"pgsql-hackers\" <pgsql-hackers@fc.emc.com.ph>",
"msg_from_op": true,
"msg_subject": "replicate \"sequence\" using rserv"
}
] |
[
{
"msg_contents": " \n> Looking back at our discussion around 24-Oct, I recall that I was\n> leaning to the idea that the correct interpretation of the spec's\n> \"triggered data change\" rule is that it prohibits scenarios that are\n> impossible anyway under MVCC, because of the MVCC tuple visibility\n> rules. Therefore we don't need any explicit test for triggered data\n> change. But I didn't hear anyone else supporting or disproving\n> that idea.\n> \n> The code as-is is certainly wrong, since it prohibits multiple changes\n> within a transaction, not within a statement as the spec says.\n> \n> Right at the moment I'd favor ripping the code out entirely ... but\n> it'd be good to hear some support for that approach. Comments anyone?\n\nIf I read the code correctly, the \"triggered data change\" check is only\nperformed for keys, that have xmin == GetCurrentTransactionId().\nThose are tuples that have already been modified by current session.\nSince nobody else can touch those (since they are locked), I think the\ncheck is not needed.\n\n(Delete lines 2176 - 2200 and 2211 - 2229, that was your intent, Tom ?)\nI think this would be correct.\n\nI somehow wonder on the contrary why a check would not be necessary\nfor the exact opposite case, where oldtup->t_data->t_xmin != \nGetCurrentTransactionId(), since such a key might have been changed \nfrom another session. (Or does a referenced key always get a lock ?)\n\nAndreas\n",
"msg_date": "Mon, 12 Nov 2001 11:18:43 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Triggered Data Change check "
}
] |
[
{
"msg_contents": "Hello.\nI am building a database that will have a field indicating the time when a record was last modified, and I want to compare it to a file's modification time.\nWhat is the best way to do it? Directly store the file's timestamp as an integer? Or turn the file timestamp into a date and then store it as a date in the DB? Or is there a special field type for that?\n\nThe concern here is portability and ease of use.\n\nThank you.\n\n-- \nICQ: 15605359 Bicho\n =^..^=\nFirst, they ignore you. Then they laugh at you. Then they fight you. Then you win. Mahatma Gandhi.\n........Por que no pensaran los hombres como los animales? Pink Panther........\n-------------------------------気検体の一致------------------------------------\n暑さ寒さも彼岸まで。\nアン アン アン とっても大好き\n\n",
"msg_date": "Mon, 12 Nov 2001 07:10:45 -0600",
"msg_from": "David Eduardo Gomez Noguera <davidgn@servidor.unam.mx>",
"msg_from_op": true,
"msg_subject": "timestamp"
},
{
"msg_contents": "\nlong stamp=System.currentTimeMillis(); // Or whatever you use to get your long.\nTimestamp ts = new Timestamp(stamp);\n\nSeems like C++ or some other OO-language code; it hides most of the main things.\n\nI was wondering whether the table should have a date field as the field for the timestamp, or an integer field,\ni.e. create table p (... timestamp date ...)\nor create table p (... timestamp integer ...)\n\nor whether the pgsql interface libraries have a structure (plain C programming here) or a variable defined for dates from files (creation time, modification time, etc...), since it might vary from filesystem to filesystem; or whether I should store the file's time into the database as an integer into an integer field, or turn the time from the file into a string and then store it in the database as a date. Always with a query, of course.\nHope that helps.\n\nAntonio\n\nDavid Eduardo Gomez Noguera wrote:\n\n> hello.\n> i am doing some database, that will have a field indicating the time when that field was last modified, and want to compare it to a file's modification time.\n> what is the best way to do it? directly store the timestamp of the file as an integer? or turn the file timestamp into date, and then store it as date in the db? or is there a speciall field for that?\n>\n-- \nICQ: 15605359 Bicho\n =^..^=\nFirst, they ignore you. Then they laugh at you. Then they fight you. Then you win. Mahatma Gandhi.\n........Por que no pensaran los hombres como los animales? Pink Panther........\n-------------------------------気検体の一致------------------------------------\n暑さ寒さも彼岸まで。\nアン アン アン とっても大好き\n\n",
"msg_date": "Mon, 12 Nov 2001 09:45:28 -0600",
"msg_from": "David Eduardo Gomez Noguera <davidgn@servidor.unam.mx>",
"msg_from_op": true,
"msg_subject": "Re: timestamp"
}
] |
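The thread above asks how best to store a file's modification time: as an integer, or as a date. One portable option is to convert the epoch mtime into an ISO-8601 string and keep the column a real timestamp type, since PostgreSQL's timestamp types accept ISO-8601 text as input. A minimal Python sketch (the function names are invented for illustration, not from the thread):

```python
import os
from datetime import datetime, timezone

def file_mtime_iso(path):
    """Return the file's modification time as an ISO-8601 UTC string,
    suitable as input for a PostgreSQL timestamp column."""
    mtime = os.path.getmtime(path)  # seconds since the epoch, as a float
    return datetime.fromtimestamp(mtime, tz=timezone.utc).isoformat()

def is_stale(stored, path):
    """True if the file on disk is newer than the stored timestamp.
    ISO-8601 strings with the same UTC offset compare correctly as text."""
    return file_mtime_iso(path) > stored
```

Keeping the column a genuine timestamp (rather than an integer) preserves SQL date arithmetic on the database side, while the conversion at the client keeps the comparison portable across filesystems.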
[
{
"msg_contents": "In the CVS tip from this morning:\n\na123=# alter table test add test1 int4 not null;\nERROR: Adding NOT NULL columns is not implemented.\n Add the column, then use ALTER TABLE ADD CONSTRAINT.\na123=# alter table test add test1 int4 null;\nALTER\na123=#\n\nI'm pretty sure the first one used to work just fine. Is this\nintentional breakage?\n\n-Brad\n",
"msg_date": "Mon, 12 Nov 2001 11:32:05 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE ADD COLUMN can't use NOT NULL?"
},
{
"msg_contents": "Bradley McLean <brad@bradm.net> writes:\n> In the CVS tip from this morning:\n> a123=# alter table test add test1 int4 not null;\n> ERROR: Adding NOT NULL columns is not implemented.\n> Add the column, then use ALTER TABLE ADD CONSTRAINT.\n\n> I'm pretty sure the first one used to work just fine.\n\nNo, it never worked per spec. The spec requires the constraint to\nbe enforced immediately, and since the values of the new column\nwould all be null, there's no way for this to be a legal command.\n\nWhat's legal per spec is an ADD that provides a DEFAULT along with\nspecifying NOT NULL. But we don't support ADD with a DEFAULT yet :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Nov 2001 12:36:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN can't use NOT NULL? "
},
{
"msg_contents": "> Bradley McLean <brad@bradm.net> writes:\n> > In the CVS tip from this morning:\n> > a123=# alter table test add test1 int4 not null;\n> > ERROR: Adding NOT NULL columns is not implemented.\n> > Add the column, then use ALTER TABLE ADD CONSTRAINT.\n>\n> > I'm pretty sure the first one used to work just fine.\n>\n> No, it never worked per spec. The spec requires the constraint to\n> be enforced immediately, and since the values of the new column\n> would all be null, there's no way for this to be a legal command.\n>\n> What's legal per spec is an ADD that provides a DEFAULT along with\n> specifying NOT NULL. But we don't support ADD with a DEFAULT yet :-(\n\nAs far as I am aware, we don't even support using ALTER TABLE ADD CONSTRAINT\nto add a NOT NULL constraint, so I have no idea why the ERROR: message tells\npeople to do that!!!\n\nOr am I wrong?\n\nChris\n\n",
"msg_date": "Tue, 13 Nov 2001 09:38:46 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN can't use NOT NULL? "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> As far as I am aware, we don't even support using ALTER TABLE ADD CONSTRAINT\n> to add a NOT NULL constraint, so I have no idea why the ERROR: message tells\n> people to do that!!!\n\nNot directly, but you can add a CHECK(foo NOT NULL) constraint.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Nov 2001 20:52:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN can't use NOT NULL? "
},
{
"msg_contents": "Thus spake Bradley McLean\n> a123=# alter table test add test1 int4 not null;\n> ERROR: Adding NOT NULL columns is not implemented.\n> Add the column, then use ALTER TABLE ADD CONSTRAINT.\n> a123=# alter table test add test1 int4 null;\n> ALTER\n> a123=#\n> \n> I'm pretty sure the first one used to work just fine. Is this\n> intentional breakage?\n\nAre you sure? I seem to recall that it was accepted but the constraint\nwas simply ignored. I thought that the recent change was just that it\nrejected the attempt.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 13 Nov 2001 07:36:44 -0500 (EST)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN can't use NOT NULL?"
}
] |
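The workaround Tom Lane describes, adding the column nullable, backfilling a value, and then enforcing NOT NULL through a CHECK constraint, amounts to a fixed three-statement sequence. A hypothetical Python helper that generates it (the function and constraint naming are illustrative, not part of PostgreSQL; no quoting or validation of identifiers is attempted):

```python
def add_not_null_column_sql(table, column, coltype, backfill):
    """The three-step workaround from the thread: add the column
    nullable, backfill every row, then enforce NOT NULL via a CHECK
    constraint (7.x has no ALTER TABLE ... SET NOT NULL)."""
    return [
        "ALTER TABLE %s ADD COLUMN %s %s;" % (table, column, coltype),
        "UPDATE %s SET %s = %s;" % (table, column, backfill),
        "ALTER TABLE %s ADD CONSTRAINT %s_%s_nn CHECK (%s IS NOT NULL);"
        % (table, table, column, column),
    ]
```

The UPDATE step is what makes the final CHECK constraint satisfiable, which is exactly why the spec-conforming one-statement form requires a DEFAULT.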
[
{
"msg_contents": "I am writing an analysis package that needs to create a table and index on a\nlive system.\n\nThe problem:\n\nI have a program which does data analysis which, when completed, copies the\nresults back to PostgreSQL. This has to be done on a live system; therefore,\nthe summary table must be indexed prior to use. Here are the steps currently\nneeded:\n\ncreate table fubar_tmp (...);\ncopy fubar_tmp from stdin ;\ncreate index fubar_tmp_id on fubar_tmp (id);\nalter table fubar rename to fubar_old;\nalter table fubar_tmp rename to fubar;\ndrop table fubar_old;\ncreate index fubar_id on fubar(id);\ndrop index fubar_tmp_id;\n\n\nIt would be useful to be able to do it this way:\n\ncreate table fubar_tmp (...);\ncopy fubar_tmp from stdin ;\nalter index fubar_id rename to fubar_id_old;\ncreate index fubar_id on fubar_tmp (id);\nalter table fubar rename to fubar_old;\nalter table fubar_tmp rename to fubar;\ndrop table fubar_old;\n\nThe ability to rename an index so that it follows the table for which it was\ncreated would be very helpful. Otherwise one has to create a second index\nprior to the summary tables being swapped, or come up with some way to track the\nindex name.\n",
"msg_date": "Mon, 12 Nov 2001 13:03:11 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "rename index?"
},
{
"msg_contents": "ALTER TABLE RENAME works on indexes (at least in recent releases).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Nov 2001 13:33:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rename index? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> ALTER TABLE RENAME works on indexes (at least in recent releases).\n> \n> regards, tom lane\n\nHmm, how does that work? Is there a naming convention which I must follow for\nthis to work? (I am using 7.2B2 to develop this system).\n",
"msg_date": "Mon, 12 Nov 2001 13:35:52 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: rename index?"
},
{
"msg_contents": "On 12 Nov 2001 at 13:35 (-0500), mlw wrote:\n| Tom Lane wrote:\n| > \n| > ALTER TABLE RENAME works on indexes (at least in recent releases).\n| > \n| > regards, tom lane\n| \n| Hmm, how does that work? Is there a naming convention which I must follow for\n| this to work? (I am using 7.2B2 to devlope this system).\n\nALTER TABLE /will/ keep the index on the table, but the index will\nretain its original name, i.e., idx_fubar_tmp_id, even after the \ntable is renamed, so doing your create/copy/rename/drop sequence will \nnot work the second time, since the temp index already exists (by name).\n\nThis seems like a useful feature to have. Is there anything like \nthis in SQL99?\n\ncheers.\n brent-'who doesn''t have SQL99 docs'\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Mon, 12 Nov 2001 14:32:19 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: rename index?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> ALTER TABLE RENAME works on indexes (at least in recent releases).\n> \n> regards, tom lane\n\nOH, stupid me, I didn't get what you meant. Treat the index name as the table\nname, i.e.\n\nalter table fubar_idx rename to fubar_idx_old;\n\nYes, that works, but I would never have guessed that. Is that what Postgres\nshould be doing?\n\nMight not it be useful to have an \"alter Object ...\" which will work on\nPostgres objects, like sequences, functions, etc. to make general changes.\nUsing alter table to rename an index seems a bit arcane.\n",
"msg_date": "Mon, 12 Nov 2001 18:19:14 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: rename index?"
},
{
"msg_contents": "> Tom Lane wrote:\n> > \n> > ALTER TABLE RENAME works on indexes (at least in recent releases).\n> > \n> > regards, tom lane\n> \n> OH, stupid me, I didn't get what you meant. Treat the index name as the table\n> name, i.e.\n> \n> alter table fubar_idx rename to fubar_idx_old;\n> \n> Yes, that works, but I would never have guessed that. Is that what Postgres\n> should be doing?\n> \n> Might not it be useful to have an \"alter Object ...\" which will work on\n> Postgres objects, like sequences, functions, etc. to make general changes.\n> Using alter table to rename an index seems a bit arcane.\n\nWe have already forced DROP object to honor the object type, so ALTER\nTABLE should do the same, right? Do we need to add an ALTER INDEX\ncommand, and an ALTER SEQUENCE command too? Maybe ALTER NONTABLE? :-)\n\nAdded to TODO:\n\n o Prevent ALTER TABLE RENAME from renaming indexes and sequences (?)\n\nWe can figure out later how we want to address this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 21:43:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rename index?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Added to TODO:\n> o Prevent ALTER TABLE RENAME from renaming indexes and sequences (?)\n\nThis would clearly be a step backwards, unless we provide alternate\nsyntax.\n\nWhile it's maybe a tad inconsistent to allow ALTER TABLE RENAME to work\non the other relation types, I'm having a hard time getting excited about\ndoing any work just to be more rigid about it. There's a good reason\nfor DROP to be extremely tight about what it will do: you can't always\nundo it. So the more checking we can do to be sure you meant what you\nsaid, the better. OTOH a mistaken RENAME is easy enough to undo, so I'm\nnot so concerned about having very tight consistency checking on it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Nov 2001 22:11:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rename index? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Added to TODO:\n> > o Prevent ALTER TABLE RENAME from renaming indexes and sequences (?)\n> \n> This would clearly be a step backwards, unless we provide alternate\n> syntax.\n> \n> While it's maybe a tad inconsistent to allow ALTER TABLE RENAME to work\n> on the other relation types, I'm having a hard time getting excited about\n> doing any work just to be more rigid about it. There's a good reason\n> for DROP to be extremely tight about what it will do: you can't always\n> undo it. So the more checking we can do to be sure you meant what you\n> said, the better. OTOH a mistaken RENAME is easy enough to undo, so I'm\n> not so concerned about having very tight consistency checking on it.\n\nGood point! Item removed from TODO. I will add documentation about\nthis capability. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 22:12:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rename index?"
}
] |
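Given Tom Lane's point that `ALTER TABLE ... RENAME` also accepts an index name, the table-swap sequence from the start of the thread can be made repeatable without ever rebuilding the index on the live table. A hypothetical Python sketch that generates the SQL (all names and the `_idx` convention are illustrative):

```python
def swap_table_sql(table, col):
    """Hot-swap a freshly loaded <table>_tmp into place. Relies on
    ALTER TABLE ... RENAME working on index names (as noted in the
    thread) so the index name follows the table and the same sequence
    can be run again on the next refresh."""
    tmp, old = table + "_tmp", table + "_old"
    idx = "%s_%s_idx" % (table, col)  # name of the live table's index
    return [
        "CREATE INDEX %s_tmp ON %s (%s);" % (idx, tmp, col),
        "ALTER TABLE %s RENAME TO %s;" % (table, old),
        "ALTER TABLE %s RENAME TO %s;" % (tmp, table),
        "DROP TABLE %s;" % old,  # also drops the old index, freeing its name
        "ALTER TABLE %s_tmp RENAME TO %s;" % (idx, idx),  # rename-an-index trick
    ]
```

Dropping the old table first frees the index name, so the final rename never collides, which was the failure mode Brent Verner pointed out for the original sequence.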
[
{
"msg_contents": "Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> I had to guess what a \"trusted\" language should or should not do. For\n> example, I do not allow Python to report the platform it is running on,\n> though I do allow it to report the native byte-order and interpreter\n> version.\n\nDo you mean platform type, or host name? \"select version()\" reports the\nplatform type, so I see no reason to hide that in plpython. Otherwise\nthe changes sound reasonable in brief. I don't know Python well enough\nto review the change properly, though. D'Arcy or someone, please check\nit...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Nov 2001 13:32:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas) "
},
{
"msg_contents": "On Mon, 12 Nov 2001, Tom Lane wrote:\n> Kevin Jacobs <jacobs@penguin.theopalgroup.com> writes:\n> > I had to guess what a \"trusted\" language should or should not do. For\n> > example, I do not allow Python to report the platform it is running on,\n> > though I do allow it to report the native byte-order and interpreter\n> > version.\n>\n> Do you mean platform type, or host name? \"select version()\" reports the\n> platform type, so I see no reason to hide that in plpython.\n\nI meant platform type. Since it's already available in PostgreSQL, I'll make it\navailable in my next patch.\n\n-Kevin\n\n--\nKevin Jacobs\nThe OPAL Group - Enterprise Systems Architect\nVoice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com\nFax: (216) 986-0714 WWW: http://www.theopalgroup.com\n\n\n",
"msg_date": "Mon, 12 Nov 2001 14:19:24 -0500 (EST)",
"msg_from": "Kevin Jacobs <jacobs@penguin.theopalgroup.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible major bug in PlPython (plus some other ideas)"
}
] |
[
{
"msg_contents": "Hi,\nOn one of my production servers running 7.0.3, I now need to use ORDER BY on an INTERSECT statement:\n> template1=# select relname from pg_class intersect select relname from pg_class order by relname;\n> ERROR: get_sortgroupclause_tle: ORDER/GROUP BY expression not found in targetlist\nThis is a bug; the same query works fine with 7.1.2.\nBut meanwhile I can't upgrade this server, so how can I work around this bug?\n\nBest regards from Barcelona,\n\t\tjaume teixi.\n",
"msg_date": "Mon, 12 Nov 2001 20:02:23 +0100",
"msg_from": "Jaume Teixi <teixi@6tems.com>",
"msg_from_op": true,
"msg_subject": "howto bypass the intersect + order by bug in 7.0.3"
}
] |
[
{
"msg_contents": "\nhi,\n\nan index on a table column of any number type only gets honoured if you\nquery it like a string, e.g.\n\ncreate table t1 ( n int2 ) ;\n\ncreate index t1n on t1 (n) ;\n\nexplain select * from t1 where n = 1 ;\n\n-- Seq Scan on t1 (cost=0.00..22.50 rows=10 width=2)\n\nexplain select * from t1 where n = '1' ;\n\n-- Index Scan using t1n on t1 (cost=0.00..8.14 rows=10 width=2)\n\nfirst i thought this might be a psql client error and tried the same via\njdbc, and look, there it happens again. if i create a PreparedStatement and\nbind the INT or LONG value with setLong (1,x) the index won't be used in the\nselect statement. if i bind the value with a setString (1,x+\"\") command,\nthen the index is honoured correctly. I tested the code against postgres\n7.1.3 as well as 7.0.2. this means that i would have to change all my java\ncode from setLong to setString in order to speed up my apps every time i\nquery a number. quite ugly!\n\nilker -)\n\n\n--\n--\ngate5 AG\nschoenhauser allee 62\n10437 berlin\n\nfon + 49 30 446 76 0\nfax + 49 30 446 76 555\n\nhttp://www.gate5.de/ | ilker@gate5.de\n\n\n",
"msg_date": "Mon, 12 Nov 2001 21:06:16 +0100",
"msg_from": "\"Ilker Egilmez\" <ilker@gate5.de>",
"msg_from_op": true,
"msg_subject": "problem: index on number not honoured"
},
{
"msg_contents": ">>>>> \"Ilker\" == Ilker Egilmez <ilker@gate5.de> writes:\n\n\n Ilker> an index on a table column of any number type only gets honoured if you\n Ilker> query it like a string, e.g.\n\n Ilker> create table t1 ( n int2 ) ;\n\n Ilker> create index t1n on t1 (n) ;\n\n Ilker> explain select * from t1 where n = 1 ;\n\n Ilker> -- Seq Scan on t1 (cost=0.00..22.50 rows=10 width=2)\n\n Ilker> explain select * from t1 where n = '1' ;\n\n Ilker> -- Index Scan using t1n on t1 (cost=0.00..8.14 rows=10 width=2)\n\nTwo questions: have you run vacuum analyze on the table? have you\nsubmitted a bug report?\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nroland@astrofoto.org Forest Hills, NY 11375\n",
"msg_date": "15 Nov 2001 11:26:41 -0500",
"msg_from": "Roland Roberts <roland@astrofoto.org>",
"msg_from_op": false,
"msg_subject": "Re: problem: index on number not honoured"
},
{
"msg_contents": "This is an instance of a known problem: numeric constants get resolved to\n'float8' during the parsing stage, so the planner doesn't know it can\nuse the 'int2' (or whatever) index for this query. The 'string like'\nconstants (i.e. anything quoted with ') are kept as 'unknown' until the\nlast stages, when the planner can then attempt to resolve them to match\nthe type of the underlying column/index.\n\nIt is interesting to see your comments regarding the JDBC interface: I\nhaven't seen that angle reported before. To date, it's been a matter of\nsuggesting workarounds to individual cases. The correct solution (which\nhas arisen out of discussion on the HACKERS list: I think it's Tom Lane's\nidea) is probably to come up with the concept of an 'unknown numeric'\nconstant, and treat it like the string constant gets treated. We've seen\na rash of these problems recently: unfortunately, this isn't fixed in\n7.2 (which is in Beta 2 right now: should go Release Candidate 1 tomorrow),\nbut I'd suggest it's a candidate for fixing early in 7.3. Might even\nbe argued as a 'bug fix', but the changes needed to fix this right are\nprobably so extensive that it'll never go into a stable release.\n\nFrom the JDBC side, it _might_ be possible to put in a kludge to do the\nquoting for you, well commented so it gets taken out when the backend\ngets fixed. The JDBC driver has its own release cycle, so that might\ngo in sooner than a 7.3 release ...\n\nP.S. I'm copying HACKERS on this, to let the core know about the impact\non code written to use the JDBC driver.\n\nRoss\n\nOn Mon, Nov 12, 2001 at 09:06:16PM +0100, Ilker Egilmez wrote:\n> \n> hi,\n> \n> an index on a table column of any number type only gets honoured if you\n> query it like a string, e.g.\n> \n> create table t1 ( n int2 ) ;\n> \n> create index t1n on t1 (n) ;\n> \n> explain select * from t1 where n = 1 ;\n> \n> -- Seq Scan on t1 (cost=0.00..22.50 rows=10 width=2)\n> \n> explain select * from t1 where n = '1' ;\n> \n> -- Index Scan using t1n on t1 (cost=0.00..8.14 rows=10 width=2)\n> \n> first i thought this might be an psql client error and tried the same via\n> jdbc, and look, there it happens again. if i create a PreparedStatemnt and\n> bind the INT or LONG value with setLong (1,x) the index won't be used in the\n> select statement. if i bind the value with a setString (1,x+\"\") command,\n> then the index is honored correctly. I tested the code against postgres\n> 7.1.3 as well as 7.0.2. this means that i would have to change all my java\n> code from setLong to setString in order to speed up my apps every time i\n> query a number. quite ugly!\n> \n> ilker -)\n> \n> \n> --\n> --\n> gate5 AG\n> schoenhauser allee 62\n> 10437 berlin\n> \n> fon + 49 30 446 76 0\n> fax + 49 30 446 76 555\n> \n> http://www.gate5.de/ | ilker@gate5.de\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Thu, 15 Nov 2001 11:29:41 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: problem: index on number not honoured"
}
] |
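Ross's explanation is that a quoted literal stays type 'unknown' until the planner can match it to the indexed column, which suggests a client-side kludge until the backend is fixed: render numeric parameters as quoted literals, the same effect as the JDBC setString(1, x + "") trick from the original report. A hypothetical Python sketch (not part of any driver, and it does only naive literal escaping, so it is an illustration of the idea rather than a safe query builder):

```python
def quote_numeric_params(sql, params):
    """Render a '?'-style SQL template, emitting numeric parameters as
    quoted literals so the 7.x planner can coerce them to the indexed
    column's type instead of fixing them as float8 at parse time."""
    parts = sql.split("?")
    if len(parts) - 1 != len(params):
        raise ValueError("parameter count mismatch")
    out = [parts[0]]
    for value, tail in zip(params, parts[1:]):
        if isinstance(value, (int, float)):
            out.append("'%s'" % value)  # quoted: stays 'unknown' until planning
        else:
            out.append("'%s'" % str(value).replace("'", "''"))
        out.append(tail)
    return "".join(out)
```

With this, the int2 example from the thread renders as `select * from t1 where n = '1'`, the form that uses the index.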
[
{
"msg_contents": "When running a test with multiple (>=2) users pg 'seizes up' after a\nfew transactions. Cpu goes 100% and disk goes to 0%. This lasts\n'forever' (overnight). On the same test all other tested databases don't\nhave this problem.\n\nThe error occurs with higher tx rate, when transactions bump into each\nother more frequently. Some 'deadlock detected' messages appear around\nthe hang up time, but _not_ always.\n\nOccasionally - but rarely - the seizure looks different. CPU goes to\n0%, disk goes to 0% and, after about one minute, processing is\nresumed.\n\nThe current suspicion is that it is due to differences in lock and\ndeadlock handling between pg and most other dbs. In general I found\nthat I can't set the transaction isolation level to READ_UNCOMMITTED (as\nfor other databases); the lowest level is READ_COMMITTED.\n\nAny ideas how to reduce this problem? I really want to prove pg perf\nwith 20-100 users and have a problem running 2...\n\nIt wouldn't be fair (to other db's) to rewrite the test with some\npg-only LOCK command etc. I suspect I'm missing something simple, like\none of the .conf parameters.\n\n\nP.S. Using WinNT/Win2K system, pg 7.1.3 (current cygwin), jdbc driver\nis jdbc7.1-1.3, cygipc is 1.10-1, java is 1.3.1_01a (current jdk).\nDefault pg installation, except for bumped up memory and 8 wal files.\n",
"msg_date": "12 Nov 2001 14:51:24 -0800",
"msg_from": "czl@iname.com (charles)",
"msg_from_op": true,
"msg_subject": "pg locking problem"
},
{
"msg_contents": "czl@iname.com (charles) writes:\n> When running a test with multiple (>=2) users pg 'seizes up' after a\n> few transactions.\n\nSince you haven't told us a thing about what this test is, it's hard\nto see how you expect to get any useful help ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 16:43:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg locking problem "
},
{
"msg_contents": "> When running a test with multiple (>=2) users pg 'seizes up' after a\n> few transactions. Cpu goes 100% and disk goes to 0%. This lasts\n> 'forever' (overnight)On the same test all other tested databases don't\n> have this problem.\n> \n> The error occurs with higher tx rate, when transactions bump into each\n> other more frequently. Some 'deadlock detected' messages appear around\n> the hang up time, but _not_ always.\n> \n> Occasionally - but rarely - the seizure looks differently. CPU goes to\n> 0%, disk goes to 0% and, after about one minute, the processing is\n> resumed.\n> \n> The current suspicion is that it is due to difference in lock and\n> deadlock handling between pg and most other dbs. In general I found\n> that I can't set transaction isolation level to READ_UNCOMMITTED (as\n> for other databases), the lowest level is READ_COMMITEED.\n> \n> Any ideas how to reduce this problem? I really want to prove pg perf\n> with 20-100 users and have a problem running 2...\n\nHave you ever tried the test on UNIX (or UNIX like systems)? I have\nnever seen such a problem with PostgreSQL running on UNIX. Also I\nthink PostgreSQL on Win is not ready for practical use...\n\n> It wouldn't be fair (to other db's) to rewrite the test with some\n> pg-only LOCK command etc. I suspect I'm missing something simple, like\n> one of .conf parameters.\n> \n> \n> P.S. Using WinNT/Win2K system, pg 7.1.3 (current cygwin), jdbc driver\n> is jdbc7.1-1.3, cygipc is 1.10-1, java is 1.3.1_01a (current jdk).\n> Default pg installation, except for bumped up memory and 8 wal files.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n",
"msg_date": "Wed, 14 Nov 2001 10:01:12 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg locking problem"
},
{
"msg_contents": "For various reasons, not wholly dependent on me, the test should show\ngood perf on Windows. Otherwise, sorry about that. Can't change the\nplatform for this one. I did scan the archives, without finding\nanything similar - though maybe my search was not thorough enough.\n\nI managed to isolate the bug further.\n\n1. Running with read-only transactions the bug does not occur. This\nmeans that the bug is not directly related to the number of users (as\nlong as there's more than one).\n\n2. Running with read-only transactions _and_ just _one_ type of\nread-write transaction the bug occurs. This means that the bug is not\ncaused by a deadlock - a single transaction type always requests the\ntables in the same order. (Am I right here? I'm sleepy so my thinking\nis not up to scratch). Anyway, regardless of which one of the read-write tx\ntypes is chosen, the problem occurs.\n\n3. Overall this suggests that, in crude terms, the problem is\ntriggered when reading updated but uncommitted records. Possibly even\nby one user reading updated uncommitted records of another (since this\nhappens with only two users).\n\n4. The seizure problem manifests itself in high (100%) cpu\nutilization. Also, about 80% of that cpu utilization is system state.\nAll pg processes (for all users) use about the same amount of cpu time\n- that is, the situation is not caused by one process/user getting out\nof whack.\n\n\nt-ishii@sra.co.jp (Tatsuo Ishii) wrote in message news:<20011114100112O.t-ishii@sra.co.jp>...\n> > When running a test with multiple (>=2) users pg 'seizes up' after a\n> > few transactions. Cpu goes 100% and disk goes to 0%. This lasts\n> > 'forever' (overnight). On the same test all other tested databases don't\n> > have this problem.\n> > \n> > The error occurs with higher tx rate, when transactions bump into each\n> > other more frequently. Some 'deadlock detected' messages appear around\n> > the hang up time, but _not_ always.\n> > \n> > Occasionally - but rarely - the seizure looks differently. CPU goes to\n> > 0%, disk goes to 0% and, after about one minute, the processing is\n> > resumed.\n> > \n> > The current suspicion is that it is due to difference in lock and\n> > deadlock handling between pg and most other dbs. In general I found\n> > that I can't set transaction isolation level to READ_UNCOMMITTED (as\n> > for other databases), the lowest level is READ_COMMITTED.\n> > \n> > Any ideas how to reduce this problem? I really want to prove pg perf\n> > with 20-100 users and have a problem running 2...\n> \n> Have you ever tried the test on UNIX (or UNIX like systems)? I have\n> never seen such a problem with PostgreSQL running on UNIX. Also I\n> think PostgreSQL on Win is not ready for practical use...\n> \n> > It wouldn't be fair (to other db's) to rewrite the test with some\n> > pg-only LOCK command etc. I suspect I'm missing something simple, like\n> > one of .conf parameters.\n> > \n> > \n> > P.S. Using WinNT/Win2K system, pg 7.1.3 (current cygwin), jdbc driver\n> > is jdbc7.1-1.3, cygipc is 1.10-1, java is 1.3.1_01a (current jdk).\n> > Default pg installation, except for bumped up memory and 8 wal files.\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n",
"msg_date": "14 Nov 2001 14:14:33 -0800",
"msg_from": "czl@iname.com (charles)",
"msg_from_op": true,
"msg_subject": "Re: pg locking problem"
},
{
"msg_contents": "On Wed, Nov 14, 2001 at 02:14:33PM -0800, charles wrote:\n> For various reasons, not wholly dependent on me, the test should show\n> good perf on Windows. Otherwise Sorry about that. Can't change the\n> platform for this one. I did scan the archives, without finding\n> anything similar - though maybe my search was not thorough enough.\n> \n> I managed to isolate the bug further. \n\n<minor rant mode>\nBased on your descriptions, I don't think this can really qualify as a\nbug. Asking PostgreSQL to adapt to the locking scheme optimized for a\ndifferent RDBMS, when its underlying locking mechanisms are not only\ndifferent, but fundamentally _better_ in many contexts, is just silly,\nnot to mention not \"fair\" to PostgreSQL. I think you would find (I am,\nof course, guessing, since you've given few actual details of the tests)\nthat the PG specific rewriting would be to _remove_ LOCK calls.\n</minor rant mode>\n\nGiven all that, it is still probably bad behavior for PG to use so\nmuch CPU. We're interested in fixing that, but not just in solving your\nproblem.\n\n> 1. Running with read-only transactions the bug does not occur. This\n> means that the bug is not directly related to the number of users (as\n> long as there's more than one).\n> \n> 2. Running with read-only transactions _and_ just _one_ type of a\n> read-write transaction the bug occurs. This means that the bug is not\n> caused by a deadlock - single transaction type always requests the\n> tables in the same order. (Am I right here? i'm sleepy so my thinking\n> is not up to scratch). Anyway, regardless which one of read-write tx\n> types is chosen, the problem occurs.\n> \n> 3. Overall this suggests that, in crude terms, the problem is\n> triggered when reading updated but uncommitted records. Possibly even\n> by one user reading updated uncommitted records of another (since this\n> happens with only two users)\n> \n> 4. The seizure problem manifests itself in high (100%) cpu\n> utilization. Also, about 80% of that cpu utilization is system state.\n> All pg processes (for all users) use about the same amount of cpu time\n> - that is the situation is not caused by one process/user getting out\n> of whack.\n> \n\nSeveral people have suggested tests you could run to help isolate the\nproblem: running the identical code against an identical PG database\nhosted on Unix would tell us if it's a problem with the NT compatibility\nlayer: i.e. cygwin. As Tatsuo Ishii pointed out, this is a likely cause,\nsince _many_ people use PG for heavy duty service under Unix, and NT\nisn't a primary platform for any of the core developers. But, if your\napplication can trigger the same behavior on a Unix hosted server,\nyou will get a _lot_ of attention, trust me.\n\n> > > P.S. Using WinNT/Win2K system, pg 7.1.3 (current cygwin), jdbc driver\n> > > is jdbc7.1-1.3, cygipc is 1.10-1, java is 1.3.1_01a (current jdk).\n> > > Default pg installation, except for bumped up memory and 8 wal files.\n\nEven if your application _does_ have the same behavior under Unix, the\nnext thing you'd be asked to do is try the latest version, 7.2b2, which\nwould be a good idea anyway, though I don't know if whoever builds the\nNT binaries has built one yet (hint hint): there's been a lot of bug\nfixes and code rework since 7.1.\n\n\nRoss\n\n\n",
"msg_date": "Thu, 15 Nov 2001 13:44:42 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg locking problem"
},
{
"msg_contents": "All,\n\nThanks for all your comments. After looking more at the problem and\nreading your mail I suspect that it may be somewhere at the boundary\nbetween cyg-ipc and postgres. Firstly, as you point out, pg is too\nheavily used on UNIX, hence a bug like that couldn't have gone\nunnoticed. Secondly, nothing in the testing done so far contradicts\nthat hypothesis.\n\nUnfortunately, end of the year is near so I can't put much more into\nthis pg bug - hopefully for the time being. And testing it on UNIX is\nbeyond my time, money and logistics budget on this one, unfortunately.\n\nwith many thanks and hope to revisit it soon (got some promising\nnumbers on single user test)\n\n charles\n\n\nreedstrm@rice.edu (\"Ross J. Reedstrom\") wrote in message news:<20011115134442.A3811@rice.edu>...\n> On Wed, Nov 14, 2001 at 02:14:33PM -0800, charles wrote:\n> > For various reasons, not wholly dependent on me, the test should show\n> > good perf on Windows. Otherwise Sorry about that. Can't change the\n> > platform for this one. I did scan the archives, without finding\n> > anything similar - though maybe my search was not thorough enough.\n> > \n> > I managed to isolate the bug further. \n> \n> <minor rant mode>\n> Based on your descriptions, I don't think this can really qualify as a\n> bug. Asking PostgreSQL to adapt to the locking scheme optimized for a\n> different RDBMS, when it's underlying locking mechanisms are not only\n> different, but fundamentally _better_ in many contexts, is just silly,\n> not to mention not \"fair\" to PostgreSQL. I think you would find, (I am,\n> of course, guessing, since you've given few actual details of the tests)\n> that the PG specific rewriting would be to _remove_ LOCK calls.\n> </minor rant mode>\n> \n> Given all that, it is still probably bad behavior for PG to use so\n> much CPU. We're interested in fixing that, but not just in solving your\n> problem.\n> \n> > 1. Running with read-only transactions the bug does not occur. This\n> > means that the bug is not directly related to the number of users (as\n> > long as there's more than one).\n> > \n> > 2. Running with read-only transactions _and_ just _one_ type of a\n> > read-write transaction the bug occurs. This means that the bug is not\n> > caused by a deadlock - single transaction type always requests the\n> > tables in the same order. (Am I right here? i'm sleepy so my thinking\n> > is not up to scratch). Anyway, regardless which one of read-write tx\n> > types is chosen, the problem occurs.\n> > \n> > 3. Overall this suggests that, in crude terms, the problem is\n> > triggered when reading updated but uncommitted records. Possibly even\n> > by one user reading updated uncommitted records of another (since this\n> > happens with only two users)\n> > \n> > 4. The seizure problem manifests itself in high (100%) cpu\n> > utilization. Also, about 80% of that cpu utilization is system state.\n> > All pg processes (for all users) use about the same amount of cpu time\n> > - that is the situation is not caused by one process/user getting out\n> > of whack.\n> > \n> \n> Several people have suggested tests you could run to help isolate the\n> problem: running the identical code against an identical PG database\n> hosted on Unix would tell us if it's a problem with the NT compatibility\n> layer: i.e. cygwin. As Tatsuo Ishii pointed out, this is a likely cause,\n> since _many_ people use PG for heavy duty service under Unix, and NT\n> isn't a primary platform for any of the core developers. But, if your\n> application can trigger the same behavior on a Unix hosted server,\n> you will get a _lot_ of attention, trust me.\n> \n> > > > P.S. Using WinNT/Win2K system, pg 7.1.3 (current cygwin), jdbc driver\n> > > > is jdbc7.1-1.3, cygipc is 1.10-1, java is 1.3.1_01a (current jdk).\n> > > > Default pg installation, except for bumped up memory and 8 wal files.\n> \n> Even if your application _does_ have the same behavior under Unix, the\n> next thing you'd be asked to do is try the latest version, 7.2b2, which\n> would be a good idea anyway, though I don't know if whoever builds the\n> NT binaries has built one yet (hint hint): there's been a lot of bug\n> fixes and code rework since 7.1.\n> \n> \n> Ross\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "18 Nov 2001 13:30:28 -0800",
"msg_from": "czl@iname.com (charles)",
"msg_from_op": true,
"msg_subject": "Re: pg locking problem"
}
] |
[
{
"msg_contents": "Hi,\n\nI still can't post regression tests BTW:\n\nWarning: PostgreSQL query failed: ERROR: parser: parse error at or near \"t\"\nin /usr/local/www/developer/regress/regress.php on line 258\n\nWarning:\nfopen(\"/home/projects/pgsql/developers/vev/public_html/regress/regress/10056\n15773.failure\",\"w\") - No such file or directory in\n/usr/local/www/developer/regress/regress.php on line 265\nDatabase write failed.\n\nURL: http://developer.postgresql.org/regress/regress.php\n\nChris\n\n",
"msg_date": "Tue, 13 Nov 2001 09:44:16 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Regression test database still stuffed"
}
] |
[
{
"msg_contents": "This patch should help fix cases with two separate fk constraints\nin a row that happen to reference the same pk constraint with\nan on update cascade and similar cases. It now should detect\ncorrectly that a pk row was added later after a delete or update\non a no action deferred fk and not incorrect error as well\nas dropping a check on insert/update to the fk table if the\nrow we're being referred to no longer is valid (I'm using\nHeapTupleSatisfiesItself because the comment implied it was\nwhat I was looking for and it appears to work :) ).\n\nI've got regression tests but I'm holding off on those until\nwe decide whether or not we're dropping the triggered data change\nerrors since a couple of the tests would hit that case and I'll\neither change them or drop them if we're not dropping the error.",
"msg_date": "Mon, 12 Nov 2001 18:22:15 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "More FK patches"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> This patch should help fix cases with two separate fk constraints\n> in a row that happen to reference the same pk constraint with\n> an on update cascade and similar cases.\n\nAren't those NOT EXISTS clauses going to cause a humungous\nperformance hit?\n\nSeems it would be better for the RI triggers to do more in C code\nand stop expecting the query engine to handle these things. I've\nalways thought that ReferentialIntegritySnapshotOverride was an\nabsolutely unacceptable kluge, not least because it's turned on\n*before* we do parsing/planning of the RI queries, and so is\nlikely to screw up system catalog checks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Nov 2001 23:07:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More FK patches "
},
{
"msg_contents": "\nOn Mon, 12 Nov 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > This patch should help fix cases with two separate fk constraints\n> > in a row that happen to reference the same pk constraint with\n> > an on update cascade and similar cases.\n>\n> Aren't those NOT EXISTS clauses going to cause a humungous\n> performance hit?\n\nYou're right. Thinking about it, it would make more sense to check it\nonce for the cases we support, since the only case where a different\nrow would come up would be in match partial. So that should probably\ngo away to a direct search for a matching row.\n\n> Seems it would be better for the RI triggers to do more in C code\n> and stop expecting the query engine to handle these things. I've\n> always thought that ReferentialIntegritySnapshotOverride was an\n> absolutely unacceptable kluge, not least because it's turned on\n> *before* we do parsing/planning of the RI queries, and so is\n> likely to screw up system catalog checks.\nWell, would it time correctly if the override was only around the\nactual execp rather than the prepare and such?\n\nDo you think it would be better to directly implement the constraint\nchecks and actions using scans and C modifying rows rather than the\nquery planner and switch over in 7.3? Without looking too hard yet, I'd be\nworried that it'd end up reimplementing alot of glue code that already\nexists, but maybe that's not so bad. I'm willing to give it a shot, but\nI couldn't guarantee anything and I'd also like to know the reasoning Jan\nhad for his decisions so as to make an informed attempt. :)\n\n\n",
"msg_date": "Tue, 13 Nov 2001 00:25:11 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] More FK patches "
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Well, would it time correctly if the override was only around the\n> actual execp rather than the prepare and such?\n\nThat would definitely feel better, but ultimately global variables\nchanging the behavior of low-level subroutines are Bad News.\n\n> Do you think it would be better to directly implement the constraint\n> checks and actions using scans and C modifying rows rather than the\n> query planner and switch over in 7.3?\n\nI believe that's the way to go in the long run, but I don't have any\nidea how much work might be involved. What I don't like about the\npresent setup is (a) the overhead involved, and (b) the fact that we\ncan't implement quite the right semantics using only user-level queries.\nSELECT FOR UPDATE doesn't get the kind of lock we want, and there are\nthese other issues too.\n\n> I'd also like to know the reasoning Jan had for his decisions so as to\n> make an informed attempt. :)\n\nI should probably let Jan speak for himself, but I'm guessing that\nit was an easy way to get a prototype implementation up and going.\nThat was fine at the time --- it was a pretty neat hack, in fact.\nBut we need to start thinking about an industrial-strength implementation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 18:04:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] More FK patches "
},
{
"msg_contents": "\nWas this resolved?\n\n---------------------------------------------------------------------------\n\n> \n> This patch should help fix cases with two separate fk constraints\n> in a row that happen to reference the same pk constraint with\n> an on update cascade and similar cases. It now should detect\n> correctly that a pk row was added later after a delete or update\n> on a no action deferred fk and not incorrect error as well\n> as dropping a check on insert/update to the fk table if the\n> row we're being referred to no longer is valid (I'm using\n> HeapTupleSatisfiesItself because the comment implied it was\n> what I was looking for and it appears to work :) ).\n> \n> I've got regression tests but I'm holding off on those until\n> we decide whether or not we're dropping the triggered data change\n> errors since a couple of the tests would hit that case and I'll\n> either change them or drop them if we're not dropping the error.\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 21:28:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More FK patches"
},
{
"msg_contents": "\nOn Wed, 21 Nov 2001, Bruce Momjian wrote:\n\n> Was this resolved?\n\nNot yet. Per Tom's message on this, I've started looking at working\non doing more of the fk stuff without relying on SPI. I could do\na version of the patch that did a single query for the two no action\ncases rather than the not exists, but I'm not sure if that's worth\nit then.\n\n\n",
"msg_date": "Wed, 21 Nov 2001 22:57:54 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: More FK patches"
},
{
"msg_contents": "\nIs there a TODO item to add then?\n\n---------------------------------------------------------------------------\n\n> \n> On Wed, 21 Nov 2001, Bruce Momjian wrote:\n> \n> > Was this resolved?\n> \n> Not yet. Per Tom's message on this, I've started looking at working\n> on doing more of the fk stuff without relying on SPI. I could do\n> a version of the patch that did a single query for the two no action\n> cases rather than the not exists, but I'm not sure if that's worth\n> it then.\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Nov 2001 13:38:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More FK patches"
},
{
"msg_contents": "\nOn Thu, 22 Nov 2001, Bruce Momjian wrote:\n\n> Is there a TODO item to add then?\n\nFix foreign key constraints to not error on intermediate states\n of the database.\n\n",
"msg_date": "Thu, 22 Nov 2001 14:30:19 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: More FK patches"
},
{
"msg_contents": "> \n> On Thu, 22 Nov 2001, Bruce Momjian wrote:\n> \n> > Is there a TODO item to add then?\n> \n> Fix foreign key constraints to not error on intermediate states\n> of the database.\n\nAdded to TODO. Thanks. I put your name on it. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Nov 2001 20:45:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More FK patches"
}
] |
[
{
"msg_contents": "\nRight now, referential actions get deferred along with normal\nchecks and run against the state of the database at that time.\nI think this violates SQL92 11.8 General Rules 4-6 and have some\nreasoning and proposed ideas towards making it more complient\nalthough I don't actually have an implementation in mind for\nthe most correct version. :(\n\nHere are my interpretations:\n\n\tGR 4 says that the matching rows (unique and non-unique)\nare determined immediately before the execution of an SQL\nstatement. We can ignore the fluff about non-unique matching\nrows for now because I believe that applies to match partial only.\n\tGR 5 says when there's a delete rule and a row of the\nreferenced table is marked for deletion (if it's not already\nmarked such) then do something based on the action, for example\nmark matching rows for deletion if it is cascade. This seems\nto imply the action is supposed to occur immediately, since\nAFAICS the rows aren't marked for deletion on the commit but\nrather on the delete itself.\n\tGR 6 seems to be pretty much the same for update.\n\nI think the correct course of action would be if I'm right:\n*Make referential actions (other than no action) not deferrable\n and thus initially immediate. This means that you see the\n cascaded (or nulled or defaulted) results immediately, but\n I think that satisfies GRs 5 and 6. It also makes the\n problems of what we can see a little less problematic, but\n doesn't quite cure them.\n*To fix the visibility issues I think we'd need to be able to\n see what rows matched immediately before the statement and\n then reference those rows later, even if the values that we're\n keying on have changed. I'm really not sure how we'd do\n this without a great deal of extra work.\n An intermediate step towards complience would probably\n be making sure the row existed before this statement\n (I think for the fk constraints this means if it was\n created by another statement or a command before this\n one) which is wrong if a row that matched before this\n statement was modified by this statement to a new value\n that we won't match. Most of these cases would be errors\n by sql anyway (I think these'd probably be real triggered\n data change violations) and would be wrong by our current\n implementation as well.\n\nI'm not sure that the intermediate step on the second is\nactually worthwhile over just waiting and trying to do it\nright, but if I'm right in what it takes, it's reasonably\nminimal.\n\n",
"msg_date": "Mon, 12 Nov 2001 18:50:17 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Foreign key referential actions "
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Right now, referential actions get deferred along with normal\n> checks and run against the state of the database at that time.\n> I think this violates SQL92 11.8 General Rules 4-6 and have some\n> reasoning and proposed ideas towards making it more complient\n> although I don't actually have an implementation in mind for\n> the most correct version. :(\n\nI'm not convinced. 11.8 GR 1 refers to clause 10.6 as specifying\nwhen the referential constraint is to be checked. 10.6 says that\nimmediate-mode constraints are checked \"on completion\" of each SQL\nstatement. (It doesn't say anything about deferred-mode constraints,\nbut I suppose those are checked at end of transaction.)\n\nI think the intended meaning is that the actions caused by the\nconstraint are taken when the constraint is checked, which is\neither end of statement or end of transaction. Which is what\nwe're doing now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 18:26:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key referential actions "
},
{
"msg_contents": "On Tue, 13 Nov 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > Right now, referential actions get deferred along with normal\n> > checks and run against the state of the database at that time.\n> > I think this violates SQL92 11.8 General Rules 4-6 and have some\n> > reasoning and proposed ideas towards making it more complient\n> > although I don't actually have an implementation in mind for\n> > the most correct version. :(\n>\n> I'm not convinced. 11.8 GR 1 refers to clause 10.6 as specifying\n> when the referential constraint is to be checked. 10.6 says that\n> immediate-mode constraints are checked \"on completion\" of each SQL\n> statement. (It doesn't say anything about deferred-mode constraints,\n> but I suppose those are checked at end of transaction.)\n> I think the intended meaning is that the actions caused by the\n> constraint are taken when the constraint is checked, which is either\n> end of statement or end of transaction. Which is what we're doing\n> now.\n\nBut checking the constraint and the actions are not necessarily the\nsame thing, I believe they're meant as two components. There's a\nconstraint which says what is a legal state of the database and\nthere are actions which make modifications to the state of the\ndatabase based on the deletes and updates.\n\nFor example, in GR 5, it uses the present tense. \"and a row\nof the referenced table that has not previously marked for\ndeletion *is* marked for deletion...\" (emph. mine). I'd\nread that to mean that the following occurs at the time. If\nthey wanted it to be at the constraint check time, that should\nbe \"has been\" or \"was\" because other places it says things about\nhow rows that are marked for deletion are effectively deleted\nprior to the checking of any integrity constraint (13.7 GR 4\nfor example) so there'd be no rows remaining that were marked\nfor deletion at that point. I guess I'm just reading it with a\ndifferent set of semantic filters for the language.\n\nBehaviorally I would think that a sequence like:\nbegin;\n insert into pk\n insert into fk\n delete from pk\n insert into pk\n insert into fk\nend;\nwould leave you with one row in each, rather than a row in pk\nand none in fk or one in pk and two in fk.\n\n\n",
"msg_date": "Tue, 13 Nov 2001 15:55:04 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key referential actions "
},
{
"msg_contents": "\nAre there any TODO items here?\n\n---------------------------------------------------------------------------\n\n> \n> Right now, referential actions get deferred along with normal\n> checks and run against the state of the database at that time.\n> I think this violates SQL92 11.8 General Rules 4-6 and have some\n> reasoning and proposed ideas towards making it more complient\n> although I don't actually have an implementation in mind for\n> the most correct version. :(\n> \n> Here are my interpretations:\n> \n> \tGR 4 says that the matching rows (unique and non-unique)\n> are determined immediately before the execution of an SQL\n> statement. We can ignore the fluff about non-unique matching\n> rows for now because I believe that applies to match partial only.\n> \tGR 5 says when there's a delete rule and a row of the\n> referenced table is marked for deletion (if it's not already\n> marked such) then do something based on the action, for example\n> mark matching rows for deletion if it is cascade. This seems\n> to imply the action is supposed to occur immediately, since\n> AFAICS the rows aren't marked for deletion on the commit but\n> rather on the delete itself.\n> \tGR 6 seems to be pretty much the same for update.\n> \n> I think the correct course of action would be if I'm right:\n> *Make referential actions (other than no action) not deferrable\n> and thus initially immediate. This means that you see the\n> cascaded (or nulled or defaulted) results immediately, but\n> I think that satisfies GRs 5 and 6. It also makes the\n> problems of what we can see a little less problematic, but\n> doesn't quite cure them.\n> *To fix the visibility issues I think we'd need to be able to\n> see what rows matched immediately before the statement and\n> then reference those rows later, even if the values that we're\n> keying on have changed. I'm really not sure how we'd do\n> this without a great deal of extra work.\n> An intermediate step towards complience would probably\n> be making sure the row existed before this statement\n> (I think for the fk constraints this means if it was\n> created by another statement or a command before this\n> one) which is wrong if a row that matched before this\n> statement was modified by this statement to a new value\n> that we won't match. Most of these cases would be errors\n> by sql anyway (I think these'd probably be real triggered\n> data change violations) and would be wrong by our current\n> implementation as well.\n> \n> I'm not sure that the intermediate step on the second is\n> actually worthwhile over just waiting and trying to do it\n> right, but if I'm right in what it takes, it's reasonably\n> minimal.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 21:28:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key referential actions"
},
{
"msg_contents": "\nOn Wed, 21 Nov 2001, Bruce Momjian wrote:\n\n> Are there any TODO items here?\n\nNo. Tom and I don't agree on the spec's meaning for this and no one\nelse has really jumped in that I saw.\n\n",
"msg_date": "Wed, 21 Nov 2001 23:00:02 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": true,
"msg_subject": "Re: Foreign key referential actions"
}
] |
[
{
"msg_contents": "At 08:35 13/11/01 +0000, you wrote:\n>In pgAdmin II there is a long running bug that I can't\n>resolve that prevents dropping a database because I can't persuade *all*\n>connections to the specified database to close.\n\nDear all,\n\nThe same problem arises when working in psql after the closing of Php \nsocket connections.\nI have to do a 'service postgresql restart' server-side, and then psql \ntemplate1 < drop database xxxx;\n\nDoes anyone know a simpler solution?\n\nCheers,\nJean-Michel\n",
"msg_date": "Tue, 13 Nov 2001 10:33:08 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Last inserted id "
},
{
"msg_contents": "I believe that in the release notes for the most recent version, it states\nthat this problem is known and can't really be worked around.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Jean-Michel\n> POURE\n> Sent: Tuesday, 13 November 2001 5:33 PM\n> To: pgsql-odbc@postgresql.org\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] [ODBC] Last inserted id\n>\n>\n> At 08:35 13/11/01 +0000, you wrote:\n> >In pgAdmin II there is a long running bug that I can't\n> >resolve that prevents dropping a database because I can't persuade *all*\n> >connections to the specified database to close.\n>\n> Dear all,\n>\n> The same problem arises when working in psql after the closing of Php\n> socket connections.\n> I have to do a 'service postgresql restart' server-side, and then psql\n> template1 < drop database xxxx;\n>\n> Does anyone know a simpler solution?\n>\n> Cheers,\n> Jean-Michel\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Wed, 14 Nov 2001 09:27:18 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Last inserted id "
}
] |
[
{
"msg_contents": "Could someone please give me a quick tip as to where in the source code the\nbit that auto-generates sequence names is?\n\nI plan to patch it to stop it generating conflicting names...\n\nCheers,\n\nChris\n\n",
"msg_date": "Tue, 13 Nov 2001 18:00:59 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Quick question"
},
{
"msg_contents": "On 13 Nov 2001 at 18:00 (+0800), Christopher Kings-Lynne wrote:\n| Could someone please give me a quick tip as to where in the source code the\n| bit that auto-generates sequence names is?\n\nI used cscope (http://cscope.sourceforge.net/) and did a text search\nfor 'implicit sequence'. A bit of backtracking from there led me to\nsrc/backend/parser/anaylyz.c:783\n sname = makeObjectName(cxt->relname, column->colname, \"seq\");\n\nThat function is in the same file, and has some comments related to\nname collision above it. I believe this is where you'll want to work.\n\ncheers. \n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Tue, 13 Nov 2001 07:51:34 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Quick question"
},
{
"msg_contents": "> On 13 Nov 2001 at 18:00 (+0800), Christopher Kings-Lynne wrote:\n> | Could someone please give me a quick tip as to where in the source code the\n> | bit that auto-generates sequence names is?\n> \n> I used cscope (http://cscope.sourceforge.net/) and did a text search\n> for 'implicit sequence'. A bit of backtracking from there led me to\n> src/backend/parser/anaylyz.c:783\n> sname = makeObjectName(cxt->relname, column->colname, \"seq\");\n> \n> That function is in the same file, and has some comments related to\n> name collision above it. I believe this is where you'll want to work.\n\nI think we handled this. Have you tried 7.2 beta2?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 12:33:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Quick question"
},
{
"msg_contents": "On 13 Nov 2001 at 12:33 (-0500), Bruce Momjian wrote:\n| > On 13 Nov 2001 at 18:00 (+0800), Christopher Kings-Lynne wrote:\n| > | Could someone please give me a quick tip as to where in the source code the\n| > | bit that auto-generates sequence names is?\n| > \n| > I used cscope (http://cscope.sourceforge.net/) and did a text search\n| > for 'implicit sequence'. A bit of backtracking from there led me to\n| > src/backend/parser/anaylyz.c:783\n| > sname = makeObjectName(cxt->relname, column->colname, \"seq\");\n| > \n| > That function is in the same file, and has some comments related to\n| > name collision above it. I believe this is where you'll want to work.\n| \n| I think we handled this. Have you tried 7.2 beta2?\n\nI believe the following demonstrates the problem Christopher would\nlike to solve.\n\nbrent=# select version();\n version \n---------------------------------------------------------------\n PostgreSQL 7.2b2 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n(1 row)\n\nbrent=# create table test (id serial);\nNOTICE: CREATE TABLE will create implicit sequence 'test_id_seq' for SERIAL column 'test.id'\nNOTICE: CREATE TABLE / UNIQUE will create implicit index 'test_id_key' for table 'test'\nERROR: Relation 'test_id_seq' already exists\n\n\n ISTM, that these sequences created by way of a SERIAL type should \nbe named \"pg_serial_test_id_HASH\" or similar, since they are system\n(bookkeeping) rels. Also, I /personally/ would like it if the sequence\nwas dropped along with the table using it, provided that no other atts \nin the system are using it. I'm not sure right now if this behavior\nis even feasible.\n\n That said, there is certainly immediate benefit in making sure the \nCREATE TABLE with a SERIAL will succeed if the (initial choice for) \nsequence name already exists.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Tue, 13 Nov 2001 12:59:04 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Quick question"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Could someone please give me a quick tip as to where in the source code the\n> bit that auto-generates sequence names is?\n> I plan to patch it to stop it generating conflicting names...\n\nBefore you start hacking, you might want to discuss your proposed\nchange in behavior with the rest of us. The code is simple; figuring\nout what it really Ought To Do is not so simple (and has been discussed\nbefore, BTW; read the archives).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 13:15:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Quick question "
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> ISTM, that these sequences created by way of a SERIAL type should \n> be named \"pg_serial_test_id_HASH\" or similar, since they are system\n> (bookkeeping) rels. Also, I /personally/ would like it if the sequence\n> was dropped along with the table using it, provided that no other atts \n> in the system are using it.\n\nI think there are two completely different issues here: one is what\nname to use for the auto-generated sequence, and the other is whether\n(when) to drop the sequence if the table is dropped. Fixing the\nlatter issue would reduce but not entirely eliminate the issue of\nname collisions.\n\nIIRC, the major objection to the notion of adding random hash characters\nto the auto-generated names was that people wanted to be able to predict\nthe names. There was a long discussion about this a couple years back\nwhen we settled on the present algorithm. Please search the archives\na bit if you want to re-open that issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 13:37:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Quick question "
},
{
"msg_contents": "> > That function is in the same file, and has some comments related to\n> > name collision above it. I believe this is where you'll want to work.\n>\n> I think we handled this. Have you tried 7.2 beta2?\n\nOh, ok. Hmmm - haven't really checked to tell the truth. All I see in the\nHISTORY is: \"Truncate extra-long sequence names to a reasonable value (Tom)\"\nThat doesn't seem to be it. I'll try it when I get home.\n\nChris\n\n",
"msg_date": "Wed, 14 Nov 2001 09:32:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Quick question"
},
{
"msg_contents": "> I think there are two completely different issues here: one is what\n> name to use for the auto-generated sequence, and the other is whether\n> (when) to drop the sequence if the table is dropped. Fixing the\n> latter issue would reduce but not entirely eliminate the issue of\n> name collisions.\n\nHmmm? No way - see below.\n\n> IIRC, the major objection to the notion of adding random hash characters\n> to the auto-generated names was that people wanted to be able to predict\n> the names. There was a long discussion about this a couple years back\n> when we settled on the present algorithm. Please search the archives\n> a bit if you want to re-open that issue.\n\nI will search the archives, but I'll explain my thoughts here as well.\n\nWell, what's the problem with appending a number - that's how index names\nget generated.\n\nThis is my horrible schema that forced me to abandon using SERIAL in favour\nof explicit CREATE SEQUENCE statements:\n\nBEGIN;\n\n-- Categories of foods\nCREATE TABLE medidiets_categories_foods (\n\tcategory_id SERIAL,\n\tdescription varchar(255) NOT NULL,\n\tPRIMARY KEY(category_id)\n);\n\n-- Categories of recipes\nCREATE TABLE medidiets_categories_rec (\n\tcategory_id SERIAL,\n\tdescription varchar(255) NOT NULL,\n\tPRIMARY KEY(category_id)\n);\n\nCOMMIT;\n\nBoth of these SERIALs are given the same name - it's a real pain.\n\nChris\n\n",
"msg_date": "Wed, 14 Nov 2001 09:51:08 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Quick question "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Both of these SERIALs are given the same name - it's a real pain.\n\nA large part of the problem would go away if we just doubled\nNAMEDATALEN, which is on the to-do-soon list anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 21:12:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Quick question "
},
{
"msg_contents": "> > I think there are two completely different issues here: one is what\n> > name to use for the auto-generated sequence, and the other is whether\n> > (when) to drop the sequence if the table is dropped. Fixing the\n> > latter issue would reduce but not entirely eliminate the issue of\n> > name collisions.\n> \n> Hmmm? No way - see below.\n> \n> > IIRC, the major objection to the notion of adding random hash characters\n> > to the auto-generated names was that people wanted to be able to predict\n> > the names. There was a long discussion about this a couple years back\n> > when we settled on the present algorithm. Please search the archives\n> > a bit if you want to re-open that issue.\n> \n> I will search the archives, but I'll explain my thoughts here a well.\n> \n> Well, what's the problem with appending a number - that's how index names\n> get generated.\n> \n> This is my horrible schema that forced me to abandon using SERIAL in favour\n> of explicit CREATE SEQUENCE statements:\n\nAdded to TODO:\n\n * Have SERIAL generate non-colliding sequence names when we have\n auto-destruction\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 20:34:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Quick question"
}
] |
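The collision Chris ran into above comes from truncating "<table>_<column>_seq" to fit in NAMEDATALEN. The following Python sketch illustrates the failure mode only; it is not the backend's actual makeObjectName() logic (which trims the table and column parts more carefully), and the truncation rule here is a deliberate simplification:

```python
NAMEDATALEN = 32  # compile-time identifier length limit in 7.x-era PostgreSQL


def serial_sequence_name(table, column, namedatalen=NAMEDATALEN):
    # Simplified: build "<table>_<column>_seq" and truncate only the table
    # part so the whole name fits in namedatalen - 1 bytes.  The real
    # backend code trims the table and column parts alternately, but the
    # failure mode is the same.
    overhead = 1 + len(column) + len("_seq")  # "_" + column + "_seq"
    return table[: namedatalen - 1 - overhead] + "_" + column + "_seq"


a = serial_sequence_name("medidiets_categories_foods", "category_id")
b = serial_sequence_name("medidiets_categories_rec", "category_id")
# Both truncate to "medidiets_categ_category_id_seq", so the second
# CREATE TABLE's implicit CREATE SEQUENCE collides with the first.
```

Doubling NAMEDATALEN to 64, as Tom suggests, moves the truncation point past the part where these two table names differ, so the generated names no longer collide.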
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n(This is using 7.2b1)\n\nAnyone know why I am getting the \"internal form\" of the \npartial-index predicate? In other words, instead of \ngetting something like this (thanks Tom):\n\nregression=# \\d apple\n Index \"apple\"\n Column | Type\n- ---------+---------\n topping | integer\nhash\nIndex predicate: (topping > 99)\n\n\nI get something like this:\n\n Index \"apple\"\n Column | Type\n- ---------+---------\n topping | integer\nhash for table \"pizza\" WHERE (topping > 2000)\nIndex predicate: ({ EXPR :typeOid 16 :opType op :oper { OPER :opno 521 :opid 14\n7 :opresulttype 16 } :args ({ VAR :varno 1 :varattno 3 :vartype 23 :vartypmod -1\n :varlevelsup 0 :varnoold 1 :varoattno 3} { CONST :consttype 23 :constlen 4 :co\nnstbyval true :constisnull false :constvalue 4 [ 99 0 0 0 ] })})\n\nDoing a:\n\nSELECT c.relname, i.indpred\nFROM pg_index i, pg_class c\nWHERE c.oid = i.indexrelid\nand i.indpred like '(%'\n\nreveals that this is happening to every partial index I \ncreate.\n\nI'm also wondering if we even need the \"Index predicate:\" \nsection at all? When it works properly, will it ever give \nmore information than what the tail end of pg_get_indexdef \nreturns? If it gives the same, is one preferred over the other?\n\nThanks,\nGreg Sabino Mullane\ngreg@turnstep.com\nPGP Key: 0x14964AC8 200111131146\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBO/FOorybkGcUlkrIEQL9KACgyJu7YFWCjJQPwEL32yjhmegocRYAn1iC\n4djb4ZoOkrSDePXJ6rsQcSCW\n=M66f\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Tue, 13 Nov 2001 11:45:26 -0500 (EST)",
"msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>",
"msg_from_op": true,
"msg_subject": "Detailed index predicate with \\d on indexes in psql"
},
{
"msg_contents": "\"Greg Sabino Mullane\" <greg@turnstep.com> writes:\n> Anyone know why I am getting the \"internal form\" of the \n> partial-index predicate?\n\nIf you look in describe.c, you'll see that what psql is printing is\nthe result of\n\tSELECT pg_get_expr(i.indpred, i.indrelid) as indpred\n\tFROM pg_index i\nwhich should yield the exact same text as what pg_get_indexdef offers\nin WHERE. Have you mucked with this SELECT? Are you perhaps trying\nto run against a pre-7.2 server (pg_get_expr is new in 7.2)?\n\n> I'm also wondering if we even need the \"Index predicate:\" \n> section at all?\n\nNot if you intend to print the results of pg_get_indexdef instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 12:55:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed index predicate with \\d on indexes in psql "
}
] |
[
{
"msg_contents": "Today I resolved to leave major redesigns aside and implement privileges\nfor functions using the existing mechanisms. I added a proacl column of\ntype aclitem[] to pg_proc (internally, the EXECUTE privilege uses the same\nbit as SELECT), added a permission check in the executor, and that was it.\nGetting the GRANT and REVOKE commands to work will take a bit still, but\nas long as you're willing to tweak proacl manually you have a working\nsystem.\n\nSo here are a couple of details to think about:\n\n* I didn't bother about operators -- an operator has the same permission\nas the underlying function. This is the most sensible thing to do, IMHO,\nbut in some cases it would make for an unpretty user interface. If\nrequested I could imagine making a GRANT ON OPERATOR command that still\nchanges the permissions of the function.\n\n* How to handle built-in functions? For tables we have a hack that treats\nnames beginning with \"pg_\" specially, which obviously won't work for\nfunctions. Option 1 is to treat a NULL proacl as world access, option 2\nis to initialize all builtin proacls with explicit world access.\nHowever, option 1 would introduce a weird behavior that the first GRANT\ncommand will mysteriously revoke the world access, similar to some bug\nwe've already had with relation acls. I'm currently giving all functions\nwith oids under 16384 a free pass, which is fastest but not really\nsatisfactory.\n\n* Backward compatibility: If we go for option 1 above then there are no\nbackward compatibility problems. In other cases we might have some schemas\nwhich won't work anymore without explicitly granting privileges on\nfunctions.\n\n* For tables, we have the rule that tables introduced into the query by a\nrule have their permissions checked as the user that created the rule. I\ndon't see this easily possible for functions. Moreover, it might not even\nmake sense when an operator is the join operator between two tables.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 13 Nov 2001 19:15:09 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Privileges for functions"
}
] |
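The "option 1" pitfall Peter describes — a NULL proacl meaning world access, so the first explicit GRANT silently revokes everyone else — can be modeled in a few lines. This is a sketch of the semantics under discussion; the function and the list-of-names ACL representation are made up for illustration and are not the aclitem[] format:

```python
def can_execute(proacl, user):
    # "Option 1" semantics: a missing (None) ACL means world access.
    # The list-of-names representation is illustrative, not aclitem[].
    if proacl is None:
        return True
    return user in proacl


# With no ACL stored, everyone may execute the function.
assert can_execute(None, "alice") and can_execute(None, "bob")

# The first GRANT materializes an explicit list -- and everybody
# not named in it has silently lost access.
proacl = ["alice"]
assert can_execute(proacl, "alice")
assert not can_execute(proacl, "bob")
```

This is the same surprise the relation-ACL code had to work around, which is why Peter leans away from option 1.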
[
{
"msg_contents": "Hi,\n\nI have a lot of IP addresses and would like to aggregate them\nby network. Are there aggregate functions for this?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 13 Nov 2001 21:48:12 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "aggregate functions for inet ?"
},
{
"msg_contents": "> Hi,\n> \n> I have a lot of IP addresses and would like to aggregate them\n> by networks. Is there aggregate functions for this ?\n\nIs this a valid TODO item?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 20:15:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: aggregate functions for inet ?"
}
] |
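Outside the database, the aggregation Oleg is after amounts to grouping addresses by an enclosing network of some chosen prefix length. A sketch using Python's standard ipaddress module (the in-database equivalent would be a GROUP BY on an expression that masks each inet down to its network; which helper functions exist for that in 7.x-era PostgreSQL would need checking):

```python
import ipaddress
from collections import Counter


def count_by_network(addresses, prefixlen=24):
    # Mask each address down to its /prefixlen network and count how
    # many of the input addresses fall into each one.
    counts = Counter()
    for a in addresses:
        net = ipaddress.ip_network(f"{a}/{prefixlen}", strict=False)
        counts[str(net)] += 1
    return dict(counts)


by_net = count_by_network(["10.0.1.5", "10.0.1.200", "10.0.2.7"])
# → {"10.0.1.0/24": 2, "10.0.2.0/24": 1}
```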
[
{
"msg_contents": "Hi dear all,\n\nNow, we need to know if it is possible from the ODBC interface to access to\ndiagnostic registers like \"GET DIAGNOSTICS rc =ROW_COUNT\". It seems not to\nwork from odbc, maybe it needs some changes to work. Can anybody help?\nThanks.\n\n\"Henshall, Stuart\" wrote:\n> I believe LOCK TABLE IN EXCLUSIVE MODE should block everything but\n> selects, but it locks for the entire transaction I think. Maybe in tcl you\n> could create your own locking using global variables. If the spin lock code\n> is available to user functions you might be able to use that.\n> Alternatively, inside a plpgsql function, could you use something like this:\n> \n> INSERT INTO ex_tbl (a,b,pk) SELECT var1 AS a,var2 AS b,var3 AS pk WHERE NOT\n> EXISTS (SELECT * FROM ex_tbl WHERE pk=var3) LIMIT 1;\n> GET DIAGNOSTICS rc =ROW_COUNT;\n> \n> where pk is the primary key of ex_tbl.\n> if rc=0 then you'd know the primary key already existed and if rc=1 then it\n> would have inserted successfully\n> - Stuart\n> \n> \"Haroldo Stenger\" wrote:\n> \n> > \"Matthew T. O'Connor\" wrote:\n> > >\n> > > > A solution, could be to query for the existence of the PK, just before\n> > the\n> > > > insertion. But there is a little span between the test and the\n> > > > insertion, where another insertion from another transaction could void\n> > > > the existence test. Any clever ideas on how to solve this? Using\n> > > > triggers maybe? Other solutions?\n> > > >\n> > >\n> > > All you need to do is use a sequence. If you set the sequence to be the\n> > > primary key with a default value of nextval(seq_name) then you will\n> > never\n> > > have a collision. Alternately if you need to know that number before you\n> > > start inserting you can select nextval(seq_name) before you insert and\n> > use\n> > > that. By the way the datatype serial automates exactly what I\n> > described.\n> >\n> > Yes, but there are situations where a sequenced PK isn't what is needed.\n> > Imagine a DW app, where composed PKs such as (ClientNum, Year, Month,\n> > ArticleNum) in a table which has ArticleQty as a secondary field are\n> > used, in order to consolidate detail records from other tables. There,\n> > the processing cycle goes like checking for the existence of the PK, if\n> > it exists, add ArticleQtyDetail to ArticleQty, and update; and if it\n> > doesn't exist, insert the record with ArticleQtyDetail as the starting\n> > value of ArticleQty. See it? Then, if between the \"select from\" and the\n> > \"insert into\", another process in the system (due to parallel processing\n> > for instance) inserts a record with the same key, then the first\n> > transaction would cancel, forcing redoing of all the processing. So,\n> > sort of atomicity of the check?update:insert operation is needed. How\n> > can that be easily implemented using locks and triggers for example?\n> >\n> > Regards,\n> > Haroldo.\n>\n",
"msg_date": "Tue, 13 Nov 2001 18:10:52 -0600",
"msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>",
"msg_from_op": true,
"msg_subject": "Re: Abort state on duplicated PKey in transactions"
}
] |
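The race Haroldo describes is the classic check-then-act problem: between the existence test and the insert, another transaction can create the same key. Whatever the database-side mechanism (a table lock, or catching the duplicate-key error and retrying as an update), the fix is to make the test and the write a single atomic step. Here is a Python sketch with a lock standing in for the table-level lock; the key structure mirrors his (ClientNum, Year, Month, ArticleNum) example and is otherwise made up:

```python
import threading

table = {}                    # (client, year, month, article) -> qty
table_lock = threading.Lock()


def upsert_qty(key, qty_detail):
    # Atomic check-then-update-or-insert: holding the lock across both
    # the existence test and the write closes the window in which a
    # concurrent writer could insert the same key.
    with table_lock:
        if key in table:
            table[key] += qty_detail  # consolidate into the existing row
        else:
            table[key] = qty_detail   # first detail record for this key


upsert_qty(("c1", 2001, 11, "a1"), 5)
upsert_qty(("c1", 2001, 11, "a1"), 3)
# table[("c1", 2001, 11, "a1")] is now 8
```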
[
{
"msg_contents": "Tatsuo found the following paragraph in the docs, in datatype.sgml:\n\n---------------------------------------------------------------------------\n\n<literal>'now'</literal> is resolved when the value is inserted,\n<literal>'current'</literal> is resolved every time the value is\nretrieved. So you probably want to use <literal>'now'</literal> in most\napplications. (Of course you <emphasis>really</emphasis> want to use\n<literal>CURRENT_TIMESTAMP</literal>, which is equivalent to\n<literal>'now'</literal>.)\n\n---------------------------------------------------------------------------\n\nThis seems wrong to me. What does it mean when it says 'current' is\nresolved every time the value is retrieved?\n\nAlso, we mention 'now' a lot in the documentation. Should we change\nthose to CURRENT_TIMESTAMP? I have changed that in the FAQ.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 20:38:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Use of 'now'"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> This seems wrong to me. What does it mean when it says 'current' is\n> resolved every time the value is retrieved?\n\nNothing of interest anymore, since 'current' has been removed as of 7.2.\nHowever, Thomas has yet to commit any docs updates for his recent\ndatetime-related changes ... including that one ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 21:27:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use of 'now' "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > This seems wrong to me. What does it mean when it says 'current' is\n> > resolved every time the value is retrieved?\n> \n> Nothing of interest anymore, since 'current' has been removed as of 7.2.\n> However, Thomas has yet to commit any docs updates for his recent\n> datetime-related changes ... including that one ...\n\nSeems it is still in there somewhere:\n\n\ttest=> create table bb (x timestamp default 'current', y int);\n\tCREATE\n\ttest=> insert into bb (y) values (1);\n\tINSERT 16591 1\n\ttest=> select * from bb;\n\t x | y \n\t-------------------------------+---\n\t 2001-11-13 21:45:22.473896-05 | 1\n\t(1 row)\n\nDo you mean that 'current' is now the same as 'now'? :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 21:46:32 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Nothing of interest anymore, since 'current' has been removed as of 7.2.\n>> However, Thomas has yet to commit any docs updates for his recent\n>> datetime-related changes ... including that one ...\n\n> Seems it is still in there somewhere:\n\n> \ttest=> create table bb (x timestamp default 'current', y int);\n\nHmm. It was *supposed* to be removed entirely, but possibly what\nThomas actually did was to continue to accept the keyword as equivalent\nto 'now'. Thomas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Nov 2001 21:52:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use of 'now' "
},
{
"msg_contents": "On Tue, 13 Nov 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > This seems wrong to me. What does it mean when it says 'current' is\n> > > resolved every time the value is retrieved?\n> >\n> > Nothing of interest anymore, since 'current' has been removed as of 7.2.\n> > However, Thomas has yet to commit any docs updates for his recent\n> > datetime-related changes ... including that one ...\n>\n> Seems it is still in there somewhere:\n>\n> \ttest=> create table bb (x timestamp default 'current', y int);\n> \tCREATE\n> \ttest=> insert into bb (y) values (1);\n> \tINSERT 16591 1\n> \ttest=> select * from bb;\n> \t x | y\n> \t-------------------------------+---\n> \t 2001-11-13 21:45:22.473896-05 | 1\n> \t(1 row)\n>\n> Do you mean that 'current' is now the same as 'now'? :-)\n\nISTM that 'current' when used as a default meant the time the table\nwas created but now() would (as one would expect) return the current\ndatetime.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 13 Nov 2001 22:15:22 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "> > Seems it is still in there somewhere:\n> >\n> > \ttest=> create table bb (x timestamp default 'current', y int);\n> > \tCREATE\n> > \ttest=> insert into bb (y) values (1);\n> > \tINSERT 16591 1\n> > \ttest=> select * from bb;\n> > \t x | y\n> > \t-------------------------------+---\n> > \t 2001-11-13 21:45:22.473896-05 | 1\n> > \t(1 row)\n> >\n> > Do you mean that 'current' is now the same as 'now'? :-)\n> \n> ISTM that 'current' when used as a default meant the time the table\n> was created but now() would (as one woule expect) return the current\n> datetime.\n\nYou would think so, but in fact 'current' does change for each insert:\n\n\ttest=> create table dd (x timestamp default 'current', y int);\n\tCREATE\n\ttest=> insert into dd (y) values (1);\n\tINSERT 16596 1\n\ttest=> insert into dd (y) values (1);\n\tINSERT 16597 1\n\ttest=> select * from dd;\n\t x | y \n\t-------------------------------+---\n\t 2001-11-13 22:39:18.283834-05 | 1\n\t 2001-11-13 22:39:19.196797-05 | 1\n\t(2 rows)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 22:40:13 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Nothing of interest anymore, since 'current' has been removed as of 7.2.\n> >> However, Thomas has yet to commit any docs updates for his recent\n> >> datetime-related changes ... including that one ...\n> \n> > Seems it is still in there somewhere:\n> \n> > \ttest=> create table bb (x timestamp default 'current', y int);\n> \n> Hmm. It was *supposed* to be removed entirely, but possibly what\n> Thomas actually did was to continue to accept the keyword as equivalent\n> to 'now'. Thomas?\n\n[ CC'ing to hackers because this is getting into code problems. ]\n\nHere's another inconsistency that Tatsuo found:\n\t\n\ttest=> create table ff (x time default 'current_timestamp');\n\tERROR: Bad time external representation 'current_timestamp'\n\ttest=> create table ff (x time default 'current');\n\tERROR: Bad time external representation 'current'\n\ttest=> create table ff (x time default 'now');\n\tCREATE\n\ttest=> select current_timestamp;\n\t timestamptz \n\t-------------------------------\n\t 2001-11-13 22:49:50.607401-05\n\t(1 row)\n\nYou can default a time to now, but not to current or current_timestamp.\n\nI believe this is happening because current is implemented as special\ntimezones in datetime.c and timestamp.c, and current_timestamp is\nimplemented in gram.y, while 'now' is a function.\n\nAnyway, looks like confusion that should be fixed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Nov 2001 22:58:48 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "...\n> Hmm. It was *supposed* to be removed entirely, but possibly what\n> Thomas actually did was to continue to accept the keyword as equivalent\n> to 'now'. Thomas?\n\nNot sure where \"supposed to\" came from ;)\n\nPrevious versions of PostgreSQL can and will generate dump files which\nhave 'current'. I did make it equivalent to 'now' for at least the 7.2\nseries of releases.\n\n - Thomas\n",
"msg_date": "Wed, 14 Nov 2001 07:08:34 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "> [ CC'ing to hackers because this is getting into code problems. ]\n\nNot sure I agree with that conclusion yet.\n\n> Here's another inconsistency that Tatsuo found:\n> test=> create table ff (x time default 'current_timestamp');\n> ERROR: Bad time external representation 'current_timestamp'\n\nNever was a feature, and not documented as such. CURRENT_TIMESTAMP (and\nCURRENT_DATE and CURRENT_TIME; note lack of quotes) are defined by SQL9x\nas specialty constants (they have some other term for them afaicr).\n\n> test=> create table ff (x time default 'current');\n> ERROR: Bad time external representation 'current'\n\nNever was a feature, but sure seems like it should have been. How have\nwe missed all of those complaints about this over the last six years? ;)\nWe'll guess that 'current' was not one of the most utilized features of\nthe date/time types (which is one reason why I supported removing it).\n\n> test=> create table ff (x time default 'now');\n> CREATE\n> test=> select current_timestamp;\n> timestamptz\n> -------------------------------\n> 2001-11-13 22:49:50.607401-05\n> (1 row)\n> \n> You can default a time to now, but not to current or current_timestamp.\n> \n> I believe this is happening because current is implemented as special\n> timezones in datetime.c and timestamp.c, and current_timestamp is\n> implemented in gram.y, while 'now' is a function.\n\nNot sure what special time zones have to do with it (did you mean\n\"special timestamps\"?). CURRENT_xxx has to be implemented in gram.y\nsince they are keywords, not quoted strings. 'now' is not a function,\nthough now() is; both 'now' and 'current' are special cases in the input\nparser for the date/time data types, with one inconsistency as noted\nabove. That will be fixed.\n\n> Anyway, looks like confusion that should be fixed.\n\nThe documentation covers some of this, and Tom has pointed out\n(presumably to encourage a contribution) that it hasn't been updated yet\nfor the most recent changes for 7.2. I expect to do so in the next\ncouple of weeks.\n\n - Thomas\n",
"msg_date": "Wed, 14 Nov 2001 07:23:30 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "On Tue, 13 Nov 2001, Bruce Momjian wrote:\n\n> > > Seems it is still in there somewhere:\n> > >\n> > > \ttest=> create table bb (x timestamp default 'current', y int);\n> > > \tCREATE\n> > > \ttest=> insert into bb (y) values (1);\n> > > \tINSERT 16591 1\n> > > \ttest=> select * from bb;\n> > > \t x | y\n> > > \t-------------------------------+---\n> > > \t 2001-11-13 21:45:22.473896-05 | 1\n> > > \t(1 row)\n> > >\n> > > Do you mean that 'current' is now the same as 'now'? :-)\n> >\n> > ISTM that 'current' when used as a default meant the time the table\n> > was created but now() would (as one woule expect) return the current\n> > datetime.\n>\n> You would think so, but in fact 'current' does change for each insert:\n>\n> \ttest=> create table dd (x timestamp default 'current', y int);\n> \tCREATE\n> \ttest=> insert into dd (y) values (1);\n> \tINSERT 16596 1\n> \ttest=> insert into dd (y) values (1);\n> \tINSERT 16597 1\n> \ttest=> select * from dd;\n> \t x | y\n> \t-------------------------------+---\n> \t 2001-11-13 22:39:18.283834-05 | 1\n> \t 2001-11-13 22:39:19.196797-05 | 1\n> \t(2 rows)\n>\n>\n\nOr this:\n\n PostgreSQL 7.0.3 on i386-unknown-freebsdelf4.2, compiled by gcc 2.95.2\n(1 row)\n\ntemplate1=# create table dd (x timestamp default 'current', y int);\nCREATE\ntemplate1=# insert into dd (y) values (1);\nINSERT 1407083 1\ntemplate1=# insert into dd (y) values (1);\nINSERT 1407084 1\ntemplate1=# select * from dd;\n x | y\n---------+---\n current | 1\n current | 1\n(2 rows)\n\n\nMust be since 7.0.3?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 14 Nov 2001 05:59:46 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "...\n> template1=# insert into dd (y) values (1);\n> template1=# select * from dd;\n> x | y\n> ---------+---\n> current | 1\n> Must be since 7.0.3?\n\nPrior to 7.2 (and up to two months ago -- ?? haven't checked the dates)\n'current' was stored as a special value. It was only evaluated as the\ncurrent transaction time when math or some other transformation was\ninvolved.\n\nThe feature dates from sometime after 1987 and sometime before 1995\n(back when gods roamed the earth, etc etc).\n\nRegarding the TIME data type: there was never a reserved value defined\nfor that type, so the feature was never available for it. Since\n'current' and 'now' are synonymous, it is a one-liner to add recognition\nof 'current' to that type. I've got patches...\n\n - Thomas\n",
"msg_date": "Wed, 14 Nov 2001 14:11:05 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "> Regarding the TIME data type: there was never a reserved value defined\n> for that type, so the feature was never available for it. Since\n> 'current' and 'now' are synonymous, it is a one-liner to add recognition\n> of 'current' to that type. I've got patches...\n\nYou know I am totally lost with the date/time stuff. I am just pointing\nout stuff and guessing. Please do whatever you think is appropriate.\n\nThanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Nov 2001 11:10:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Use of 'now'"
},
{
"msg_contents": "\n\nBruce Momjian wrote:\n\n>>>I don't think this is a good idea. If someone really relied on 'current'\n>>>for his application, substituting 'now' for it is not going to make things\n>>>better. If it's done silently it will definitely make things worse.\n>>>\n>>I hadn't thought about it, but I believe Peter is right. Rejecting\n>>'current' is better than silently translating it to 'now'. We have\n>>removed this feature and we shouldn't try to paper over the fact.\n>>\n>\n>My only question is how many people were using current thinking it\n>functioned as 'now'? Was current ever a desired feature?\n>\nThe only times I have used current were by mistake, when PG interpreted\nsomething starting with current as current. I suspect that everybody who\nhas been using current thinking it functions as 'now' has either found\nout it does not or does not really care ;)\n-----------\nHannu\n\n\n\n",
"msg_date": "Thu, 15 Nov 2001 19:54:44 +0500",
"msg_from": "Hannu Krosing <hannu@sid.tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Use of 'now'"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Previous versions of PostgreSQL can and will generate dump files which\n> have 'current'. I did make it equivalent to 'now' for at least the 7.2\n> series of releases.\n\nI don't think this is a good idea. If someone really relied on 'current'\nfor his application, substituting 'now' for it is not going to make things\nbetter. If it's done silently it will definitely make things worse.\n\nIf that someone replays his dump and sees \"invalid date/time value\n'current'\" then he knows he's got something to fix -- and he has to fix\nsomething anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 15 Nov 2001 17:15:48 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Use of 'now'"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Thomas Lockhart writes:\n>> Previous versions of PostgreSQL can and will generate dump files which\n>> have 'current'. I did make it equivalent to 'now' for at least the 7.2\n>> series of releases.\n\n> I don't think this is a good idea. If someone really relied on 'current'\n> for his application, substituting 'now' for it is not going to make things\n> better. If it's done silently it will definitely make things worse.\n\nI hadn't thought about it, but I believe Peter is right. Rejecting\n'current' is better than silently translating it to 'now'. We have\nremoved this feature and we shouldn't try to paper over the fact.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Nov 2001 11:36:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Use of 'now' "
},
{
"msg_contents": "> > I don't think this is a good idea. If someone really relied on 'current'\n> > for his application, substituting 'now' for it is not going to make things\n> > better. If it's done silently it will definitely make things worse.\n> \n> I hadn't thought about it, but I believe Peter is right. Rejecting\n> 'current' is better than silently translating it to 'now'. We have\n> removed this feature and we shouldn't try to paper over the fact.\n\nMy only question is how many people were using current thinking it\nfunctioned as 'now'? Was current ever a desired feature?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Nov 2001 11:39:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Use of 'now'"
}
] |
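The pre-7.2 semantics this thread untangles — 'now' resolved when the value is stored, 'current' kept as a sentinel and resolved on every read — can be modeled directly. This sketches only the behavior being described, not the actual datetime.c implementation:

```python
import datetime


def store(value):
    # 'now' is resolved when the value is stored; 'current' is kept as
    # a sentinel and only resolved when the value is read back.
    if value == "now":
        return datetime.datetime.now()
    if value == "current":
        return "current"
    return value


def read(stored):
    # Pre-7.2, a stored 'current' evaluated to the read-time clock, so
    # the same row showed a different timestamp on every SELECT.
    if stored == "current":
        return datetime.datetime.now()
    return stored


frozen = store("now")        # a fixed timestamp from store time
floating = store("current")  # the sentinel; read(floating) keeps moving
```

Vince's 7.0.3 session, where `select * from dd` shows the literal word "current", corresponds to inspecting the stored sentinel rather than going through read().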
[
{
"msg_contents": "\nAFAIK the regress test database is working again. If anyone's having\na problem with it let me know.\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 13 Nov 2001 21:27:12 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "regress test db"
},
{
"msg_contents": "On Tue, 13 Nov 2001 21:27:12 -0500 (EST)\nVince Vielhaber <vev@michvhf.com> wrote:\n\n> AFAIK the regress test database is working again. If anyone's having\n> a problem with it let me know.\n\n Hi, Vince\n \n \n Not a problem, but...\n \n I pondered whether the several examples, which is used nested\n EXCEPT/INTERSECT/UNION among three SELECTs at the \"Operator precedence\n and (((((extra))))) parentheses\" in the ../regress/sql/union.sql,\n have the case like a query returning an incorrect result in v7.1.* \n (e.g. See below) or not. Finally, the reconfirmation may lead me to\n conclude that the examples can't reflect the behavior of the fixed\n nested EXCEPT/INTERSECT(Thanks Tom). I wish there was something new\n to be able to demonstrate the fixed. \n \n \n\n SELECT q1 FROM int8_tbl\n INTERSECT ALL\n SELECT q2 FROM int8_tbl\n INTERSECT\n SELECT q2 FROM int8_tbl;\n\n q1\n -----\n 123 <--NG\n (1 row)\n\n\n SELECT q1 FROM int8_tbl\n INTERSECT\n SELECT q2 FROM int8_tbl \n INTERSECT\n SELECT q2 FROM int8_tbl;\n\n q1 \n -----\n 123 <--NG\n (1 row)\n\n\n SELECT q1 FROM int8_tbl \n INTERSECT ALL\n SELECT q2 FROM int8_tbl\n EXCEPT\n SELECT q2 FROM int8_tbl WHERE q2 < 1000;\n\n q1\n ----\n (0 rows) <--NG\n\n\n SELECT q1 FROM int8_tbl\n INTERSECT\n SELECT q2 FROM int8_tbl\n EXCEPT ALL\n SELECT q2 FROM int8_tbl WHERE q2 < 1000;\n\n q1\n ----\n (0 rows) <--NG\n\n\n\n TIA,\n Masaru Sugawara\n\n",
"msg_date": "Sat, 17 Nov 2001 15:31:32 +0900",
"msg_from": "Masaru Sugawara <rk73@echna.ne.jp>",
"msg_from_op": false,
"msg_subject": "Re: regress test db"
}
] |
[
{
"msg_contents": "Hi!\n\nI could easily find chapters like \"API reference\" in Interbase and Postgres\nmanuals.\n\nWhat can be considered as API reference in Postgres: libpq, libpq++ ?\n\nActually, I am looking for an exact information about ability of using 2\nphase commited transaxctions in Postgres and I was given an advice to ask\nabout it here.\n\n\n",
"msg_date": "Wed, 14 Nov 2001 10:59:56 +0500",
"msg_from": "\"Ivan Babikov\" <iab@qms.e-burg.ru>",
"msg_from_op": true,
"msg_subject": "Is there an API reference for Postgres? What about 2-phase\n\ttransactions?"
}
] |
[
{
"msg_contents": "Ciao,\nI'm testing 7.2b2 and I see it is much slower when I use like on column \nwith indexes,\nthe explain sais it is a sequantial scan (same as 7.1.3), but the time \nit get to complete is much longer.\n\nI got the problem only on SMP machine:\nexplain on 7.1.3 say it is a seq scan with cost 18837\nexplain on 7.2b2 say it is a seq scan 22.5\n\non single cpu 7.2b2 says:\nIndex scan with cost 29 and it is very fast!!\n\nAll platform are linux/i386 using redhat 7.2 and latest libraries\n\nI'm testing latest snapshot just now.\n\nthanks\n\n \n\n-- \n-------------------------------------------------------\nGiuseppe Tanzilli\t\tg.tanzilli@gruppocsf.com\nCSF Sistemi srl\t\t\tphone ++39 0775 7771\nVia del Ciavattino \nAnagni FR\nItaly\n\n\n\n",
"msg_date": "Wed, 14 Nov 2001 12:26:17 +0100",
"msg_from": "Giuseppe Tanzilli - CSF <g.tanzilli@gruppocsf.com>",
"msg_from_op": true,
"msg_subject": "7.2b2 problem using like 'XXX%' sequential scan "
},
{
"msg_contents": "Yes, I vacuumed/reindexed the db,\njust tested the latest snapshot, same problem.\n\nbye\n\nAntonio Fiol Bonn�n wrote:\n\n>Did you VACUUM ANALYZE on your SMP machine?\n>\n>Antonio\n>\n>Giuseppe Tanzilli - CSF wrote:\n>\n>>Ciao,\n>>I'm testing 7.2b2 and I see it is much slower when I use like on column\n>>with indexes,\n>>the explain sais it is a sequantial scan (same as 7.1.3), but the time\n>>it get to complete is much longer.\n>>\n>>I got the problem only on SMP machine:\n>>explain on 7.1.3 say it is a seq scan with cost 18837\n>>explain on 7.2b2 say it is a seq scan 22.5\n>>\n>>on single cpu 7.2b2 says:\n>>Index scan with cost 29 and it is very fast!!\n>>\n>>All platform are linux/i386 using redhat 7.2 and latest libraries\n>>\n>>I'm testing latest snapshot just now.\n>>\n>>thanks\n>>\n>>\n>>\n>>--\n>>-------------------------------------------------------\n>>Giuseppe Tanzilli g.tanzilli@gruppocsf.com\n>>CSF Sistemi srl phone ++39 0775 7771\n>>Via del Ciavattino\n>>Anagni FR\n>>Italy\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: Have you checked our extensive FAQ?\n>>\n>>http://www.postgresql.org/users-lounge/docs/faq.html\n>>\n>\n>\n\n\n-- \n-------------------------------------------------------\nGiuseppe Tanzilli\t\tg.tanzilli@gruppocsf.com\nCSF Sistemi srl\t\t\tphone ++39 0775 7771\nVia del Ciavattino \nAnagni FR\nItaly\n\n\n\n\n",
"msg_date": "Wed, 14 Nov 2001 12:59:28 +0100",
"msg_from": "Giuseppe Tanzilli - CSF <g.tanzilli@gruppocsf.com>",
"msg_from_op": true,
"msg_subject": "Re: 7.2b2 problem using like 'XXX%' sequential scan"
},
{
"msg_contents": "...\n> All platform are linux/i386 using redhat 7.2 and latest libraries\n\nI'm guessing that one platform has \"locale\" enabled (or some combination\nof multibyte parameters?), whereas the other does not. SMP will not make\na difference in configuration or optimizer choices.\n\nThe good news is that you have *one* machine which does what you want,\nso we know that we can get the other one doing that too :)\n\n - Thomas\n",
"msg_date": "Wed, 14 Nov 2001 14:00:35 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b2 problem using like 'XXX%' sequential scan"
},
{
"msg_contents": "(back on-list)\n\n> You got it, I have locale anebled on all platform,\n> but I see a different locale setting:\n> \n> it is working where I have LC_* = it_IT\n> not working where I have LC_* = it_IT@euro\n> what I can do ?\n> I tried to start postgresql 7.2b2 with the working locale but nothing\n> appened, I must initdb ??\n\nI would try that. If you stay on list, then someone who actually *knows*\nis likely to answer. I'm just guessing ;)\n\n - Thomas\n",
"msg_date": "Wed, 14 Nov 2001 16:27:32 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b2 problem using like 'XXX%' sequential scan"
},
{
"msg_contents": "\nI did a fresh initdb after setting LC_* to it_IT\nbut nothing seems to change\n\nbye\n\n\nThomas Lockhart wrote:\n\n>(back on-list)\n>\n>>You got it, I have locale anebled on all platform,\n>>but I see a different locale setting:\n>>\n>>it is working where I have LC_* = it_IT\n>>not working where I have LC_* = it_IT@euro\n>>what I can do ?\n>>I tried to start postgresql 7.2b2 with the working locale but nothing\n>>appened, I must initdb ??\n>>\n>\n>I would try that. If you stay on list, then someone who actually *knows*\n>is likely to answer. I'm just guessing ;)\n>\n> - Thomas\n>\n\n\n-- \n-------------------------------------------------------\nGiuseppe Tanzilli\t\tg.tanzilli@gruppocsf.com\nCSF Sistemi srl\t\t\tphone ++39 0775 7771\nVia del Ciavattino \nAnagni FR\nItaly\n\n\n\n\n",
"msg_date": "Wed, 14 Nov 2001 17:41:25 +0100",
"msg_from": "Giuseppe Tanzilli - CSF <g.tanzilli@gruppocsf.com>",
"msg_from_op": true,
"msg_subject": "Re: 7.2b2 problem using like 'XXX%' sequential scan"
},
{
"msg_contents": ">> it is working where I have LC_* = it_IT\n>> not working where I have LC_* = it_IT@euro\n>> what I can do ?\n>> I tried to start postgresql 7.2b2 with the working locale but nothing\n>> appened, I must initdb ??\n\nYes. The database locale is determined at initdb time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Nov 2001 11:59:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.2b2 problem using like 'XXX%' sequential scan "
}
] |
[
{
"msg_contents": "Hi 2 everybody!\n\nI'm evaluating a database that supports our 3-tier solutions, in addition to\nOracle.\nPostgres is at the moment the candidate n. 1, but I realized it's very hard\nto translate this type of code (Orqcle PL/SQL):\n\nFUNCTION F_EXT(par1, par2 ...) RETURN.... IS\nBEGIN\n SAVEPOINT spF_EXT;\n ...[instructions]\n INSERT INTO...\n UPDATE...\n DELETE...\n ...[instructions]\n\n SAVEPOINT spF_INT;\n ret := F_INT(par1, par2 ...);\n\n IF ret = 'KO' THEN\n ROLLBACK TO spF_EXT;\n ELSE IF ret IS NULL THEN\n ROLLBACK TO spF_INT;\n END IF;\n\n ...[instructions]\n COMMIT;\nEXCEPTION\n WHEN.....\n ...[instructions]\nEND F_EXT;\n\nThe goal to achieve is to trap every kind of execution errors without trying\nto prevent their occurrence.\nIn any case, I usually can't be sure the function (F_EXT) works correcty: an\nexception could be lanched by an internal function (F_INT), I couldn't know\nto estabilish what happens and where ...\nBy the way, the evironment is:\nthe browser requests a PHP page that connect to a Data Source (Oracle) and\nlanch a Stored Procedure; after the execution by Oracle, PHP get the result\nand returns data or a message to the client.\nIt's very important for me to solve the application logic in the Back-End of\nthe system (Oracle or Postgres): integrity and meaning of function aren't\nonly \"commit all\" or \"rollback all\", and when something of wrong happens, I\nneed to know the type of exception and when it occurred. In Oracle I can do\nthis, but in Postgres I've not yet found anything of similar.\nCan anyone help me?\nThanks in advance...\n\nEaglet\n\n\n",
"msg_date": "Wed, 14 Nov 2001 12:51:30 +0100",
"msg_from": "\"Eaglet\" <Aquil8@infinito.it>",
"msg_from_op": true,
"msg_subject": "handling exceptions, really not simple... :-(("
},
{
"msg_contents": "Eaglet,\n\nIn the future, please refrain from cross-posting on multiple PostgreSQL\nlists. We get enough traffic without seeing the same message on 3\nlists.\n\n> The goal to achieve is to trap every kind of execution errors without\n> trying\n> to prevent their occurrence.\n> In any case, I usually can't be sure the function (F_EXT) works\n> correcty: an\n> exception could be lanched by an internal function (F_INT), I\n> couldn't know\n> to estabilish what happens and where ...\n\nUnfortunately, PG/plSQL does not currently support any programmed\nexception handling. If an exception occurs in a pgplsql function, it\nrolls back the entire function, including rolling back any calling\nfunctions on a cascading basis. \n\nThis is partly due, as I understand it, to Postgres' lack of support for\nnested transactions. Hopefully one of the Core Team will speak up with\nthe prognosis on fixing this particular issue. Right now, projects\nneeding sophisticated exception handling are being done in middleware\nlanguages that support it, such as Java and Perl.\n\nSee Oracle <--> Postgres porting guides at\nhttp://techdocs.postgresql.org/\n\n-Josh\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Thu, 15 Nov 2001 08:52:05 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: handling exceptions, really not simple... :-(("
}
] |
[
{
"msg_contents": "\n> > > The correct way would be to check for the existance of int8,\nint16, etc.\n\nJust tried to check your patch, but failed because I don't have a\nrunning autoconf \n:-(\n\nTried manual edit of pg_config:\nIt is important that all lines #undef SIZEOF_INTxx get replaced.\nWouldn't it be better to default those lines to #define SIZEOF_INT8 0\n\nThe check for int64 and uint64 has to be separated, my AIX \nhas: int8, int16, int32, int64\nbut not: uint8, uint16, uint32, uint64\n\nWould you be so kind, as to supply me another patch (maybe including\nconfigure) I can test before beta3 ?\n\nThanks in advance\nAndreas\n2.\n\n\n> >\n> > Good in theory ... but ... are you sure you have included \n> the correct\n> > set of system headers before checking this?\n> \n> I'm sure I haven't, that's why someone is supposed to check this.\n> \n> > (It's not at all clear to me that we know what \"correct\" is in this\n> > context.)\n> \n> If the compiler is complaining that int8 is defined twice we \n> need to check\n> if its already defined once and avoid a second declaration. \n> The problem\n> is setting up an appropriate environment to be relatively \n> sure about the\n> result of \"already defined once\". That's the usual procedure \n> in autoconf\n> programming.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n",
"msg_date": "Wed, 14 Nov 2001 18:20:46 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well "
}
] |
[
{
"msg_contents": "> The check for int64 and uint64 has to be separated, my AIX \n> has: int8, int16, int32, int64\n> but not: uint8, uint16, uint32, uint64\n\nThis would be an incremental patch to Peter's, but as I said I have not\nbeen able to check configure itself. (The rest works, needless to say)\nI am actually very suspicious whether the configure trick \nAC_CHECK_SIZEOF(int8, 0) will work, because they only get defined\nwith _ALL_SOURCE defined and inttypes.h included.\n\nPrevious patch included just in case.\nTatsuo would you be so kind as to check this, that would be great ?\n\nAndreas",
"msg_date": "Wed, 14 Nov 2001 19:00:19 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
},
{
"msg_contents": "> > The check for int64 and uint64 has to be separated, my AIX \n> > has: int8, int16, int32, int64\n> > but not: uint8, uint16, uint32, uint64\n> \n> This would be an incremental patch to Peter's, but as I said I have not\n> been able to check configure itself. (The rest works, needless to say)\n> I am actually very suspicious whether the configure trick \n> AC_CHECK_SIZEOF(int8, 0) will work, because they only get defined\n> with _ALL_SOURCE defined and inttypes.h included.\n> \n> Previous patch included just in case.\n> Tatsuo would you be so kind as to check this, that would be great ?\n\nPeter's patches could not be applied to the current and I cannot test\nyour patches too.\n\n[t-ishii@srapc1474 pgsql]$ patch -b -p2 < ~/int8-patch\nmissing header for context diff at line 3 of patch\npatching file configure.in\npatching file src/include/pg_config.h.in\npatching file src/include/c.h\nHunk #1 FAILED at 204.\nHunk #2 FAILED at 216.\nHunk #3 FAILED at 266.\n3 out of 3 hunks FAILED -- saving rejects to file src/include/c.h.rej\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 15 Nov 2001 10:22:00 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
}
] |
[
{
"msg_contents": "> > * The last message translations should be in before RC1. Stuff that\n> > doesn't compile at that time will be disabled. I suggest that from now on\n> > no more gratuitous \"word smithing\" in the C code, for the benefit of\n> > translators -- most of our messages stink anyway, and they ain't getting\n> > better with two more spaces in them. ;-)\n> \n> Agreed.\n\nOr should I have said \"a g r e e d.\" :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 14 Nov 2001 13:07:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
}
] |
[
{
"msg_contents": "Hi.\n\nLet me ask you somthing about the magic of palloc()\n- at least for me, it looks like magic, and variable-length user-defined\ndata types.\nHopefully, there is anyone who can help me... I am really struggling to\ngrasp PostgreSQL.\n\nAccording to chapter 4 of the Programmer's Guide, user-defined data types\ncan\nhave one of these internal formats.\n\n- pass by value, fixed-length\n- pass by reference, fixed-length\n- pass by reference, variable-length\n\nI am trying to define an user-defined data type corresponding to the third\ncase,\nbut let me use the example in the manual. In the manual, I found this\nexample:\n\ntypedef struct {\n int4 length;\n char data[1];\n} text;\n\n\n1) then, my first question is about the example for coding as follows:\n\nchar buffer[40];\n....\ntext *destination = (text *) palloc(VARHDRSZ + 40);\n....\n\nI cannot understand this mechanism... it looks like magic to me..\nIf it is like the following, I can see it:\n\ntext *destination = (text *) palloc(sizeof(text));\ndestination->data = (char *) palloc(40);\n\nIn this case, there still remains a question, how can I allocate a bunch of\nmemory\nto \"char data[1]\" - it is an array data type, not pointer...\n\nI tried to see the source code, and found it just call MemoryContextAlloc(),\nbut\nMemoryContextAlloc() just has an empty function body.\n\nIs there anyone who can tell me the magic of the palloc()?\nAnd which header file do I need to include for palloc()?\n\n2) And my sencond question is how to create such variable-lenth data type.\n\nIn Chapter 5 of the Programmer's Guide, the example for a fixed-length\nuser-defined\ndata types, 'Complex', is:\n\n....\n(let's assume input and output function for a new user-defined data type\nhave been\ncreated.)\n\nCREATE TYPE complex (\n internallength = 16,\n input = complex_in,\n output = complex_out\n);\n\nBut, I have no idea what I need to set to 'internallength' for\nvariable-length data types.\n\n\n3) My last 
curiosity is about linking problem.\n\nIf I want to make a stand-alone program which call internal functions,\nespecially palloc(),\nto which library I need to link my program?\nI started this attempt to find an answer for my first question, but now I am\nvery\ncurious about it, becuase I realize that I cannot use the client libraries\nin ..../pgsql/lib\nsuch as 'libpgeasy' and 'libpg', instead I suppose I need to link my\nstand-alone program\nto server libraries. But the problem is server libraries are shared\nlibraries and I have\nno idea about the mechanism.\nWhich shared library has the reference to MemoryContextAlloc() which is\ncalled by palloc()?\nAnd is there anything on which I need to take care to link my stand-alone\nprogram to such\nshared library? Is it perhaps impossible?\n\nAnyway, thank you for reading my long e-mail.\nCheers.\n\n From someone who is trying to love PostgreSQL.\n\n\n\n",
"msg_date": "Wed, 14 Nov 2001 20:22:14 -0000",
"msg_from": "\"Seung Hyun Jeong\" <jeongs@cs.man.ac.uk>",
"msg_from_op": true,
"msg_subject": "about the magic(?) of palloc() and variable-length user-defined data\n\ttype"
},
{
"msg_contents": "On 14 Nov 2001 at 20:22 (-0000), Seung Hyun Jeong wrote:\n| \n| Let me ask you somthing about the magic of palloc()\n\nThere is no magic; only things I am unable to explain :-)\n\n| - at least for me, it looks like magic, and variable-length user-defined\n| data types.\n| Hopefully, there is anyone who can help me... I am really struggling to\n| grasp PostgreSQL.\n\n Great questions. I wish I could answer them... Take a look in\nthe contrib/ directory for examples of various user defined types.\ncontrib/cube and contrib/lo in particular deal (in different ways) \nwith user types of varying size.\n\n Hopefully others more knowledgeable will address your questions \ndirectly. I'm certainly interested in the answers to many of them ;-)\n\nhth.\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Wed, 14 Nov 2001 17:49:45 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: about the magic(?) of palloc() and variable-length user-defined\n\tdata type"
}
] |
[
{
"msg_contents": "Hi.\n\nLet me ask you somthing about the magic of palloc()\n- at least for me, it looks like magic, and variable-length user-defined\ndata types.\nHopefully, there is anyone who can help me... I am really struggling to\ngrasp PostgreSQL.\n\nAccording to chapter 4 of the Programmer's Guide, user-defined data types\ncan\nhave one of these internal formats.\n\n- pass by value, fixed-length\n- pass by reference, fixed-length\n- pass by reference, variable-length\n\nI am trying to define an user-defined data type corresponding to the third\ncase,\nbut let me use the example in the manual.\nIn the manual, I found this example:\n\ntypedef struct {\n int4 length;\n char data[1];\n} text;\n\n\n1) then, my first question is about the example for coding as follows:\n\nchar buffer[40];\n....\ntext *destination = (text *) palloc(VARHDRSZ + 40);\n....\n\nI cannot understand this mechanism... it looks like magic to me..\nIf it is like the following, I can see it:\n\ntext *destination = (text *) palloc(sizeof(text));\ndestination->data = (char *) palloc(40);\n\nIn this case, there still remains a question, how can I allocate a bunch of\nmemory\nto \"char data[1]\" - it is an array data type, not pointer...\n\nI tried to see the source code, and found it just call MemoryContextAlloc(),\nbut\nMemoryContextAlloc() just has an empty function body.\n\nIs there anyone who can tell me the magic of the palloc()?\nAnd which header file do I need to include for palloc()?\n\n2) And my sencond question is how to create such variable-lenth data type.\n\nIn Chapter 5 of the Programmer's Guide, the example for a fixed-length\nuser-defined\ndata types, 'Complex', is:\n\n....\n(let's assume input and output function for a new user-defined data type\nhave been\ncreated.)\n\nCREATE TYPE complex (\n internallength = 16,\n input = complex_in,\n output = complex_out\n);\n\nBut, I have no idea what I need to set to 'internallength' for\nvariable-length data types.\n\n\n3) My last 
curiosity is about linking problem.\n\nIf I want to make a stand-alone program which call internal functions,\nespecially palloc(),\nto which library I need to link my program?\nI started this attempt to find an answer for my first question, but now I am\nvery\ncurious about it, becuase I realize that I cannot use the client libraries\nin ..../pgsql/lib\nsuch as libpgeasy and libpg, instead I suppose I need to link my stand-alone\nprogram\nto server libraries. But the problem is server libraries are shared\nlibraries and I have\nno idea about the mechanism.\nWhich shared library has the reference to MemoryContextAlloc()?\nAnd is there anything on which I need to take care to link my stand-alone\nprogram to such\nshared library? Is it perhaps impossible?\n\nAnyway, thank you for reading my long e-mail.\nCheers.\n\n From someone who is trying to love PostgreSQL.\n\n\n\n",
"msg_date": "Wed, 14 Nov 2001 20:23:57 -0000",
"msg_from": "\"Seung Hyun Jeong\" <jeongs@cs.man.ac.uk>",
"msg_from_op": true,
"msg_subject": "about the magic(?) of palloc() and variable-length user-defined data\n\ttype"
},
{
"msg_contents": "\"Seung Hyun Jeong\" <jeongs@cs.man.ac.uk> writes:\n> In the manual, I found this example:\n> typedef struct {\n> int4 length;\n> char data[1];\n> } text;\n\n> 1) then, my first question is about the example for coding as follows:\n> char buffer[40];\n> ....\n> text *destination = (text *) palloc(VARHDRSZ + 40);\n> ....\n\n> I cannot understand this mechanism... it looks like magic to me.\n\nNo, it's simply relying on the fact that C doesn't check array\nsubscripts. Given the stated declaration for struct text, we can\naccess data[0], or we can (try to) access data[1], data[2], etc etc.\nThese latter array elements are off the end of the declared structure;\nbut if we've allocated sufficient memory, it'll work fine.\n\nThis is a very standard C programming trick to get around the language's\nlack of explicit variable-sized arrays. If you haven't seen it before,\nyou may need to spend more time with an introductory C textbook before\nyou start trying to make sense of the Postgres internals...\n\n> I tried to see the source code, and found it just call MemoryContextAlloc(),\n> but MemoryContextAlloc() just has an empty function body.\n\nNot hardly. Where are you looking? MemoryContextAlloc is in\nsrc/backend/utils/mmgr/mcxt.c, and the function pointer it invokes\ngenerally points at AllocSetAlloc in src/backend/utils/mmgr/aset.c.\n\n> But, I have no idea what I need to set to 'internallength' for\n> variable-length data types.\n\nYou say \"variable\".\n\n> If I want to make a stand-alone program which call internal functions,\n> especially palloc(),\n> to which library I need to link my program?\n\nYou don't. There is no library built for the backend, only the server\nexecutable. It's pretty unclear to me what a standalone program would\nwant with these functions anyway ... they are of no value except to\nsomething that plans to run inside a server backend process.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Nov 2001 19:26:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: about the magic(?) of palloc() and variable-length user-defined\n\tdata type"
}
] |
[
{
"msg_contents": "Hi,\n\ni have a problem that when I run the initdb script that after a while the initialisation stops.\n\nI�ve enabled some debug output, I hope that there is someone who can help in this issue:\n...\nDEBUG: heap_getnext returning tuple\nDEBUG: heap_getnext([pg_type,nkeys=0],backw=0) called\nDEBUG: heapgettup(pg_type, tid=0xca9c6e16[65536,26], dir=1, ...)\nDEBUG: heapgettup(..., b=0xca9c6e42, nkeys=0, key=0x0\nDEBUG: heapgettup: relation(u)=`pg_type', 00000001\nDEBUG: heap_getnext returning tuple\nDEBUG: heap_getnext([pg_type,nkeys=0],backw=0) called\nDEBUG: heapgettup(pg_type, tid=0xca9c6e16[65536,27], dir=1, ...)\nDEBUG: heapgettup(..., b=0xca9c6e42, nkeys=0, key=0x0\nDEBUG: heapgettup: relation(u)=`pg_type', 00000001\nDEBUG: heap_getnext returning EOS\nDEBUG: heap_getnext([pg_proc,nkeys=1],backw=0) called\nDEBUG: heapgettup(pg_proc, tid=0x0, dir=1, ...)\nDEBUG: heapgettup(..., b=0xca9c6f22, nkeys=1, key=0xca9c6f64\nDEBUG: heapgettup: relation(u)=`pg_proc', 00000001\nDEBUG: heap_getnext returning EOS\nERROR: fmgr_info: function 117833728: cache lookup failed\n\nThe line in the template1.bki file that causes this problem is the last one of this snippet:\n\n..\ncreate bootstrap pg_attribute\n (\n attrelid = oid ,\n attname = name ,\n atttypid = oid ,\n attdispersion = float4 ,\n attlen = int2 ,\n attnum = int2 ,\n attnelems = int4 ,\n attcacheoff = int4 ,\n atttypmod = int4 ,\n attbyval = bool ,\n attstorage = char ,\n attisset = bool ,\n attalign = char ,\n attnotnull = bool ,\n atthasdef = bool\n )\ninsert OID = 0 ( 1247 typname 19 0 32 1 0 -1 -1 f p f i f f)\n\nthank you\n\nUlrich Neumann\n\n",
"msg_date": "Thu, 15 Nov 2001 00:50:00 +0200",
"msg_from": "Ulrich Neumann<u_neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Problem with 7.1.3 and template1.bki during first init"
},
{
"msg_contents": "Ulrich Neumann<u_neumann@gne.de> writes:\n> ERROR: fmgr_info: function 117833728: cache lookup failed\n\nThis is evidently an attempt to look up a garbage function OID (none\nof the real function OIDs that might be encountered during initdb are\nmore than a couple thousand).\n\nFor this to happen during initdb implies that there's something pretty\nbroken about the server executable you're using. What platform are you\non, what compiler are you using, what configure options did you select,\netc etc?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Nov 2001 19:34:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem with 7.1.3 and template1.bki during first init "
}
] |
[
{
"msg_contents": "> Please let me introduce myself. I'm Rada Hussein from Egypt and I'm\n> doing my Ph.D. in Medical Informatics in Germany. We are using Postgres\n> in our database applications. I'm working specifically in Arabization.\n> So, I'd be grateful if you could tell me if Postgres supports the Arabic\n> encoding (8859-6)or not?. If not, how can I add it? or do you recommend\n> to use Unicode instead? Or it will take huge data storage?\n\nPostgreSQL 7.2 (in beta phase now) will support ISO 8859-6.\nYou could get it from, for example,\nftp://ftp2.us.postgresql.org/pub/beta.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 15 Nov 2001 10:30:47 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: postgres and 8859-6 encoding"
}
] |
[
{
"msg_contents": "\n\nTom Lane wrote:\n\n>bpalmer <bpalmer@crimelabs.net> writes:\n>\n>>Is there a way to ask pgsql how much memory, sort memory, etc is being\n>>used by pgsql at a given time (like in a system table or something)?\n>>\n>\n>Um ... have you considered top, sar, or the other sysadmin tools your\n>platform may provide?\n>\ntop knows nothing about sort memory or internal cache/buffer sizes.\n\n------------\nHannu\n\n\n",
"msg_date": "Thu, 15 Nov 2001 08:59:25 +0500",
"msg_from": "Hannu Krosing <hannu@sid.tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory use?"
},
{
"msg_contents": "Is there a way to ask pgsql how much memory, sort memory, etc is being\nused by pgsql at a given time (like in a system table or something)?\n\nThanks,\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Thu, 15 Nov 2001 00:15:55 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Shared memory use?"
},
{
"msg_contents": "bpalmer <bpalmer@crimelabs.net> writes:\n> Is there a way to ask pgsql how much memory, sort memory, etc is being\n> used by pgsql at a given time (like in a system table or something)?\n\nUm ... have you considered top, sar, or the other sysadmin tools your\nplatform may provide?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Nov 2001 00:46:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory use? "
}
] |
[
{
"msg_contents": "Are we gathering info about platforms those successfully run 7.2?\n\nAnyway, here is a report from Hisao SHIBUYA <shibuya@alpha.or.jp>.\n\nLinux MIPS (Cobalt Qube2)\n\nkernel-2.0.34C53_SK-2\nglibc-2.0.7-29.4C1\ngcc-2.7.2-C2\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 15 Nov 2001 13:11:16 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.2b2 on Qube2"
},
{
"msg_contents": "> Are we gathering info about platforms those successfully run 7.2?\n\nYes, I'll put out a call to make it official.\n\n> Anyway, here is a report from Hisao SHIBUYA <shibuya@alpha.or.jp>.\n\nThanks. I assume that this is a successful report? ;)\n\n - Thomas\n",
"msg_date": "Thu, 15 Nov 2001 05:10:10 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 7.2b2 on Qube2"
},
{
"msg_contents": "> > Are we gathering info about platforms those successfully run 7.2?\n> \n> Yes, I'll put out a call to make it official.\n\nThanks.\n\n> > Anyway, here is a report from Hisao SHIBUYA <shibuya@alpha.or.jp>.\n> \n> Thanks. I assume that this is a successful report? ;)\n\nOh, of course:-)\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 15 Nov 2001 14:12:18 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 7.2b2 on Qube2"
},
{
"msg_contents": "On Thu, 15 Nov 2001, Thomas Lockhart wrote:\n\n> > Are we gathering info about platforms those successfully run 7.2?\n>\n> Yes, I'll put out a call to make it official.\n>\n> > Anyway, here is a report from Hisao SHIBUYA <shibuya@alpha.or.jp>.\n>\n> Thanks. I assume that this is a successful report? ;)\n\nDon't forget about the regression test database. There's a link on\nthe Developer's site, and everything should be working now.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 15 Nov 2001 05:42:59 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 7.2b2 on Qube2"
}
] |
[
{
"msg_contents": "It is time to try to get complete test coverage on 7.2 for our supported\nplatforms. We especially welcome new platforms and refreshed info for\nolder platforms which may not have had a recent report.\n\nIt would be helpful for folks to reiterate the current status on their\nplatform to make sure that we get it right (e.g. is AIX working or not?\nWhich versions??)\n\nI'll try checking Vince's web site, but email reports are welcome also.\nThanks!\n\n - Thomas\n\nI'll post a list of platforms in a few days, after the first reports\ncome in.\n",
"msg_date": "Thu, 15 Nov 2001 05:27:22 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Call for testing"
},
{
"msg_contents": "\n======================\n All 79 tests passed.\n======================\n\nFreeBSD 4.4-STABLE #1: Tue Oct 30 11:54:19 AST 2001\n\n\nOn Thu, 15 Nov 2001, Thomas Lockhart wrote:\n\n> It is time to try to get complete test coverage on 7.2 for our supported\n> platforms. We especially welcome new platforms and refreshed info for\n> older platforms which may not have had a recent report.\n>\n> It would be helpful for folks to reiterate the current status on their\n> platform to make sure that we get it right (e.g. is AIX working or not?\n> Which versions??)\n>\n> I'll try checking Vince's web site, but email reports are welcome also.\n> Thanks!\n>\n> - Thomas\n>\n> I'll post a list of platforms in a few days, after the first reports\n> come in.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Thu, 15 Nov 2001 10:53:07 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Call for testing"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> It would be helpful for folks to reiterate the current status on their\n> platform to make sure that we get it right (e.g. is AIX working or not?\n> Which versions??)\n\nI've filed success reports in Vince's database for CVS tip on\n\tHPUX 10.20 on PA-RISC 2.0\n\tLinux/PPC 2.2.x on PPC G3\n\tMac OS X 10.1 on PPC G3\n\tDebian Linux 2.2.x on Alpha\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 16:20:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Call for testing "
},
{
"msg_contents": "> > It would be helpful for folks to reiterate the current status on their\n> > platform to make sure that we get it right (e.g. is AIX working or not?\n> > Which versions??)\n> >\n> > I'll try checking Vince's web site, but email reports are welcome also.\n> > Thanks!\n\nIt would be REALLY nice to either update the \"regression test database\" so\nit just has the new stuff, or point to Vince's website (if it's something\ndifferent) (or to link off the dev page for that matter).\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Sun, 18 Nov 2001 10:17:31 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: Call for testing"
}
] |
[
{
"msg_contents": "I have noticed a strange error message when adding check constraints.\nBasically the ADD CHECK below is failing because the field it is trying to\nforce to be NOT NULL already has NULL values in it. However, the error\nmessage it produces is quite cryptic and it took me a while to figure out\nwhat was going on.\n\nIs it still like this in 7.2b2, and should it be changed?\n\nChris\n\ntest=# select version();\n version\n--------------------------------------------------------------\n PostgreSQL 7.1.3 on i386--freebsd4.2, compiled by GCC 2.95.2\n(1 row)\ntest=# create table test (foo char(1) check (foo in ('M', 'V')));\nCREATE\ntest=# insert into test values('M');\nINSERT 2823326 1\ntest=# alter table test add column bar varchar(255);\nALTER\ntest=# alter table test add check (bar is not null);\nERROR: AlterTableAddConstraint: rejected due to CHECK constraint <unnamed>\n\n",
"msg_date": "Thu, 15 Nov 2001 15:15:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Strange ADD CHECK error message"
},
{
"msg_contents": "On Thu, 15 Nov 2001, Christopher Kings-Lynne wrote:\n\n> I have noticed a strange error message when adding check constraints.\n> Basically the ADD CHECK below is failing because the field it is trying to\n> force to be NOT NULL already has NULL values in it. However, the error\n> message it produces is quite cryptic and it took me a while to figure out\n> what was going on.\n>\n> Is it still like this in 7.2b2, and should it be changed?\n\nPretty sure and possibly. In general, the message is currently\nbased on the same template as the error you get if you were to\nfail it later. I don't have a good phrase to replace it with,\nbut a more descriptive message would probably be good.\n\n\n",
"msg_date": "Wed, 14 Nov 2001 23:44:10 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Strange ADD CHECK error message"
}
] |
[
{
"msg_contents": "I have a problem building the beta2 (tarball, not current CVS) on\nalphaev67-dec-osf4.0f, compiled by cc -std. All the 7.1.X tree compiled\nfine, and I didn't have a chance before to build 7.2.\n\nWhile building backend/utils/fmgr/dfmgr.c\n\nmake[4]: Entering directory\n`/usr/local/src/postgresql-7.2b2/src/backend/utils/fmgr'\ncc -std -O4 -Olimit 2000 -I../../../../src/include \n-DPKGLIBDIR=\\\"/tmp/pg72b2/usr/local/pgsql/lib\\\" -DDLSUFFIX=\\\".so\\\" -c\n-o dfmgr.o dfmgr.c\ncc: Error: dfmgr.c, line 123: In this statement, \"RTLD_GLOBAL\" is not\ndeclared.\n(undeclared)\n file_scanner->handle = pg_dlopen(fullname);\n---------------------------------------^\nmake[4]: *** [dfmgr.o] Error 1\n\npg_dlopen is defined in src/include/dynloader.h\n\n#define pg_dlopen(f) dlopen((f), RTLD_LAZY | RTLD_GLOBAL)\n\nThis is what my Digital Unix sysadm told me:\n\nRTLD_GLOBAL is not a valid definition under 4.0x - possibly under 5.x\nbut I haven't got a machine handy to check it on. The only two\nallowed values are:\n\n If mode is RTLD_LAZY, then the run-time loader does symbol resolution\nonly\n as needed. Typically, this means that the first call to a function in\nthe\n newly loaded library will cause the resolution of the address of that\nfunc-\n tion to occur. If mode is RTLD_NOW, then the run-time loader must do\nall\n symbol binding during the dlopen call. The dlopen function returns a\nhan-\n dle that is used by dlsym or dlclose call. If an error occurs, a NULL\n pointer is returned.\n\nI suspect that RTLD_GLOBAL might be something new... actually it\nappears to be a Linuxism or glibcism. From the Linux dlopen man page:\n\n flag must be either RTLD_LAZY, meaning resolve undefined\n symbols as code from the dynamic library is executed, or\n RTLD_NOW, meaning resolve all undefined symbols before\n dlopen returns, and fail if this cannot be done. Option-\n ally, RTLD_GLOBAL may be or'ed with flag, in which case\n the external symbols defined in the library will be made\n available to subsequently loaded libraries.\n\nthe last few lines being the relevant ones. I suspect it is something\nto be sent off to the mailing list. The flag appears to be available\nalso under Solaris 2.8 (where they also have RTLD_LOCAL for the\nopposite effect).\n\nHope this helps, do you prefer I file it as a bug?\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Thu, 15 Nov 2001 10:11:31 +0200",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": true,
"msg_subject": "Problem compiling beta2 on Digital Unix (cc)"
},
{
"msg_contents": "Alessio Bragadini writes:\n\n> cc: Error: dfmgr.c, line 123: In this statement, \"RTLD_GLOBAL\" is not\n> declared.\n> (undeclared)\n\nI'm going to apply the attached patch to appear in beta3.\n\nMost systems that use the dlopen()-style interface support RTLD_GLOBAL.\nSome older ones don't, but they should act like it by default.\n\n-- \nPeter Eisentraut peter_e@gmx.net",
"msg_date": "Thu, 15 Nov 2001 17:14:35 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem compiling beta2 on Digital Unix (cc)"
}
] |
[
{
"msg_contents": "New patch for open item: AIX compile (Peter E, Zeugswetter)\n(applies to today's snapshot)\n\nI now have a working autoconf, and was thus able to confirm, that \nPeter's SIZEOF_INT8 check works correctly on AIX.\n\nPlease apply this patch before beta3, and please someone check BEOS\nwhich is also affected.\n\nThank you Peter\nAndreas",
"msg_date": "Thu, 15 Nov 2001 09:33:10 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n\n /* Plain \"long int\" fits, use it */\n+ #if SIZEOF_INT8 == 0\n typedef long int int64;\n+ #endif\n+ #if SIZEOF_UINT8 == 0\n typedef unsigned long int uint64;\n+ #endif\n \n\nThis coding appears to assume \"if the platform defines int8, then\nit will define int64 as well\". Seems mighty fragile to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Nov 2001 11:01:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well) "
},
{
"msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> New patch for open item: AIX compile (Peter E, Zeugswetter)\n> (applies to today's snapshot)\n>\n> I now have a working autoconf, and was thus able to confirm, that\n> Peter's SIZEOF_INT8 check works correctly on AIX.\n\nI'm confused a bit: In the previous message you added a check for uint64,\nin this version you removed the u?int64 checks completely. I suppose it\ndoesn't matter, but is there a reason?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 15 Nov 2001 17:14:19 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
},
{
"msg_contents": "> New patch for open item: AIX compile (Peter E, Zeugswetter)\n> (applies to today's snapshot)\n> \n> I now have a working autoconf, and was thus able to confirm, that \n> Peter's SIZEOF_INT8 check works correctly on AIX.\n> \n> Please apply this patch before beta3, and please someone check BEOS\n> which is also affected.\n\nThe only problem I have now is that odbc/md5.h needs those unsigned\ndefines and it can't probe the results of queries by configure. odbc\nallows for stand-alone compile.\n\n\t#if SIZEOF_UINT8 == 0\n\nRight now it is testing for __BEOS__, which I believe is something set\nby the compiler and not by configure. \n\nMy idea is to just unconditionally define the unsigned's in odbc. It\nwill fail on a few platforms but I don't see another solution.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Nov 2001 11:14:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> The only problem I have now is that odbc/md5.h needs those unsigned\n> defines and it can't probe the results of queries by configure. odbc\n> allows for stand-alone compile.\n\nODBC uses all kinds of other configure results, so it can use this one as\nwell. You only need to make sure you hard-code the test result for\nWindows somewhere.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 16 Nov 2001 14:59:40 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
}
] |
[
{
"msg_contents": "\nThe current snapshot additionally contains a full subdir for 7.2b1\nI don't think this is intended.\n(got snapshot from http://www.ca.postgresql.org/ftpsite/dev)\n\nAndreas \n",
"msg_date": "Thu, 15 Nov 2001 09:58:40 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "snapshot additionally contains 7.2b1"
}
] |
[
{
"msg_contents": "In article <m21yj0d6ju.fsf@kuiper.rlent.pnet>,\nRoland Roberts <roland@rlenter.com> wrote:\n>>>>>> \"Neil\" == Neil Madden <nem00u@cs.nott.ac.uk> writes:\n>\n> Neil> If it is a widely available database system, there is a high\n> Neil> likely-hood that there exists a Tcl extension out there\n> Neil> which will do most if not all of what you want. I don't do\n> Neil> much database work, so I can't comment on any particularly\n> Neil> useful extensions.\n>\n>Actually, it is PostgreSQL and there is a pgtclsh provided. So my\n>exercise is, well, an exercise. While I can read the man pages on\n>each of the calls used, I can't quite put together the rationale for\n>*why* things are done the way they are.\n>\n>libpgtcl.so doesn't use the Tcl object calls, but that's a pretty\n>minor thing to change. It also is not really set up the way I would\n>have preferred which is to make is usable via \"package require\", but\n>that's a nit...\n\t\t\t.\n\t\t\t.\n\t\t\t.\nI'll say a few words about Postgres (PG).\n\nWhile Great Bridge's demise might lead some to expect\nstagnation for PG, it's far healthier than that. Look,\nfor example, at Bruce Momjian's new book. In any case,\nthere's a lot of life left in PG.\n\nPG has a history of friendship and co-operation with\nTcl. Other languages have received more attention\nlately. This is a good time, though, I think, for\nthose with an interest in pgtclsh to work up patches\nto implement the (Tcl) object interface, stubs, and\nproper [packag]ing.\n\nWhile we're on the subject, I'd sure appreciate it if\nsomeone at www.postgresql.org would fix up the hyper-\nlink anchors with meaningful \"ALT =\"-s so image-free\nbrowsers can make better use of the site.\n-- \n\nCameron Laird <Cameron@Lairds.com>\nBusiness: http://www.Phaseit.net\nPersonal: http://starbase.neosoft.com/~claird/home.html\n",
"msg_date": "Thu, 15 Nov 2001 08:21:20 -0600 (CST)",
"msg_from": "Cameron Laird <claird@starbase.neosoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Tcl C API documentation?"
}
] |
[
{
"msg_contents": "> > > 3 out of 3 hunks FAILED -- saving rejects to file\nsrc/include/c.h.rej\n> >\n> > My guess is that pgindent has modified c.h. Tatsuo, if you send me\nthe\n> > patch, I will manually apply it to current CVS and send you a new\n> > version of the patch for testing. \n> \n> OK, Tatsuo, here is an updated version for testing. It initdb and\n> regression tests fine on my BSD/OS machine.\n\nBruce, you take me off the cc: and expect me to test (i am not on\npatches).\nYour Patch does not work here. \n\nPlease test the one I sent in previously. The problem with yours is, \nthat AIX (at least 4.3.3 and below) does not typedef the unsigned types.\nBut your patch assumes, that checking int64 is sufficient for uint64.\n\nBEOS has both signed and unsigned, thus my patch should satisfy both.\n\nThanks\nAndreas\n",
"msg_date": "Thu, 15 Nov 2001 16:44:07 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
},
{
"msg_contents": "> > OK, Tatsuo, here is an updated version for testing. It initdb and\n> > regression tests fine on my BSD/OS machine.\n> \n> Bruce, you take me off the cc: and expect me to test (i am not on\n> patches).\n\nOh. I accidentally deleted the email thread so I had to email a new\nmessage. I couldn't figure out how to see the CC line in\nfts.postgresql.org archives.\n\n> Your Patch does not work here. \n> \n> Please test the one I sent in previously. The problem with yours is, \n> that AIX (at least 4.3.3 and below) does not typedef the unsigned types.\n> But your patch assumes, that checking int64 is sufficient for uint64.\n> \n> BEOS has both signed and unsigned, thus my patch should satisfy both.\n\nOK, we package beta3 tomorrow. I am going to apply the patch now so\npeople can test it more easily and maybe we will see any problem reports\nbefore tomorrow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Nov 2001 11:07:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
}
] |
[
{
"msg_contents": "\n> \t#if SIZEOF_UINT8 == 0\n> \n> Right now it is testing for __BEOS__, which I believe is something set\n> by the compiler and not by configure. \n> \n> My idea is to just unconditionally define the unsigned's in odbc. It\n> will fail on a few platforms but I don't see another solution.\n\nWell, since AIX does not have the unsigned's, I think leaving odbc's\nmd5.h as it was with the #ifndef __BEOS__ in place is OK.\n\nAndreas\n",
"msg_date": "Thu, 15 Nov 2001 17:22:48 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
},
{
"msg_contents": "> \n> > \t#if SIZEOF_UINT8 == 0\n> > \n> > Right now it is testing for __BEOS__, which I believe is something set\n> > by the compiler and not by configure. \n> > \n> > My idea is to just unconditionally define the unsigned's in odbc. It\n> > will fail on a few platforms but I don't see another solution.\n> \n> Well, since AIX does not have the unsigned's, I think leaving odbc's\n> md5.h as it was with the #ifndef __BEOS__ in place is OK.\n\nReally? Seems some AIX must have the unsigneds or we wouldn't be\nneeding a new patch, right? Am I missing something?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Nov 2001 11:26:13 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
}
] |
[
{
"msg_contents": "> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> \n> /* Plain \"long int\" fits, use it */\n> + #if SIZEOF_INT8 == 0\n> typedef long int int64;\n> + #endif\n> + #if SIZEOF_UINT8 == 0\n> typedef unsigned long int uint64;\n> + #endif\n> \n> \n> This coding appears to assume \"if the platform defines int8, then\n> it will define int64 as well\". Seems mighty fragile to me.\n\nWell the absolute correct solution would involve all of:\nint8, int16, int32, int64 and separately uint8, uint16, uint32, uint64\n\nThe previous patch grouped:\nint8, int16 and int32\nuint8, uint16 and uint32\nint64 and uint64 <-- this grouping is wrong on AIX 4.3.3 and below\n\nIf you prefer to make 4 groups out of this you could apply this patch.\n\nAndreas",
"msg_date": "Thu, 15 Nov 2001 17:25:20 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well) "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Well the absolute correct solution would involve all of:\n> int8, int16, int32, int64 and separately uint8, uint16, uint32, uint64\n\nI agree that that's probably overkill. I'm prepared to assume that\nanything defining int8 defines int16 and int32 as well --- but int64\nis just new enough that I don't want to make that extrapolation.\n\n> The previous patch grouped:\n> int8, int16 and int32\n> uint8, uint16 and uint32\n> int64 and uint64 <-- this grouping is wrong on AIX 4.3.3 and below\n\nOkay, int64 and uint64 must be tested for separately then.\n\n> If you prefer to make 4 groups out of this you could apply this patch.\n\nThis form of the patch looks reasonable to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Nov 2001 11:30:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well) "
},
{
"msg_contents": "\nOK, I backed out your previous patch and applied this one.\n\nThanks.\n\n---------------------------------------------------------------------------\n\n> \n> > \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > \n> > /* Plain \"long int\" fits, use it */\n> > + #if SIZEOF_INT8 == 0\n> > typedef long int int64;\n> > + #endif\n> > + #if SIZEOF_UINT8 == 0\n> > typedef unsigned long int uint64;\n> > + #endif\n> > \n> > \n> > This coding appears to assume \"if the platform defines int8, then\n> > it will define int64 as well\". Seems mighty fragile to me.\n> \n> Well the absolute correct solution would involve all of:\n> int8, int16, int32, int64 and separately uint8, uint16, uint32, uint64\n> \n> The previous patch grouped:\n> int8, int16 and int32\n> uint8, uint16 and uint32\n> int64 and uint64 <-- this grouping is wrong on AIX 4.3.3 and below\n> \n> If you prefer to make 4 groups out of this you could apply this patch.\n> \n> Andreas\n\nContent-Description: int8-newpatch2\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Nov 2001 11:35:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
},
{
"msg_contents": "I wanted to test the new patches on AIX 5L but the cvs server seem to\nreject me (it worked till yesterday). Sigh.\n--\nTatsuo Ishii\n\n> OK, I backed out your previous patch and applied this one.\n> \n> Thanks.\n> \n> ---------------------------------------------------------------------------\n> \n> > \n> > > \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > > \n> > > /* Plain \"long int\" fits, use it */\n> > > + #if SIZEOF_INT8 == 0\n> > > typedef long int int64;\n> > > + #endif\n> > > + #if SIZEOF_UINT8 == 0\n> > > typedef unsigned long int uint64;\n> > > + #endif\n> > > \n> > > \n> > > This coding appears to assume \"if the platform defines int8, then\n> > > it will define int64 as well\". Seems mighty fragile to me.\n> > \n> > Well the absolute correct solution would involve all of:\n> > int8, int16, int32, int64 and separately uint8, uint16, uint32, uint64\n> > \n> > The previous patch grouped:\n> > int8, int16 and int32\n> > uint8, uint16 and uint32\n> > int64 and uint64 <-- this grouping is wrong on AIX 4.3.3 and below\n> > \n> > If you prefer to make 4 groups out of this you could apply this patch.\n> > \n> > Andreas\n> \n> Content-Description: int8-newpatch2\n> \n> [ Attachment, skipping... ]\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n",
"msg_date": "Fri, 16 Nov 2001 13:37:51 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
},
{
"msg_contents": "> I wanted to test the new patches on AIX 5L but the cvs server seem to\n> reject me (it worked till yesterday). Sigh.\n\nYes. This happened to me this morning. I fixed it by changing my\n.cvspass to use postgresql.org instead of cvs.postgresql.org and used\nCVS to attach to that.\n\nMarc, any idea why I needed to do that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 15 Nov 2001 23:50:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
}
] |
[
{
"msg_contents": "\n> > New patch for open item: AIX compile (Peter E, Zeugswetter)\n> > (applies to today's snapshot)\n> >\n> > I now have a working autoconf, and was thus able to confirm, that\n> > Peter's SIZEOF_INT8 check works correctly on AIX.\n> \n> I'm confused a bit: In the previous message you added a check for\nuint64,\n> in this version you removed the u?int64 checks completely. I suppose\nit\n> doesn't matter, but is there a reason?\n\nAIX > 4.? and <= 4.3.3 has all four signed typedefs but no unsigned\nones.\nBEOS has all eight typedefs.\nSo I thought that grouping signed and unsigned was ok. \nTom complained, and so I sent in another patch just now that separates\nthe 64's.\n\nAndreas\n",
"msg_date": "Thu, 15 Nov 2001 17:29:50 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Open Items (was: RE: [HACKERS] Beta going well)"
}
] |
[
{
"msg_contents": "Hi!!\nWe are developing a project at the Universidad Nacional del Centro, in\nArgentina. Sergio Pili, who has communicated with you previously, is\nworking with us. We are interested in the feature he is implementing:\nrule activation and deactivation.\n\nWith respect to the safeness of this deactivation, we can say that:\n\n- It can be executed just only from the action of the rule.\n- The deactivated rule continues deactivated while the rewriting of the\nquery which executed that deactivation is done. This means that the\ndeactivation does not affect other queries. Moreover, the rule is\nautomatically reactivated when the rewrite process is finished.\n- This feature avoids recursive activation.\n\nExample:\n\nCREATE TABLE A (aa int primary key, a int, b int);\nCREATE TABLE B (bb int primary key,a int, b int);\n\nCREATE RULE upd_b AS ON UPDATE TO B\nWHERE\n NOT EXISTS (SELECT *\n FROM A\n WHERE A.a = NEW.a\n AND A.b = NEW.b )\nDO INSTEAD\n SELECT pg_abort_with_msg('There are not records with a = '||\n NEW.a || ' b = ' || NEW.b || ' in table A');\n\nCREATE RULE upd_a AS ON UPDATE TO A\nDO\nUPDATE B SET a = NEW.a, b = NEW.b\nWHERE a = OLD.a\nAND b = OLD.b;\n\nINSERT INTO A VALUES (1,1,2);\nINSERT INTO A VALUES (2,2,2);\nINSERT INTO A VALUES (3,1,2);\n\nINSERT INTO B VALUES (100,1,2);\nINSERT INTO B VALUES (110,1,2);\nINSERT INTO B VALUES (120,2,2);\nINSERT INTO B VALUES (130,2,2);\n\nUPDATE B SET a=4, b=4\nWHERE a=1 and b=2;\n#ERROR: \"There are not records with a=4 b=4 in table A\"\n\n(OK!!)\n\nUPDATE A SET a=4,b=4\nWHERE a=1 and b=2;\n#ERROR: \"There are not records with a=4 b=4 in table A\"\n\n(we don't want this ...)\n\n\nWell, if we replace upd_a by\n\n\nCREATE RULE upd_a AS ON UPDATE TO A\nDO\n(\nDEACTIVATE RULE upd_b;\nUPDATE B SET a = NEW.a, b = NEW.b\nWHERE a = OLD.a\nAND b = OLD.b;\n)\n\nUPDATE A SET a=4,b=4\nWHERE a=1 and b=2;\n\n#2 rows updated\n\nSELECT * FROM A;\n\n1 4 4\n2 2 2\n3 4 4\n\nSELECT * FROM B;\n\n100 4 4\n110 4 4\n120 2 2\n130 2 2\n\n(OK!)\n\n\nregards,\nJorge H. Doorn. Full professor\nLaura C. Rivero. Associate professor.\n\n\nTom Lane wrote:\n> \n> Sergio Pili <sergiop@sinectis.com.ar> writes:\n> >> A) It is related with situations where more than one rule is involved\n> >> and the second one requires completion of the first one. In our sort\n> >> of problems this happens frequently. This can be solved adding the\n> >> notion of \"disablement\" of the first rule within the re-writing of\n> >> the second rule when the first rule is not required since the\n> >> knowledge of the action of the second rule allows it. To do this, the\n> >> addition of two new commands is proposed: DEACTIVATE/ACTIVATE RULE.\n> \n> You haven't made a case at all for why this is a good idea, nor whether\n> the result couldn't be accomplished with some cleaner approach (no,\n> I don't think short-term disablement of a rule is a clean approach...)\n> Please give some examples that show why you think such a feature is\n> useful.\n> \n> >> B) The lack of a transaction abortion clause. (Chapter 17 Section 5\n> >> PostgreSQL 7.1 Programmer's Guide)\n> >> The addition of the function\n> >> pg_abort_with_msg(text)\n> >> which can be called from a SELECT is proposed.\n> \n> This seems straightforward enough, but again I'm bemused why you'd want\n> such a thing. Rules are sufficiently nonprocedural that it's hard to\n> see the point of putting deliberate error traps into them --- it seems\n> too hard to control whether the error occurs or not. I understand\n> reporting errors in procedural languages ... but all our procedural\n> languages already have error-raising mechanisms. For example, you could\n> implement this function in plpgsql as\n> \n> regression=# create function pg_abort_with_msg(text) returns int as\n> regression-# 'begin\n> regression'# raise exception ''%'', $1;\n> regression'# return 0;\n> regression'# end;' language 'plpgsql';\n> CREATE\n> regression=# select pg_abort_with_msg('bogus');\n> ERROR: bogus\n> regression=#\n> \n> Again, a convincing example of a situation where this is an appropriate\n> solution would go a long way towards making me see why the feature is\n> needed.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Thu, 15 Nov 2001 16:35:02 -0300",
"msg_from": "Sergio Pili <sergiop@sinectis.com.ar>",
"msg_from_op": true,
"msg_subject": "WAS: [Fwd: PostgreSQL new commands proposal]"
},
{
"msg_contents": "\"Sorry, but no coments about this?\n\nTom?\n\nregards,\nSergio.\"\n\nSergio Pili wrote:\n> \n> Hi!!\n> We are developing a project at the Universidad Nacional del Centro, in\n> Argentina. Sergio Pili, who has communicated with you previously, is\n> working with us. We are interested in the feature he is implementing:\n> rule activation and deactivation.\n> \n> With respect to the safeness of this deactivation, we can say that:\n> \n> - It can be executed just only from the action of the rule.\n> - The deactivated rule continues deactivated while the rewriting of the\n> query which executed that deactivation is done. This means that the\n> deactivation does not affect other queries. Moreover, the rule is\n> automatically reactivated when the rewrite process is finished.\n> - This feature avoids recursive activation.\n> \n> Example:\n> \n> CREATE TABLE A (aa int primary key, a int, b int);\n> CREATE TABLE B (bb int primary key,a int, b int);\n> \n> CREATE RULE upd_b AS ON UPDATE TO B\n> WHERE\n> NOT EXISTS (SELECT *\n> FROM A\n> WHERE A.a = NEW.a\n> AND A.b = NEW.b )\n> DO INSTEAD\n> SELECT pg_abort_with_msg('No existen registros con a = '||\n> NEW.a || ' b = ' || NEW.b || ' en la tabla A');\n> \n> CREATE RULE upd_a AS ON UPDATE TO A\n> DO\n> UPDATE B SET a = NEW.a, b = NEW.b\n> WHERE a = OLD.a\n> AND b = OLD.b;\n> \n> INSERT INTO A VALUES (1,1,2);\n> INSERT INTO A VALUES (2,2,2);\n> INSERT INTO A VALUES (3,1,2);\n> \n> INSERT INTO B VALUES (100,1,2);\n> INSERT INTO B VALUES (110,1,2);\n> INSERT INTO B VALUES (120,2,2);\n> INSERT INTO B VALUES (130,2,2);\n> \n> UPDATE B SET a=4, b=4\n> WHERE a=1 and b=2;\n> #ERROR: �There are not records with a=4 b=4 in table A�\n> \n> (OK!!)\n> \n> UPDATE A SET a=4,b=4\n> WHERE a=1 and b=2;\n> #ERROR: �There are not records with a=4 b=4 in table A�\n> \n> (we don�t want this ...)\n> \n> Well, if we replace upd_a by\n> \n> CREATE RULE upd_a AS ON UPDATE TO A\n> DO\n> (\n> DEACTIVATE RULE upd_b;\n> UPDATE B SET a = 
NEW.a, b = NEW.b\n> WHERE a = OLD.a\n> AND b = OLD.b;\n> )\n> \n> UPDATE A SET a=4,b=4\n> WHERE a=1 and b=2;\n> \n> #2 rows updated\n> \n> SELECT * FROM A;\n> \n> 1 4 4\n> 2 2 2\n> 3 4 4\n> \n> SELECT * FROM B;\n> \n> 100 4 4\n> 110 4 4\n> 120 2 2\n> 130 2 2\n> \n> (OK!)\n> \n> regards,\n> Jorge H. Doorn. Full professor\n> Laura C. Rivero. Associate professor.\n> \n> Tom Lane wrote:\n> >\n> > Sergio Pili <sergiop@sinectis.com.ar> writes:\n> > >> A) It is related to situations where more than one rule is involved\n> > >> and the second one requires completion of the first one. In our sort\n> > >> of problems this happens frequently. This can be solved by adding the\n> > >> notion of \"disablement\" of the first rule within the re-writing of\n> > >> the second rule when the first rule is not required, since the\n> > >> knowledge of the action of the second rule allows it. To do this, the\n> > >> addition of two new commands is proposed: DEACTIVATE/ACTIVATE RULE.\n> >\n> > You haven't made a case at all for why this is a good idea, nor whether\n> > the result couldn't be accomplished with some cleaner approach (no,\n> > I don't think short-term disablement of a rule is a clean approach...)\n> > Please give some examples that show why you think such a feature is\n> > useful.\n> >\n> > >> B) The lack of a transaction abortion clause. (Chapter 17 Section 5\n> > >> PostgreSQL 7.1 Programmer's Guide)\n> > >> The addition of the function\n> > >> pg_abort_with_msg(text)\n> > >> which can be called from a SELECT is proposed.\n> >\n> > This seems straightforward enough, but again I'm bemused why you'd want\n> > such a thing. Rules are sufficiently nonprocedural that it's hard to\n> > see the point of putting deliberate error traps into them --- it seems\n> > too hard to control whether the error occurs or not. I understand\n> > reporting errors in procedural languages ... but all our procedural\n> > languages already have error-raising mechanisms. 
For example, you could\n> > implement this function in plpgsql as\n> >\n> > regression=# create function pg_abort_with_msg(text) returns int as\n> > regression-# 'begin\n> > regression'# raise exception ''%'', $1;\n> > regression'# return 0;\n> > regression'# end;' language 'plpgsql';\n> > CREATE\n> > regression=# select pg_abort_with_msg('bogus');\n> > ERROR: bogus\n> > regression=#\n> >\n> > Again, a convincing example of a situation where this is an appropriate\n> > solution would go a long way towards making me see why the feature is\n> > needed.\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Mon, 19 Nov 2001 14:38:26 -0300",
"msg_from": "Sergio Pili <sergiop@sinectis.com.ar>",
"msg_from_op": true,
"msg_subject": "Re: WAS: [Fwd: PostgreSQL new commands proposal]"
},
{
"msg_contents": "On Thu, 15 Nov 2001, Sergio Pili wrote:\n\n> We are developing a project at the Universidad Nacional del Centro, in\n> Argentina. Sergio Pili, who has communicated with you previously, is\n> working with us. We are interested in the feature he is implementing:\n> rule activation and deactivation.\n>\n> With respect to the safeness of this deactivation, we can say that:\n>\n> - It can be executed just only from the action of the rule.\n> - The deactivated rule continues deactivated while the rewriting of the\n> query which executed that deactivation is done. This means that the\n> deactivation does not affect other queries. Moreover, the rule is\n> automatically reactivated when the rewrite process is finished.\n> - This feature avoids recursive activation.\n>\n> Example:\n>\n> CREATE TABLE A (aa int primary key, a int, b int);\n> CREATE TABLE B (bb int primary key,a int, b int);\n>\n> CREATE RULE upd_b AS ON UPDATE TO B\n> WHERE\n> NOT EXISTS (SELECT *\n> FROM A\n> WHERE A.a = NEW.a\n> AND A.b = NEW.b )\n> DO INSTEAD\n> SELECT pg_abort_with_msg('No existen registros con a = '||\n> NEW.a || ' b = ' || NEW.b || ' en la tabla A');\n>\n> CREATE RULE upd_a AS ON UPDATE TO A\n> DO\n> UPDATE B SET a = NEW.a, b = NEW.b\n> WHERE a = OLD.a\n> AND b = OLD.b;\n\nSince you asked for comments, I don't think this is\na terribly compelling example. It looks alot like a\nmulticolumn foreign key with on update cascade to\nme except that it's defined against a non-unique\nkey (meaning the update rule may not do what you really\nwant if there are duplicate rows in a that are matched),\nthe error message is more specific, and it looks less\ntransaction safe than the current foreign key\nimplementation (imagine one transaction deleting\na row in A and another updating B to point to that\nrow). 
Also, turning off the rule in this case is\nwrong, since if something else (a before trigger\nfor example) modifies the row in A before it's inserted\nI'm pretty sure you end up with a row in B that\ndoesn't match. I think there are probably useful\napplications of turning off rule expansion, but\nthis isn't it.\n\n\n",
"msg_date": "Mon, 26 Nov 2001 06:36:13 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: WAS: [Fwd: PostgreSQL new commands proposal]"
},
{
"msg_contents": "> Since you asked for comments, I don't think this is\n> a terribly compelling example. It looks alot like a\n> multicolumn foreign key with on update cascade to\n> me except that it's defined against a non-unique\n> key (meaning the update rule may not do what you really\n> want if there are duplicate rows in a that are matched),\n\nGood, that is exactly what is. It is a case of inclusion dependence. The\ninclusion dependences can be based on key (foreign key) or not based on\nkey.\nThe implementation of the cases of inclusion dependences not based on\nkey (as well as other types of dependences) not still been standardized\nand they are study matter in the academic atmospheres. If you are\ninterested, I can mention bibliography and references on these topics.\nThe specification of this type of dependences is not supported by any\nDBMS.\n\n> the error message is more specific, and it looks less\n> transaction safe than the current foreign key\n> implementation (imagine one transaction deleting\n> a row in A and another updating B to point to that\n> row). Also, turning off the rule in this case is\n> wrong, since if something else (a before trigger\n> for example) modifies the row in A before it's inserted\n> I'm pretty sure you end up with a row in B that\n> doesn't match. 
\n\nI don't know if I have understood correctly, but these rules were just an\nexample in which the deactivation of a rule was useful and necessary.\nFor complete control of the inclusion dependency it is also necessary\nto create rules that control the deletes on A and the inserts on B.\nIf this explanation doesn't satisfy you, please explain to me with an\nexample the problem that you are mentioning.\n\n> I think there are probably useful\n> applications of turning off rule expansion, but\n> this isn't it.\n\nAnother application of the deactivation would be the possibility of\navoiding recursion; for example, for the same case of the inclusion\ndependency, it would be possible to write:\n\nCREATE RULE upd_b AS ON UPDATE TO B\nWHERE\n NOT EXISTS (SELECT *\n FROM A\n WHERE A.a = NEW.a\n AND A.b = NEW.b )\nDO (DEACTIVATE RULE upd_b;\nUPDATE B SET a = NULL, b = NULL\nWHERE bb = OLD.bb;)\n\nThis rule would implement a possible \"SET NULL\" behavior for an update on B.\nI suppose that avoiding the recursion could still have a much wider use.\n\nMany thanks for the comments!\n\nbest regards,\n\nSergio.\n",
"msg_date": "Mon, 26 Nov 2001 19:35:21 -0300",
"msg_from": "Sergio Pili <sergiop@sinectis.com.ar>",
"msg_from_op": true,
"msg_subject": "Re: WAS: [Fwd: PostgreSQL new commands proposal]"
},
{
"msg_contents": "On Mon, 26 Nov 2001, Sergio Pili wrote:\n\n> > Since you asked for comments, I don't think this is\n> > a terribly compelling example. It looks alot like a\n> > multicolumn foreign key with on update cascade to\n> > me except that it's defined against a non-unique\n> > key (meaning the update rule may not do what you really\n> > want if there are duplicate rows in a that are matched),\n>\n> Good, that is exactly what is. It is a case of inclusion dependence. The\n> inclusion dependences can be based on key (foreign key) or not based on\n> key.\n>\n> The implementation of the cases of inclusion dependences not based on\n> key (as well as other types of dependences) not still been standardized\n> and they are study matter in the academic atmospheres. If you are\n> interested, I can mention bibliography and references on these topics.\n> The specification of this type of dependences is not supported by any\n> DBMS.\n\nI'd always be interested in interesting documents. :)\n\n> > the error message is more specific, and it looks less\n> > transaction safe than the current foreign key\n> > implementation (imagine one transaction deleting\n> > a row in A and another updating B to point to that\n> > row). 
Also, turning off the rule in this case is\n> > wrong, since if something else (a before trigger\n> > for example) modifies the row in A before it's inserted\n> > I'm pretty sure you end up with a row in B that\n> > doesn't match.\n>\n> I don't know if I have understood well but these rules single was an\n> example in which was useful and necessary the deactivation of a rule.\n> For the complete control of the inclusion dependence it is necessary\n> also to create rules that control the deletes on A and the inserts on B.\n> If this explanation doesn't satisfy you, please explain to me with an\n> example the problem that you are mentioning.\n\nThe delete/update thing is:\ntransaction 1 starts\ntransaction 2 starts\ntransaction 1 deletes a row from A\n -- There are no rows in B that can be seen by\n -- this transaction so you don't get any deletes.\ntransaction 2 updates a row in B\n -- The row in A can still be seen since it\n -- hasn't expired for transaction 2\ntransaction 1 commits\ntransaction 2 commits\n\nThe trigger thing is (I'm not 100% sure, but pretty sure this\nis what'll happen - given that a test rule with a\nfunction that prints a debugging statement gave me the\noriginally specified value not the final value)\ntransaction 1 starts\n you say update A key to 2,2\n - does cascade update of B as rule expansion to 2,2\n - before trigger on A sets NEW.key to 3,3\n - the row in A actually becomes 3,3\nYou'd no longer be checking the validity of the value\nof B and so you'd have a broken constraint.\n\n\n> > I think there are probably useful\n> > applications of turning off rule expansion, but\n> > this isn't it.\n>\n> Another application of the deactivation would be the possibility to\n> avoid the recursion, for example for the same case of the inclusion\n> dependence, it would be possible to make:\n>\n> CREATE RULE upd_b AS ON UPDATE TO B\n> WHERE\n> NOT EXISTS (SELECT *\n> FROM A\n> WHERE A.a = NEW.a\n> AND A.b = NEW.b )\n> DO (DEACTIVATE RULE upd_b;\n> 
UPDATE B SET a = NULL, b = NULL\n> WHERE bb = OLD.bb;)\n>\n> Rule that it would implement a possible \"SET NULL\" for an update on B.\n> I suppose that avoiding the recursi�n could still have a much wider use.\n\nAll in all I think you'd be better off with triggers than rules, but I\nunderstand what you're trying to accomplish.\n\n",
"msg_date": "Mon, 26 Nov 2001 14:50:21 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: WAS: [Fwd: PostgreSQL new commands proposal]"
},
{
"msg_contents": "Stephan Szabo wrote:\n> \n> On Mon, 26 Nov 2001, Sergio Pili wrote:\n> \n> > The implementation of the cases of inclusion dependences not based on\n> > key (as well as other types of dependences) not still been standardized\n> > and they are study matter in the academic atmospheres. If you are\n> > interested, I can mention bibliography and references on these topics.\n> > The specification of this type of dependences is not supported by any\n> > DBMS.\n> \n> I'd always be interested in interesting documents. :)\n\nCodd, E.: \"The Relational Model for Database Management\". Version 2.\nAddison Wesley Publishing Co. 1990\nAbiteboul, S.; Hull, R.; Vianu, V.: \"Foundations on Databases\". Addison\nWesley Publ. Co. 1995\nDate, C: \"Relational Databases, Selected Writings 1985-1989\". Addison\nWesley. Reprinted with corrections 1989.\nCasanova, M et al.: \"Optimization of relational schemes containing\ninclusion dependencies\". Proceedings of 15 VLDB Conference. Amsterdam,\n1989 pp.315-325.\n\n> The delete/update things is:\n> transaction 1 starts\n> transaction 2 starts\n> transaction 1 deletes a row from A\n> -- There are no rows in B that can be seen by\n> -- this transaction so you don't get any deletes.\n> transaction 2 updates a row in B\n> -- The row in A can still be seen since it\n> -- hasn't expired for transaction 2\n> transaction 1 commits\n> transaction 2 commits\n\n\nI understand. 
This happens because, with MVCC, writers don't\nblock readers...\nI don't like this much, but that is how MVCC works.\n\n\n> \n> The trigger thing is (I'm not 100% sure, but pretty sure this\n> is what'll happen - given that a test rule with a\n> function that prints a debugging statement gave me the\n> originally specified value not the final value)\n> transaction 1 starts\n> you say update A key to 2,2\n> - does cascade update of B as rule expansion to 2,2\n> - before trigger on A sets NEW.key to 3,3\n> - the row in A actually becomes 3,3\n> You'd no longer be checking the validity of the value\n> of B and so you'd have a broken constraint.\n> \n\nIf this is true, does that mean that rules can be bypassed\nusing before triggers?\nAren't the commands executed in triggers passed through the\nrewrite system?\n\n\n> All in all I think you'd be better off with triggers than rules, but I\n> understand what you're trying to accomplish.\n\nWe fully agree with you in the sense that our examples and inclusion\ndependencies may be handled entirely using triggers. In fact, we have\ndone this many times in several cases. The question here is not, for\nexample, \"how to preserve an inclusion dependency\" but \"which is the\nbetter way to preserve inclusion dependencies\".\nWe are so insistent on this matter because the level of abstraction (and\ngenerality) of rules is higher than that of triggers, and thus it becomes\neasier to express a real-world problem in a rule than in a trigger.\nPostgreSQL rules can \"almost\" be used for this sort of problem (we will\nnot bother you with the whole set of features that this approach would\nallow).\nIn this way, for just a minimal price, we may buy a wide new set of\ncapabilities. We assure you that this is a very good deal. If you want\nto discuss what those new capabilities are, we can send you a longer,\nmore detailed document on the subject.\n\nRegards,\n\nSergio Pili\n",
"msg_date": "Sat, 01 Dec 2001 20:21:00 -0300",
"msg_from": "Sergio Pili <sergiop@sinectis.com.ar>",
"msg_from_op": true,
"msg_subject": "Re: WAS: [Fwd: PostgreSQL new commands proposal]"
},
{
"msg_contents": "On Sat, 1 Dec 2001, Sergio Pili wrote:\n\n> [documents snipped]\n\nThanks.\n\n> > The delete/update things is:\n> > transaction 1 starts\n> > transaction 2 starts\n> > transaction 1 deletes a row from A\n> > -- There are no rows in B that can be seen by\n> > -- this transaction so you don't get any deletes.\n> > transaction 2 updates a row in B\n> > -- The row in A can still be seen since it\n> > -- hasn't expired for transaction 2\n> > transaction 1 commits\n> > transaction 2 commits\n>\n> I understand. This happens because with the MVCC, the writings don't\n> lock the readings...\n> I don't like a lot this but the MVCC works this way.\n\nYou can get this by doing row level locks with for update or table\nlocks, but you have to be careful to make sure to do it and AFAIK\nfor update doesn't work in subselects and table locks are much\nmuch too strong (for update is too strong as well, but it's less\ntoo strong - see arguments about the fk locking ;) )\n\n> > The trigger thing is (I'm not 100% sure, but pretty sure this\n> > is what'll happen - given that a test rule with a\n> > function that prints a debugging statement gave me the\n> > originally specified value not the final value)\n> > transaction 1 starts\n> > you say update A key to 2,2\n> > - does cascade update of B as rule expansion to 2,2\n> > - before trigger on A sets NEW.key to 3,3\n> > - the row in A actually becomes 3,3\n> > You'd no longer be checking the validity of the value\n> > of B and so you'd have a broken constraint.\n> >\n>\n> If this is true, does mean that the rules can be avoided\n> using before triggers?\n> Are not the commands executed in the triggers passed through the\n> re-writing system?\n\nBefore triggers have the option of actually changing the *actual*\ntuple to insert/update as I understand it. 
It's not that the\nbefore trigger runs an update (which wouldn't work because the\nrow isn't there) but that the before trigger can change the row\nbeing inserted (for example to add a timestamp) or negate\nthe insert/deletion/update entirely (returning NULL) which would mean\nthat you'd have rule things going off when the original operation\nwas canceled by trigger I believe.\n\n> > All in all I think you'd be better off with triggers than rules, but I\n> > understand what you're trying to accomplish.\n>\n> We fully agree with you in the sense that our examples and inclusion\n> dependencies may be totally handled using triggers. In fact, we have\n> done this many times in several cases. The question here is not, for\n> example, \"how to preserve an inclusion dependency\" but \"which is the\n> better way to preserve inclusion dependencies\".\n> We are so insistent on this matter because the level of abstraction (and\n> generality) of rules is higher than the triggers and thus it becomes\n> easier to express a real world problem in a rule than in a trigger.\n> PostgreSQL rules can \"almost\" be used for this sort of problems (we do\n> not bother you with the whole set of features that this approach will\n> allow).\n> In this way, for just a minimum price, we may buy a new wide set of\n> capabilities. We ensure you that this is a very good deal. If you want\n> to discuss which are those new capabilities, we can send you a large\n> more explicative document on the subject.\n\nWell, I'm not particularly the person you need to convince, since I don't\nhave a strong view on the functionality/patch in question :), I was just\npointing out that the example given wasn't likely to convince someone.\n\n",
"msg_date": "Sat, 1 Dec 2001 15:39:54 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: WAS: [Fwd: PostgreSQL new commands proposal]"
}
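Stephan's point above — that a BEFORE trigger receives the candidate row and may return a modified row, or NULL to cancel the operation entirely — can be modeled in a few lines. This is a hedged toy sketch of the concept in Python, not PostgreSQL's trigger machinery; the function and hook names are invented for illustration:

```python
def insert_with_before_triggers(table, row, triggers):
    """Toy model of BEFORE-trigger semantics: each trigger gets the
    candidate row and may return a modified row, or None to cancel
    the whole operation (like a trigger function returning NULL)."""
    for trigger in triggers:
        row = trigger(dict(row))  # the trigger sees (a copy of) NEW
        if row is None:
            return False          # operation silently skipped
    table.append(row)             # only the final, possibly rewritten,
    return True                   # row is actually stored

table = []
stamp = lambda r: {**r, "modified": True}         # rewrites the tuple
veto = lambda r: None if r.get("blocked") else r  # cancels some inserts

insert_with_before_triggers(table, {"id": 1}, [stamp, veto])
insert_with_before_triggers(table, {"id": 2, "blocked": True}, [stamp, veto])
print(table)  # [{'id': 1, 'modified': True}]
```

The second insert never reaches the table, mirroring the "rule things going off when the original operation was canceled by trigger" hazard Stephan describes: any constraint check that looked only at the pre-trigger values would be checking a row that was rewritten or never stored.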
] |
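As a rough sketch of the trigger-based alternative recommended in the thread above, the inclusion dependency on B's (a, b) pair can be enforced with ordinary triggers and no DEACTIVATE step at all. The sketch below uses Python's stdlib sqlite3 rather than PostgreSQL, purely for illustration; the trigger names mirror the thread's upd_a/upd_b rules, and the semantics of SQLite triggers are assumed, not PostgreSQL's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (aa INTEGER PRIMARY KEY, a INT, b INT);
CREATE TABLE b (bb INTEGER PRIMARY KEY, a INT, b INT);

-- Reject updates of B whose new (a, b) pair has no matching row in A.
CREATE TRIGGER upd_b BEFORE UPDATE ON b
WHEN NOT EXISTS (SELECT 1 FROM a WHERE a.a = NEW.a AND a.b = NEW.b)
BEGIN
    SELECT RAISE(ABORT, 'no records with matching (a, b) in table A');
END;

-- Cascade changes of A's (a, b) pair down to B, as the upd_a rule did.
CREATE TRIGGER upd_a AFTER UPDATE ON a
BEGIN
    UPDATE b SET a = NEW.a, b = NEW.b WHERE a = OLD.a AND b = OLD.b;
END;
""")
conn.executemany("INSERT INTO a VALUES (?, ?, ?)",
                 [(1, 1, 2), (2, 2, 2), (3, 1, 2)])
conn.executemany("INSERT INTO b VALUES (?, ?, ?)",
                 [(100, 1, 2), (110, 1, 2), (120, 2, 2), (130, 2, 2)])

# A direct update of B to a pair absent from A is rejected ...
rejected = False
try:
    conn.execute("UPDATE b SET a = 4, b = 4 WHERE a = 1 AND b = 2")
except sqlite3.IntegrityError:
    rejected = True

# ... while updating A cascades to B without any deactivation: the check
# trigger on B fires after A's row already holds its new values.
conn.execute("UPDATE a SET a = 4, b = 4 WHERE a = 1 AND b = 2")
print(rejected, sorted(conn.execute("SELECT a, b FROM b").fetchall()))
```

Because the AFTER trigger on A runs once A's row already contains the new pair, the check on B passes naturally, which is the effect the proposed DEACTIVATE RULE was meant to achieve. The MVCC race Stephan raises (a concurrent delete from A) is not addressed by this sketch.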
[
{
"msg_contents": "The interval test fails with the following msg:\n--- 216,222 ----\n -- known to change the allowed input syntax for type interval without\n -- updating pg_aggregate.agginitval\n select avg(f1) from interval_tbl;\n! server closed the connection unexpectedly\n! This probably means the server terminated abnormally\n! before or while processing the request.\n! connection to server was lost\n\nNo core file and I don't know which of the many error messages in\npostmaster.log are normal and which are not.\nIt works as expected on x86 linux and QNX4.\n\n-- \nBernd Tegge mailto:tegge@repas-aeg.de\nTel: ++49-511-87449-12 repas AEG Automation GmbH\nFax: ++49-511-87449-20 GS Hannover, Germany\n\n",
"msg_date": "Thu, 15 Nov 2001 20:54:12 +0100",
"msg_from": "\"Tegge, Bernd\" <tegge@repas-aeg.de>",
"msg_from_op": true,
"msg_subject": "Regression fails on Alpha True64 V5.0 for todays cvs "
}
] |
[
{
"msg_contents": "It seemed like the discussion a couple days ago ended without any\ndefinitive agreement on what to do. Since we're about to push out\n7.2beta3, we need to decide whether we're going to change the code\nnow, or wait another release cycle before doing anything.\n\nI would like to formally propose removing the \"triggered data change\"\nerror check (for details, see the patch I posted to pgsql-patches on\nMonday). My reasoning is that the present code is:\n\n1. broken --- it doesn't implement the spec.\n2. slow --- it causes a *major* performance hit when a long transaction\n updates many rows more than once in a table having foreign keys.\n3. not likely to be the basis for a correct solution --- AFAICT,\n the correct interpretation of \"triggered data change\" is not\n trigger-specific; it would be better handled as part of what we\n call time qual checking.\n\nPoint #2 is affecting some real-world applications I know of, and so\nI'd rather not wait another release cycle or more to offer a fix.\n\nI don't believe that removing the error check can break any applications\nthat are currently working, and so I see no real downside to taking this\ncode out.\n\nAny objections out there?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Nov 2001 19:01:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "So, do we want to remove the \"triggered data change\" code?"
},
{
"msg_contents": "On Thu, 15 Nov 2001, Tom Lane wrote:\n\n> Any objections out there?\n\nI have no argument against ripping it out.\n\n",
"msg_date": "Thu, 15 Nov 2001 16:31:53 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: So, do we want to remove the \"triggered data change\""
}
] |
[
{
"msg_contents": "I have in my code a SQL statement that does the following:\n\nselect period_start + interval('1 hour') from periods;\n\nThis worked in 7.1, but in 7.2 I am getting the following error:\n\nERROR: parser: parse error at or near \"'\"\n\nIf I remove the quotes then I get the following error:\n\nERROR: parser: parse error at or near \"hour\"\n\nWas this change from 7.1 to 7.2 intentional? If so, how should this be \ncoded in 7.2?\n\nthanks,\n--Barry\n\n",
"msg_date": "Thu, 15 Nov 2001 17:51:39 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "bug or change in functionality in 7.2?"
},
{
"msg_contents": "Well, the way I've always constructed these queries is:\n\nselect period_start + interval '1 hour' from periods;\n\nTry that. In fact, I believe the above is the correct SQL standard syntax?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Barry Lind\n> Sent: Friday, 16 November 2001 9:52 AM\n> To: pgsql-general@postgresql.org\n> Cc: PostgreSQL-development\n> Subject: [HACKERS] bug or change in functionality in 7.2?\n> \n> \n> I have in my code a SQL statement that does the following:\n> \n> select period_start + interval('1 hour') from periods;\n> \n> This worked in 7.1, but in 7.2 I am getting the following error:\n> \n> ERROR: parser: parse error at or near \"'\"\n> \n> If I remove the quotes then I get the following error:\n> \n> ERROR: parser: parse error at or near \"hour\"\n> \n> Was this change from 7.1 to 7.2 intentional? If so, how should this be \n> coded in 7.2?\n> \n> thanks,\n> --Barry\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Fri, 16 Nov 2001 10:58:06 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: bug or change in functionality in 7.2?"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> select period_start + interval('1 hour') from periods;\n> This worked in 7.1, but in 7.2 I am getting the following error:\n> ERROR: parser: parse error at or near \"'\"\n\n\"interval\" is a more reserved word than it used to be (\"timestamp\"\nis too). This is because interval(n) is now a type name, not a\nfunction name, because we now support SQL92's notion of precision\nspecs for intervals and timestamps. That means using \"interval\"\nas an unquoted function name doesn't work anymore.\n\nI concur with Christopher's recommendation: use the syntax\n\tinterval '1 hour'\nOther possibilities are\n\tcast('1 hour' as interval)\n\t\"interval\"('1 hour')\n\t'1 hour'::interval\nThe last two are Postgres-isms, the first two are SQL92 standard\nnotations that we'll try not to break in future.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 00:26:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2? "
},
{
"msg_contents": "This needs to be highlighted in the release notes/history/migration\ndocs, whatever. both interval() and timestamp(), since that was a\n(wrong) way to do casts, in the past.\n\nRoss\n\nOn Fri, Nov 16, 2001 at 12:26:40AM -0500, Tom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> > select period_start + interval('1 hour') from periods;\n> > This worked in 7.1, but in 7.2 I am getting the following error:\n> > ERROR: parser: parse error at or near \"'\"\n> \n> \"interval\" is a more reserved word than it used to be (\"timestamp\"\n> is too). This is because interval(n) is now a type name, not a\n> function name, because we now support SQL92's notion of precision\n> specs for intervals and timestamps. That means using \"interval\"\n> as an unquoted function name doesn't work anymore.\n> \n> I concur with Christopher's recommendation: use the syntax\n> \tinterval '1 hour'\n> Other possibilities are\n> \tcast('1 hour' as interval)\n> \t\"interval\"('1 hour')\n> \t'1 hour'::interval\n> The last two are Postgres-isms, the first two are SQL92 standard\n> notations that we'll try not to break in future.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Fri, 16 Nov 2001 09:24:24 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: bug or change in functionality in 7.2?"
},
{
"msg_contents": "Thanks for the quick help. I have changed my code accordingly.\n\n--Barry\n\nTom Lane wrote:\n\n> Barry Lind <barry@xythos.com> writes:\n> \n>>select period_start + interval('1 hour') from periods;\n>>This worked in 7.1, but in 7.2 I am getting the following error:\n>>ERROR: parser: parse error at or near \"'\"\n>>\n> \n> \"interval\" is a more reserved word than it used to be (\"timestamp\"\n> is too). This is because interval(n) is now a type name, not a\n> function name, because we now support SQL92's notion of precision\n> specs for intervals and timestamps. That means using \"interval\"\n> as an unquoted function name doesn't work anymore.\n> \n> I concur with Christopher's recommendation: use the syntax\n> \tinterval '1 hour'\n> Other possibilities are\n> \tcast('1 hour' as interval)\n> \t\"interval\"('1 hour')\n> \t'1 hour'::interval\n> The last two are Postgres-isms, the first two are SQL92 standard\n> notations that we'll try not to break in future.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n",
"msg_date": "Fri, 16 Nov 2001 09:29:00 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2?"
},
{
"msg_contents": "On Fri, 16 Nov 2001 00:26:40 EST, Tom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> > select period_start + interval('1 hour') from periods;\n> > This worked in 7.1, but in 7.2 I am getting the following error:\n> > ERROR: parser: parse error at or near \"'\"\n> \n> \"interval\" is a more reserved word than it used to be (\"timestamp\"\n> is too). This is because interval(n) is now a type name, not a\n> function name, because we now support SQL92's notion of precision\n> specs for intervals and timestamps. That means using \"interval\"\n> as an unquoted function name doesn't work anymore.\n> \n> I concur with Christopher's recommendation: use the syntax\n> \tinterval '1 hour'\n> Other possibilities are\n> \tcast('1 hour' as interval)\n> \t\"interval\"('1 hour')\n> \t'1 hour'::interval\n> The last two are Postgres-isms, the first two are SQL92 standard\n> notations that we'll try not to break in future.\n\n In my readings on the standard, the first one is _not_ SQL92\nstandard notation. Indeed, I may be incorrect since I do not have an\nactual copy of the SQL92 standard. I am basing my statements on Date/\nDarwin's \"A guide to the SQL Standard\", fourth edition. 
In that tome,\nthey state:\n\n----- cut -----\nday-time:\n\n Written as the key word INTERVAL, followed by a (day-time) interval\nstring consisting of an opening single quote, an optional sign, a\ncontinuous nonempty subsequence of dd, hh, mm, and ss[.[nnnnnn]] (with\na space separator between dd and the rest, if dd is specified, and\ncolon separators elsewhere), and a closing single quote, followed by\nthe appropriate \"start [TO end]\" specification.\n\n Examples:\n\n INTERVAL '1' \tMINUTE\n INTERVAL '2 12' DAY TO HOUR\n INTERVAL '2:12:35' HOUR TO SECOND\n INTERVAL '-4.50' SECOND\n\n----- cut -----\n\n In my experience with other databases, the notations indicated in\nthe Date/Darwin book do indeed work, whereas the PostgreSQL notation\n(with the closing single quote following the start to end\nspecification) does not work.\n\nThanks,\nF Harvell\n\n-- \nMr. F Harvell Phone: +1.407.673.2529\nFTS International Data Systems, Inc. Cell: +1.407.467.1919\n7457 Aloma Ave, Suite 302 Fax: +1.407.673.4472\nWinter Park, FL 32792 mailto:fharvell@fts.net\n\n\n",
"msg_date": "Mon, 19 Nov 2001 11:16:49 -0500",
"msg_from": "F Harvell <fharvell@fts.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2? "
},
{
"msg_contents": "F Harvell <fharvell@fts.net> writes:\n> In my experiences with other databases, the notations indicated in\n> the Date/Darwin book do indeed work whereas the PostgreSQL notation\n> (with the closing single quote following the start to end\n> specification) do not work.\n\nIn current sources:\n\nregression=# select INTERVAL '2:12:35' HOUR TO SECOND;\n interval\n----------\n 02:12:35\n(1 row)\n\nregression=# select INTERVAL '2:12:35 HOUR TO SECOND';\nERROR: Bad interval external representation '2:12:35 HOUR TO SECOND'\nregression=#\n\nLooks like Lockhart agrees with you ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Nov 2001 11:24:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2? "
},
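The day-time interval grammar quoted earlier in the thread (value string inside the single quotes, "start [TO end]" qualifier outside them) can be sketched as a toy parser. This is an illustrative Python approximation, not PostgreSQL's actual input routine; the function name is invented, and signs and field precision across multiple fields are deliberately ignored:

```python
import re
from datetime import timedelta

def parse_sql92_daytime_interval(literal: str, qualifier: str) -> timedelta:
    """Hypothetical parser for SQL92 day-time interval literals such as
    INTERVAL '2 12' DAY TO HOUR. Simplified: no sign handling across
    multiple fields, no leading-field precision checks."""
    fields = ["DAY", "HOUR", "MINUTE", "SECOND"]
    parts_of_qualifier = qualifier.upper().split(" TO ")
    start, end = parts_of_qualifier[0], parts_of_qualifier[-1]
    # The value string uses a space between dd and the rest, colons elsewhere.
    parts = re.split(r"[ :]", literal.strip())
    values = dict(zip(fields[fields.index(start):fields.index(end) + 1], parts))
    return timedelta(days=float(values.get("DAY", 0)),
                     hours=float(values.get("HOUR", 0)),
                     minutes=float(values.get("MINUTE", 0)),
                     seconds=float(values.get("SECOND", 0)))

print(parse_sql92_daytime_interval("2 12", "DAY TO HOUR"))       # 2 days, 12:00:00
print(parse_sql92_daytime_interval("2:12:35", "HOUR TO SECOND"))  # 2:12:35
```

Note that the qualifier is passed separately from the quoted value string, matching the standard form `INTERVAL '2:12:35' HOUR TO SECOND` that the regression output above shows PostgreSQL accepting, as opposed to the rejected form with the qualifier inside the quotes.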
{
"msg_contents": "On Mon, 19 Nov 2001 11:24:08 EST, Tom Lane wrote:\n> F Harvell <fharvell@fts.net> writes:\n> > In my experiences with other databases, the notations indicated in\n> > the Date/Darwin book do indeed work whereas the PostgreSQL notation\n> > (with the closing single quote following the start to end\n> > specification) do not work.\n> \n> In current sources:\n> \n> regression=# select INTERVAL '2:12:35' HOUR TO SECOND;\n> interval\n> ----------\n> 02:12:35\n> (1 row)\n> \n> regression=# select INTERVAL '2:12:35 HOUR TO SECOND';\n> ERROR: Bad interval external representation '2:12:35 HOUR TO SECOND'\n> regression=#\n> \n> Looks like Lockhart agrees with you ;-)\n> \n\nIf the above is true (i.e., errors on the second interval literal), it\nshould probably be mentioned in the release notes (HISTORY file?).\nWhile I eagerly anticipate the change and agree with it, it will break\na lot of my current code. I think this is (potentially) correct,\nhowever, it should be told to people who are using interval literals\nand anticipating to make the upgrade to 7.2.\n\n-- \nMr. F Harvell Phone: +1.407.673.2529\nFTS International Data Systems, Inc. Cell: +1.407.467.1919\n7457 Aloma Ave, Suite 302 Fax: +1.407.673.4472\nWinter Park, FL 32792 mailto:fharvell@fts.net\n\n\n",
"msg_date": "Mon, 19 Nov 2001 13:00:11 -0500",
"msg_from": "F Harvell <fharvell@fts.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2? "
},
{
"msg_contents": "> If the above is true (i.e., errors on the second interval literal), it\n> should probably be mentioned in the release notes (HISTORY file?).\n> While I eagerly anticipate the change and agree with it, it will break\n> a lot of my current code. I think this is (potentially) correct,\n> however, it should be told to people who are using interval literals\n> and anticipating to make the upgrade to 7.2.\n\nCan I have some text for HISTORY?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 16:06:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> If the above is true (i.e., errors on the second interval literal), it\n>> should probably be mentioned in the release notes (HISTORY file?).\n\n> Can I have some text for HISTORY?\n\nThomas would be the authority, but AFAIK this is new stuff; it doesn't\nbreak anything that worked before.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Nov 2001 16:39:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> If the above is true (i.e., errors on the second interval literal), it\n> >> should probably be mentioned in the release notes (HISTORY file?).\n> \n> > Can I have some text for HISTORY?\n> \n> Thomas would be the authority, but AFAIK this is new stuff; it doesn't\n> break anything that worked before.\n\nOh, OK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 16:41:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2?"
},
{
"msg_contents": "On Mon, 19 Nov 2001 16:41:11 EST, Bruce Momjian wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> If the above is true (i.e., errors on the second interval literal), it\n> > >> should probably be mentioned in the release notes (HISTORY file?).\n> > \n> > > Can I have some text for HISTORY?\n> > \n> > Thomas would be the authority, but AFAIK this is new stuff; it doesn't\n> > break anything that worked before.\n> \n> Oh, OK.\n\n Well, since I started this, I figured that I had best verify if\nthere is an issue. There appears to be _no_ issue. The exiting (7.1)\nfunctionality still works in 7.2. Sorry for the confusion.\n\n It might be reasonable, though, to mention in the types or\nenhancements section that the SQL92 interval literal syntax is now\nsupported. (It's implied but not spelled out as \"Add INTERVAL() YEAR\nTO MONTH (etc) syntax (Thomas)\".)\n\n BTW, many thanks to Thomas. This is a compatibility that I really\nappreciate.\n\n-- \nMr. F Harvell Phone: +1.407.673.2529\nFTS International Data Systems, Inc. Cell: +1.407.467.1919\n7457 Aloma Ave, Suite 302 Fax: +1.407.673.4472\nWinter Park, FL 32792 mailto:fharvell@fts.net\n\n\n",
"msg_date": "Tue, 20 Nov 2001 11:40:23 -0500",
"msg_from": "F Harvell <fharvell@fts.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] bug or change in functionality in 7.2? "
}
] |
[
{
"msg_contents": "ecpg's tests currently fail because the Makefile refers to a nonexistent\ntest file \"testdynalloc\". Did you forget to commit this file?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Nov 2001 22:56:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "ecpg test problem"
},
{
"msg_contents": "On Thu, Nov 15, 2001 at 10:56:43PM -0500, Tom Lane wrote:\n> ecpg's tests currently fail because the Makefile refers to a nonexistent\n> test file \"testdynalloc\". Did you forget to commit this file?\n\nThat's not the only change that's missing. As Christof pointed out all\nchanges to the preproc dir were also missing. I have no idea what happened\nespecially because my check-outs did not show any problem.\n\nAnyway, I committed the missing pieces, so all should work again.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 16 Nov 2001 09:43:06 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: ecpg test problem"
},
{
"msg_contents": "Michael Meskes wrote:\n\n> On Thu, Nov 15, 2001 at 10:56:43PM -0500, Tom Lane wrote:\n> > ecpg's tests currently fail because the Makefile refers to a nonexistent\n> > test file \"testdynalloc\". Did you forget to commit this file?\n>\n> That's not the only change that's missing. As Christof pointed out all\n> changes to the preproc dir were also missing. I have no idea what happened\n> especially because my check-outs did not show any problem.\n>\n> Anyway, I committed the missing pieces, so all should work again.\n\n(I haven't seen your change to preproc.y on committers, yet)\n\n================== related problem\n\nHmm. Perhaps my problem (I can't see the changes for preproc.y in the CVS\ntree) is related to the fact that I have to use anoncvs.\nLet's test ...\n\n---------------------- (cvs status)\nFile: preproc.y Status: Up-to-date\n\n Working revision: 1.168\n Repository revision: 1.168\n/projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\n Sticky Tag: (none)\n Sticky Date: (none)\n Sticky Options: (none)\n\n------------------------- (quoting committers)\n\nCVSROOT: /cvsroot\nModule name: pgsql\nChanges by: tgl@postgresql.org 01/11/15 23:08:34 <<<<<<<<<<\n\n------------ (cvs log preproc.y)\nrevision 1.168\ndate: 2001/11/10 22:31:49; author: tgl; state: Exp; lines: +406 -359\n<<<<<<<<\n\nOh dear.\nanoncvs seems to be somewhat behind.\n\n Christof\n\n\n",
"msg_date": "Fri, 16 Nov 2001 10:01:46 +0100",
"msg_from": "Christof Petig <christof@petig-baender.de>",
"msg_from_op": false,
"msg_subject": "Re: ecpg test problem"
},
{
"msg_contents": "Christof Petig wrote:\n\n> Michael Meskes wrote:\n>\n> > On Thu, Nov 15, 2001 at 10:56:43PM -0500, Tom Lane wrote:\n> > > ecpg's tests currently fail because the Makefile refers to a nonexistent\n> > > test file \"testdynalloc\". Did you forget to commit this file?\n> >\n> > That's not the only change that's missing. As Christof pointed out all\n> > changes to the preproc dir were also missing. I have no idea what happened\n> > especially because my check-outs did not show any problem.\n> >\n> > Anyway, I committed the missing pieces, so all should work again.\n>\n> (I haven't seen your change to preproc.y on committers, yet)\n\nmeskes@postgresql.org wrote:\n\n> CVSROOT: /cvsroot\n> Module name: pgsql\n> Changes by: meskes@postgresql.org 01/11/16 03:36:37\n>\n> Modified files:\n> src/interfaces/ecpg/preproc: extern.h preproc.y variable.c\n> Added files:\n> src/interfaces/ecpg/test: testdynalloc.pgc\n>\n> Log message:\n> Committed again to add the missing files/patches.\n\nWow.\nNow I have to wait for anoncvs.\n\nChristof\n\n\n",
"msg_date": "Fri, 16 Nov 2001 10:11:40 +0100",
"msg_from": "Christof Petig <christof@petig-baender.de>",
"msg_from_op": false,
"msg_subject": "Re: ecpg test problem"
},
{
"msg_contents": "On Fri, Nov 16, 2001 at 10:01:46AM +0100, Christof Petig wrote:\n> Hmm. Perhaps my problem (I can't see the changes for preproc.y in the CVS\n> tree) is related to the fact that I have to use anoncvs.\n> Let's test ...\n\nNo, I removed my files and did a new full checkout and they weren#t there\neither.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 16 Nov 2001 12:38:11 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: ecpg test problem"
},
{
"msg_contents": "Christof Petig <christof@petig-baender.de> writes:\n> anoncvs seems to be somewhat behind.\n\nI believe the anoncvs machine is supposed to sync with the master every\nhour. If you're seeing more than an hour's delay, complain to Marc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 10:17:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: ecpg test problem "
},
{
"msg_contents": "> ecpg's tests currently fail because the Makefile refers to a nonexistent\n> test file \"testdynalloc\". Did you forget to commit this file?\n\nHas this been solved? Does it need to be added to the open items list?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 13:59:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg test problem"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> ecpg's tests currently fail because the Makefile refers to a nonexistent\n>> test file \"testdynalloc\". Did you forget to commit this file?\n\n> Has this been solved? Does it need to be added to the open items list?\n\nIt's fixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 14:03:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: ecpg test problem "
}
] |
[
{
"msg_contents": "Hi,\n I'm testing PostgreSQL, version 7.1.2, and I create a table, test1, with these kind of fields:\n\nCREATE TABLE test1 (\n id integer,\n content varchar\n);\n\nwith 800.000 records, and a btree index setted on the id field.\n\nI noticed the query uses the index for \"=\" and \"<\" operators, but if the value used for the \"<\" operator is higher (600.000 for example), the query makes a seq scan of the table, and the index never works with the \">\" operator. Am I managing the indexes well ?\n\nThanks for the help.\n\nCiao Paolo\n\n\n\n\n\nPaolo Cassago\n\nTalentManager\nMilan, Paris, Madrid\n\nTel: +39 02 83 11 23 1\nFax: +39 02 700 43 99 81\nMob: +39 348 82 155 81\n\n",
"msg_date": "16 Nov 2001 09:05:56 +0000",
"msg_from": "Paolo Cassago <paolo.cassago@talentmanager.com>",
"msg_from_op": true,
"msg_subject": "Btree doesn't work with \">\" condition"
},
{
"msg_contents": "\n\nPaolo Cassago wrote:\n\n>Hi,\n> I'm testing PostgreSQL, version 7.1.2, and I create a table, test1, with these kind of fields:\n>\n>CREATE TABLE test1 (\n> id integer,\n> content varchar\n>);\n>\n>with 800.000 records, and a btree index setted on the id field.\n>\n>I noticed the query uses the index for \"=\" and \"<\" operators, but if the value used for the \"<\" operator is higher (600.000 for example), the query makes a seq scan of the table, and the index never works with the \">\" operator. Am I managing the indexes well ?\n>\nHave you run VACUUM ANALYZE lately ?\n\nWhat does EXPLAIN think of the costs ?\n\n-------------\nHannu\n\n",
"msg_date": "Wed, 21 Nov 2001 00:54:30 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Btree doesn't work with \">\" condition"
}
] |
[
{
"msg_contents": "-------- Ursprüngliche Nachricht --------\nBetreff: Re: [HACKERS] import/export of large objects on server-side\nVon: \"Klaus Reger\" <K.Reger@twc.de>\nAn: <tgl@sss.pgh.pa.us>\n\n> Use the client-side LO import/export functions, instead.\n>\n>ok, i've read the config.h and the sources. I agree that this can be a\n>security hole. But for our application we need lo-access from\n>PL/PGSQL-Procedures (explicitly on the server). We have to check out\n>documents, work with them and then check the next version in.\n>\n>Whats about an configuration-file entry, in the matter\n>LO_DIR=/directory or none (which is the default).\n>For our product we want to be compatible with the original sources of Pg,\n>avoiding own patches in every new version.\n\nHi,\n\nI've made a patch, that introduces an entry in the PostgreSQL-config file.\nYou can set a drirectory, where all imports/exports can happen. If nothing\nis set (the default), no imports/exports on the server-side are allowed.\n\nTo enhance the security, no reading/writung is allowed from/to non-regular\nfiles (block-devs, symlinks, etc.)\n\nI hope, that this patch is secure enough and will be integrated.\n\nRegards, Klaus",
"msg_date": "Fri, 16 Nov 2001 13:33:51 +0100 (CET)",
"msg_from": "\"Klaus Reger\" <K.Reger@twc.de>",
"msg_from_op": true,
"msg_subject": "Re: import/export of large objects on server-side"
},
{
"msg_contents": "\"Klaus Reger\" <K.Reger@twc.de> writes:\n> I've made a patch, that introduces an entry in the PostgreSQL-config file.\n> You can set a drirectory, where all imports/exports can happen. If nothing\n> is set (the default), no imports/exports on the server-side are allowed.\n> To enhance the security, no reading/writung is allowed from/to non-regular\n> files (block-devs, symlinks, etc.)\n\nThis is trivially defeatable, assuming that the \"import/export\"\ndirectory is world writable (if it isn't, importing will be tough).\nExample: say imp/exp directory is\n\n\t/var/spool/impexp\n\nBad guy wants to read/write Postgres-owned file, say\n\n\t/usr/local/pgsql/data/pg_hba.conf\n\nAll he need do is\n\n\tln -s /usr/local/pgsql/data /var/spool/impexp/link\n\nand then ask to lo_read or lo_write\n\n\t/var/spool/impexp/link/pg_hba.conf\n\nwhich will be allowed since it's a regular file.\n\nOr, even simpler, ask to read/write\n\n\t/var/spool/impexp/../../../usr/local/pgsql/data/pg_hba.conf\n\nWhile you could patch around these particular attacks by further\nrestricting the filenames, the bottom line is that server-side LO\noperations are just inherently insecure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 10:29:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: import/export of large objects on server-side "
},
{
"msg_contents": "> \"Klaus Reger\" <K.Reger@twc.de> writes:\n>> I've made a patch, that introduces an entry in the PostgreSQL-config\n>> file. You can set a drirectory, where all imports/exports can happen.\n>> If nothing is set (the default), no imports/exports on the server-side\n>> are allowed. To enhance the security, no reading/writung is allowed\n>> from/to non-regular files (block-devs, symlinks, etc.)\n>\n> This is trivially defeatable, assuming that the \"import/export\"\n> directory is world writable (if it isn't, importing will be tough).\n...\n> While you could patch around these particular attacks by further\n> restricting the filenames, the bottom line is that server-side LO\n> operations are just inherently insecure.\n>\n> \t\t\tregards, tom lane\n\nOk, you're right, but is it acceptable, to configure this, using the\nconfigfile, rather than with a compile-option?\n\nRegards, Klaus\n\n\n",
"msg_date": "Fri, 16 Nov 2001 17:02:13 +0100 (CET)",
"msg_from": "\"Klaus Reger\" <K.Reger@twc.de>",
"msg_from_op": true,
"msg_subject": "Re: import/export of large objects on server-side"
},
{
"msg_contents": "\"Klaus Reger\" <K.Reger@twc.de> writes:\n> Ok, you're right, but is it acceptable, to configure this, using the\n> configfile, rather than with a compile-option?\n\nThe patch as given isn't any more secure than just enabling\nALLOW_DANGEROUS_LO_FUNCTIONS, so I for one will vote against\napplying it.\n\nI'm still unconvinced that there's any need to create a server-side LO\nimport/export loophole. Client-side LO operations are inherently safer,\nand that's the direction you should be looking in for a solution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 11:12:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: import/export of large objects on server-side "
},
{
"msg_contents": "On Fri, Nov 16, 2001 at 05:02:13PM +0100, Klaus Reger wrote:\n> > \"Klaus Reger\" <K.Reger@twc.de> writes:\n> >> I've made a patch, that introduces an entry in the PostgreSQL-config\n> >> file. You can set a drirectory, where all imports/exports can happen.\n> >> If nothing is set (the default), no imports/exports on the server-side\n> >> are allowed. To enhance the security, no reading/writung is allowed\n> >> from/to non-regular files (block-devs, symlinks, etc.)\n> >\n> > This is trivially defeatable, assuming that the \"import/export\"\n> > directory is world writable (if it isn't, importing will be tough).\n>\n> ...\n> > While you could patch around these particular attacks by further\n> > restricting the filenames, the bottom line is that server-side LO\n> > operations are just inherently insecure.\n> >\n> > \t\t\tregards, tom lane\n> \n> Ok, you're right, but is it acceptable, to configure this, using the\n> configfile, rather than with a compile-option?\n\n You can always use client-site LO operations without this restriction.\n IMHO server-site LO operations is needless and a little dirty feature.\n\n May by add to our privilege system support for LO operations too. But\n our current privilege system is very inflexible for changes1...\n\n Karel\n \n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 20 Nov 2001 09:45:31 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: import/export of large objects on server-side"
}
] |
[
{
"msg_contents": "\n> It seemed like the discussion a couple days ago ended without any\n> definitive agreement on what to do. Since we're about to push out\n> 7.2beta3, we need to decide whether we're going to change the code\n> now, or wait another release cycle before doing anything.\n> \n> I would like to formally propose removing the \"triggered data change\"\n> error check (for details, see the patch I posted to pgsql-patches on\n> Monday). My reasoning is that the present code is:\n> \n> 1. broken --- it doesn't implement the spec.\n> 2. slow --- it causes a *major* performance hit when a long\ntransaction\n> updates many rows more than once in a table having foreign keys.\n> 3. not likely to be the basis for a correct solution --- AFAICT,\n> the correct interpretation of \"triggered data change\" is not\n> trigger-specific; it would be better handled as part of what we\n> call time qual checking.\n\nyes, to all above.\n\n> \n> Point #2 is affecting some real-world applications I know of, and so\n> I'd rather not wait another release cycle or more to offer a fix.\n\nI have tried hard to fully understand the issue, and think that your \nproposed patch is correct.\n\nThe whole check was only done for tuples modified inside this\ntransaction,\nthus while I have the feeling that Tatsuo's concerns and above point 3\nare \nvalid, they are not affected by this patch.\n\nI thus think you should apply the patch.\n\nAndreas\n",
"msg_date": "Fri, 16 Nov 2001 13:57:15 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: So, do we want to remove the \"triggered data change\" code?"
}
] |
[
{
"msg_contents": "It is sort of discouraging that Postgres' default configuration is so\nconservative. A tweek here and there can make a big difference. It seems to me\nthat the default postgresql.conf should not be used for a dedicated server. In\nfact, it can't because TCP/IP is disabled.\n\nIn my projects I have done the default stuff, increase buffers, sort memory,\nand so on, however, some of the tunable parameters seem a bit arcane and are\nnot completely clear what they do or the effect they may have. (some have no\nnoticable effect, eventhough it looks as if they should.) I think most users,\nparticularly those new to SQL databases in general, would find it difficult to\ntune Postgres.\n\nDoes anyone think it is a good idea, to make a postgresql.conf cookbook sort of\nthing? Gather a number of tuned config files, annotated as to why the settings\nare set the way they are, and the machine on which they run.\n\nParticularly, I'd like to see if someone has been able to really understand and\nmanipulate the planner COST options successfully.\n\nAlternatively, it should be possible to write a program that analyzes a target\nsystem, asks questions like: \"Is this a dedicated server?\" \"How much ram do you\nhave?\" \"On which volume will the database be installed?\" Then perform some\ntests that mimic the cost values, and create a new postgresql.conf with the\noptions tuned.\n",
"msg_date": "Fri, 16 Nov 2001 08:45:27 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Super Optimizing Postgres"
},
{
"msg_contents": "Sure, that'd be useful. Especially since we all can discuss what things\nwork, what doesn't work. \n\n\nOn Fri, 16 Nov 2001, mlw wrote:\n\n> It is sort of discouraging that Postgres' default configuration is so\n> conservative. A tweek here and there can make a big difference. It seems to me\n> that the default postgresql.conf should not be used for a dedicated server. In\n> fact, it can't because TCP/IP is disabled.\n> \n> In my projects I have done the default stuff, increase buffers, sort\n> memory, and so on, however, some of the tunable parameters seem a bit\n> arcane and are not completely clear what they do or the effect they\n> may have. (some have no noticable effect, eventhough it looks as if\n> they should.) I think most users, particularly those new to SQL\n> databases in general, would find it difficult to tune Postgres.\n> \n> Does anyone think it is a good idea, to make a postgresql.conf\n> cookbook sort of thing? Gather a number of tuned config files,\n> annotated as to why the settings are set the way they are, and the\n> machine on which they run.\n> \n> Particularly, I'd like to see if someone has been able to really\n> understand and manipulate the planner COST options successfully.\n> \n> Alternatively, it should be possible to write a program that analyzes\n> a target system, asks questions like: \"Is this a dedicated server?\"\n> \"How much ram do you have?\" \"On which volume will the database be\n> installed?\" Then perform some tests that mimic the cost values, and\n> create a new postgresql.conf with the options tuned.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n",
"msg_date": "Fri, 16 Nov 2001 11:26:11 -0500 (EST)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
},
{
"msg_contents": "Didn't Bruce do a document on this exact topic? I'm not sure it's everything \nyou are looking for, but it might help you get started.\n\n> Sure, that'd be useful. Especially since we all can discuss what things\n> work, what doesn't work.\n>\n> On Fri, 16 Nov 2001, mlw wrote:\n> > It is sort of discouraging that Postgres' default configuration is so\n> > conservative. A tweek here and there can make a big difference. It seems\n> > to me that the default postgresql.conf should not be used for a dedicated\n> > server. In fact, it can't because TCP/IP is disabled.\n> >\n> > In my projects I have done the default stuff, increase buffers, sort\n> > memory, and so on, however, some of the tunable parameters seem a bit\n> > arcane and are not completely clear what they do or the effect they\n> > may have. (some have no noticable effect, eventhough it looks as if\n> > they should.) I think most users, particularly those new to SQL\n> > databases in general, would find it difficult to tune Postgres.\n> >\n> > Does anyone think it is a good idea, to make a postgresql.conf\n> > cookbook sort of thing? 
Gather a number of tuned config files,\n> > annotated as to why the settings are set the way they are, and the\n> > machine on which they run.\n> >\n> > Particularly, I'd like to see if someone has been able to really\n> > understand and manipulate the planner COST options successfully.\n> >\n> > Alternatively, it should be possible to write a program that analyzes\n> > a target system, asks questions like: \"Is this a dedicated server?\"\n> > \"How much ram do you have?\" \"On which volume will the database be\n> > installed?\" Then perform some tests that mimic the cost values, and\n> > create a new postgresql.conf with the options tuned.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n",
"msg_date": "Fri, 16 Nov 2001 13:52:06 -0600",
"msg_from": "matthew@zeut.net",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
},
{
"msg_contents": "> Didn't Bruce do a document on this exact topic? I'm not sure it's everything \n> you are looking for, but it might help you get started.\n\nSure:\n\n\thttp://techdocs.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 15:43:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
},
{
"msg_contents": "----- Original Message ----- \nFrom: mlw <markw@mohawksoft.com>\nSent: Friday, November 16, 2001 8:45 AM\n\n> Does anyone think it is a good idea, to make a postgresql.conf cookbook sort of\n> thing? Gather a number of tuned config files, annotated as to why the settings\n> are set the way they are, and the machine on which they run.\n\nSounds like a very good idea, but... who's gonna be in charge of it?\n\n> Alternatively, it should be possible to write a program that analyzes a target\n> system, asks questions like: \"Is this a dedicated server?\" \"How much ram do you\n> have?\" \"On which volume will the database be installed?\" Then perform some\n> tests that mimic the cost values, and create a new postgresql.conf with the\n> options tuned.\n\nThis program sounds like a little rule-based expert system-like software :)\n\"Tune your PostgreSQL server with a pgTune expert system!\" an ad\nwould be voicing out loud...\n\n--\nSerguei A. Mokhov\n\n",
"msg_date": "Fri, 16 Nov 2001 16:03:53 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Didn't Bruce do a document on this exact topic? I'm not sure it's everything\n> > you are looking for, but it might help you get started.\n> \n> Sure:\n> \n> http://techdocs.postgresql.org\n\nQuestion:\n\nDoes sort memory come out of shared? I don't think so (would it need too?), but\n\"Cache Size and Sort Size \" seems to imply that it does.\n\nAlso, you don't go into the COST variables. If what is documented about them is\ncorrect, they are woefully incorrect with a modern machine.\n\nWould a 1.3 ghz Athlon really have a cpu_operator_cost of 0.0025? That would\nimply that that computer could process 2500 conditionals in the time it would\ntake to make a sequential read. If Postgres is run on a 10K RPM disk vs a 5.4K\nRPM disk on two different machines with the same processor and speed, these\nnumbers can't hope to be right, one should be about twice as high as the other.\n\nThat said, do these numbers really affect the planner all that much?\n",
"msg_date": "Fri, 16 Nov 2001 17:06:38 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Super Optimizing Postgres"
},
{
"msg_contents": "> Does sort memory come out of shared? I don't think so (would it\n> need too?), but \"Cache Size and Sort Size \" seems to imply that\n> it does.\n\nSort comes from per-backend memory, not shared. Of course, both\nper-backend and shared memory come from the same pool of RAM, if that's\nwhat you mean. Could it be made clearer?\n\n> Also, you don't go into the COST variables. If what is documented\n> about them is correct, they are woefully incorrect with a modern\n> machine.\n\nYou mean:\n\t\n\t#random_page_cost = 4\n\t#cpu_tuple_cost = 0.01\n\t#cpu_index_tuple_cost = 0.001\n\t#cpu_operator_cost = 0.0025\n\nThos are relative, of course. We are always looking for better numbers.\n\n> Would a 1.3 ghz Athlon really have a cpu_operator_cost of 0.0025?\n> That would imply that that computer could process 2500 conditionals\n> in the time it would take to make a sequential read. If Postgres\n> is run on a 10K RPM disk vs a 5.4K RPM disk on two different\n> machines with the same processor and speed, these numbers can't\n> hope to be right, one should be about twice as high as the other.\n\nAgain, are the correct relative to each other.\n\n> That said, do these numbers really affect the planner all that\n> much?\n\nSure do effect the planner. That is how index scan vs sequential and\njoin type are determined.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 17:13:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Does sort memory come out of shared? I don't think so (would it\n> > need too?), but \"Cache Size and Sort Size \" seems to imply that\n> > it does.\n> \n> Sort comes from per-backend memory, not shared. Of course, both\n> per-backend and shared memory come from the same pool of RAM, if that's\n> what you mean. Could it be made clearer?\n\nActually, in most cases, RAM is ram, and shared ram is just the same ram.\nHowever, on some cluster environments, shared ram is a different memory pool\nthan process ram.\n\nIn your section: \"Cache Size and Sort Size\" You talk about both and shared\nmemory, but make no distinction about which uses what. I would suggest an\nexplicit sentence about how Cache comes from the shared memory pool and Sort\ncomes from the process memory pool.\n\n\n> \n> > Also, you don't go into the COST variables. If what is documented\n> > about them is correct, they are woefully incorrect with a modern\n> > machine.\n> \n> You mean:\n> \n> #random_page_cost = 4\n> #cpu_tuple_cost = 0.01\n> #cpu_index_tuple_cost = 0.001\n> #cpu_operator_cost = 0.0025\n> \n> Thos are relative, of course. We are always looking for better numbers.\n> \n> > Would a 1.3 ghz Athlon really have a cpu_operator_cost of 0.0025?\n> > That would imply that that computer could process 2500 conditionals\n> > in the time it would take to make a sequential read. If Postgres\n> > is run on a 10K RPM disk vs a 5.4K RPM disk on two different\n> > machines with the same processor and speed, these numbers can't\n> > hope to be right, one should be about twice as high as the other.\n> \n> Again, are the correct relative to each other.\n\nThey can't possibly be correct. If We have two identical machines where the\nonly difference is the disk subsystem, one has a 10K RPM SCSI system, and the\nother is a 5.4K RPM IDE disk. 
There is no way these settings can be accurate.\n\n> \n> > That said, do these numbers really affect the planner all that\n> > much?\n> \n> Sure do effect the planner. That is how index scan vs sequential and\n> join type are determined.\n\nOK, then it should be fairly straight forward to make a profiler for Postgres\nto set these parameters.\n\nSequential and random read test, these are a no brainer.\n\nThe cpu costs are not so easy. I don't have a very good idea about what they\n\"really\" mean. I have a guess, but not enough to make a benchmark routine.\n\nIf someone who REALLY knows could detail a test routine for each of the cpu\ncost types. I could write a program that will spit out what the numbers should\nbe.\n\nI envision:\n\npgprofile /u01/postgres/test.file\n\nAnd that would output something like:\n\nrandom_page_cost = 2\ncpu_tuple_cost = 0.00344\ncpu_index_tuple_cost = 0.00234\ncpu_operator_cost = 0.00082\n",
"msg_date": "Fri, 16 Nov 2001 18:36:11 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Super Optimizing Postgres"
},
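The four knobs quoted above only make sense relative to one another: the cost of one sequential page read is defined as 1.0, and everything else is scaled from it. A toy Python sketch of how such relative costs pick a plan (invented formulas, much simpler than the planner's real costsize.c arithmetic):

```python
# Toy illustration of how the planner's cost knobs combine (NOT the real
# PostgreSQL cost functions). A sequential page read defines cost 1.0;
# all other knobs are fractions of it, so only their ratios matter.
RANDOM_PAGE_COST = 4.0
CPU_TUPLE_COST = 0.01
CPU_OPERATOR_COST = 0.0025

def seqscan_cost(pages, tuples, quals=1):
    # Read every page sequentially, evaluate quals on every tuple.
    return pages + tuples * (CPU_TUPLE_COST + quals * CPU_OPERATOR_COST)

def indexscan_cost(matching_tuples, quals=1):
    # Crude model: one random page fetch per matching tuple.
    return matching_tuples * (RANDOM_PAGE_COST + CPU_TUPLE_COST
                              + quals * CPU_OPERATOR_COST)

# A 10,000-page table with 1,000,000 tuples, 0.1% selectivity:
table_pages, table_tuples = 10_000, 1_000_000
print(f"{seqscan_cost(table_pages, table_tuples):.1f}")   # 22500.0
print(f"{indexscan_cost(1_000):.1f}")                     # 4012.5
```

Doubling every knob changes nothing here, which is the sense in which the settings are "correct relative to each other": the plan choice depends only on the ratios.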
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Also, you don't go into the COST variables. If what is documented\n> about them is correct, they are woefully incorrect with a modern\n> machine.\n\nThe numbers seemed in the right ballpark when I experimented with them\na year or two ago. Keep in mind that all these things are quite fuzzy,\ngiven that we never know for sure whether a read() request to the kernel\nis going to cause actual I/O or be satisfied from kernel cache. One\nshould not mistake \"operator\" for \"addition instruction\", either ---\nat the very least, there are several levels of function call overhead\ninvolved. And using one cost number for all Postgres operators is\nobviously a simplification of reality anyhow.\n\n> Would a 1.3 ghz Athlon really have a cpu_operator_cost of 0.0025? That\n> would imply that that computer could process 2500 conditionals in the\n> time it would take to make a sequential read. If Postgres is run on a\n> 10K RPM disk vs a 5.4K RPM disk on two different machines with the\n> same processor and speed, these numbers can't hope to be right, one\n> should be about twice as high as the other.\n\nWe've talked in the past about autoconfiguring these numbers, but I have\nnot seen any proposals for automatically deriving trustworthy numbers\nin a reasonable period of time. There's too much uncertainty and noise\nin any simple test. (I spent literally weeks convincing myself that the\ncurrent numbers were reasonable.)\n\nBut having said all that, it's true that CPU speed has been increasing\nmuch faster than disk speed over the last few years. If you feel like\nreducing the CPU cost numbers, try it and see what happens.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 18:45:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres "
},
{
"msg_contents": "On Fri, 16 Nov 2001, mlw wrote:\n\n> Sequential and random read test, these are a no brainer.\n> \n> The cpu costs are not so easy. I don't have a very good idea about what they\n> \"really\" mean. I have a guess, but not enough to make a benchmark routine.\n> \n> If someone who REALLY knows could detail a test routine for each of the cpu\n> cost types. I could write a program that will spit out what the numbers should\n> be.\n> \n> I envision:\n> \n> pgprofile /u01/postgres/test.file\n> \n> And that would output something like:\n> \n> random_page_cost = 2\n> cpu_tuple_cost = 0.00344\n> cpu_index_tuple_cost = 0.00234\n> cpu_operator_cost = 0.00082\n\nActually, it could be done if the 'EXPLAIN EXACTLY' was implemented. Such\na command would give you same output as explain plus precise timings each\nstep took. Idea was floated in the list awhile ago. I think the problem\nwith it was properly separating borders of queries, but still, it'd cool\n\n-alex\n\n",
"msg_date": "Fri, 16 Nov 2001 18:47:47 -0500 (EST)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
},
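The per-step timing Alex describes amounts to instrumenting each plan node so it accumulates the wall-clock time spent pulling rows through it. A minimal Python sketch of that wrapper pattern (purely illustrative; the node names are invented and this has nothing to do with the executor's actual C structures):

```python
import time

class Instrumented:
    """Wrap a row source and accumulate the wall time spent pulling rows
    through it, roughly what a timing EXPLAIN would report per plan node."""
    def __init__(self, name, rows):
        self.name = name
        self.rows = iter(rows)
        self.total = 0.0   # seconds spent in this node (and its children)
        self.count = 0     # rows produced

    def __iter__(self):
        return self

    def __next__(self):
        start = time.perf_counter()
        try:
            row = next(self.rows)
        finally:
            self.total += time.perf_counter() - start
        self.count += 1
        return row

# A two-node "plan": a scan node feeding a filter node.
scan = Instrumented("SeqScan", range(1000))
filt = Instrumented("Filter", (r for r in scan if r % 2 == 0))
result = list(filt)

for node in (filt, scan):   # inclusive times, outermost node first
    print(f"{node.name}: {node.count} rows in {node.total * 1000:.3f} ms")
```

The times are inclusive (the filter's total contains the scan's), which is also how one would naturally report a nested plan tree.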
{
"msg_contents": "Bruce Momjian wrote:\n> \n<snip> \n> You mean:\n> \n> #random_page_cost = 4\n> #cpu_tuple_cost = 0.01\n> #cpu_index_tuple_cost = 0.001\n> #cpu_operator_cost = 0.0025\n> \n> Thos are relative, of course. We are always looking for better numbers.\n> \n> > Would a 1.3 ghz Athlon really have a cpu_operator_cost of 0.0025?\n> > That would imply that that computer could process 2500 conditionals\n> > in the time it would take to make a sequential read. If Postgres\n> > is run on a 10K RPM disk vs a 5.4K RPM disk on two different\n> > machines with the same processor and speed, these numbers can't\n> > hope to be right, one should be about twice as high as the other.\n> \n> Again, are the correct relative to each other.\n\nI think it's an interesting thought of having a program which will test\na system and work out the Accurate and Correct values for this.\n\nIt could become of enormous tuning help. Another thought is to have\nPostgreSQL tune these parameters as it goes.\n\n+ Justin\n\n<snip>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sat, 17 Nov 2001 15:12:39 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> I think it's an interesting thought of having a program which will test\n> a system and work out the Accurate and Correct values for this.\n\nI think if you start out with the notion that there is an Accurate\nand Correct value for these parameters, you've already lost the game.\nThey're inherently fuzzy numbers because they are parameters of an\n(over?) simplified model of reality.\n\nIt would be interesting to try to fit the model to reality on a wide\nvariety of queries, machines & operating environments, and see what\nnumbers we come up with. But there's always going to be a huge fuzz\nfactor involved. Because of that, I'd be *real* wary of any automated\ntuning procedure. Without a good dollop of human judgement in the loop,\nan automated parameter-setter will likely go off into never-never land\n:-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Nov 2001 00:14:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres "
},
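Tom's idea of fitting the model to reality can at least be sketched: given measured runtimes for queries with known page and tuple counts, ordinary least squares recovers per-page and per-tuple coefficients, and their ratio is the relative cpu_tuple_cost. The observations below are synthetic numbers chosen to fit exactly; real measurements would carry exactly the noise Tom warns about:

```python
# Toy least-squares fit of a two-parameter cost model: time ~ a*pages + b*tuples.
# Observations are synthetic (invented, exact by construction); a real fit
# would use measured query runtimes and be far fuzzier.
obs = [
    (100, 10_000, 0.4),   # (pages read, tuples processed, seconds)
    (400, 10_000, 1.0),
    (100, 50_000, 1.2),
    (400, 50_000, 1.8),
]

# Normal equations for the two unknowns a (sec/page) and b (sec/tuple).
Spp = sum(p * p for p, t, y in obs)
Spt = sum(p * t for p, t, y in obs)
Stt = sum(t * t for p, t, y in obs)
Spy = sum(p * y for p, t, y in obs)
Sty = sum(t * y for p, t, y in obs)

det = Spp * Stt - Spt * Spt
a = (Spy * Stt - Spt * Sty) / det   # seconds per sequential page
b = (Spp * Sty - Spt * Spy) / det   # seconds per tuple

print(f"per-page: {a:.6f}s  per-tuple: {b:.8f}s")
print(f"implied cpu_tuple_cost: {b / a:.4f}")   # ratio to a sequential page read
```

With these invented numbers the fit yields a = 0.002 s/page and b = 0.00002 s/tuple, so b/a = 0.01, coincidentally the shipped default for cpu_tuple_cost.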
{
"msg_contents": "Tom Lane wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> > I think it's an interesting thought of having a program which will test\n> > a system and work out the Accurate and Correct values for this.\n> \n> I think if you start out with the notion that there is an Accurate\n> and Correct value for these parameters, you've already lost the game.\n\nPardon me, but this is a very scary statement. If you believe that this is\ntrue, then the planner/optimizer is inherently flawed.\n\nIf the numbers are meaningless, they should not be used. If the numbers are not\nmeaningless, then they must be able to be tuned.\n\nIn my example, two computers with exactly the same hardware, except one has a\n5400 RPM IDE drive, the other has a 10,000 RPM IDE drive. These machines should\nnot use the same settings, it is obvious that a sequential scan block read on\none will be faster than the other.\n\n> They're inherently fuzzy numbers because they are parameters of an\n> (over?) simplified model of reality.\n\nVery true, this also scares me. Relating processing time with disk I/O seems\nlike a very questionable approach these days. Granted, this strategy was\nprobably devised when computers systems were a lot simpler, but today with\ninternal disk caching, cpu instruction caches, pipelining, L2 caching\ntechniques, clock multiplication, RAID controllers, and so on, the picture is\nfar more complicated.\n\nThat being said, a running system should have a \"consistent\" performance which\nshould be measurable.\n\n> \n> It would be interesting to try to fit the model to reality on a wide\n> variety of queries, machines & operating environments, and see what\n> numbers we come up with. But there's always going to be a huge fuzz\n> factor involved. Because of that, I'd be *real* wary of any automated\n> tuning procedure. 
Without a good dollop of human judgement in the loop,\n> an automated parameter-setter will likely go off into never-never land\n> :-(\n\nIt is an interesting problem:\n\nIt appears that \"sequential scan\" is the root measure for the system.\nEverything is biased off that. A good \"usable\" number for sequential scan would\nneed to be created.\n\nWorking on the assumption that a server will have multiple back-ends running,\nwe will start (n) threads (Or forks). (n) will be tunable by the admin based on\nthe expected concurrency of their system. There would two disk I/O test\nroutines, one which performs a sequential scan, one which performs a series of\nrandom page reads.\n\n(n)/2 threads would perform sequential scans.\n(n)/2 threads would perform random page reads.\n\n(Perhaps we can even have the admin select the ratio between random and\nsequential? Or let the admin choose how many of each?)\n\nThe result of the I/O profiling would be to get a reasonable average number of\nmicroseconds it takes to do a sequential scan and a ratio to random page reads.\nThis will be done on files who's size is larger than the available memory of\nthe machine to ensure the files do not stay in the OS cache. Each routine will\nmake several iterations before quitting.\n\nThis should produce a reasonable picture of the user's system in action.\n\nWe could then take standard code profiling techniques against representative\ntest routines to do each of the cpu_xxx modules and compare the profile of the\nroutines against the evaluated time of a sequential scan.\n\nThe real trick is the code profiling \"test routines.\" What kind of processing\ndo the cpu_xxx settings represent?\n",
"msg_date": "Sat, 17 Nov 2001 09:24:58 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Super Optimizing Postgres"
},
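mlw's proposal of (n)/2 sequential and (n)/2 random reader threads can be sketched in a few lines of Python (names and defaults invented; as the message says, a real run must use a file larger than RAM, otherwise the OS cache makes the ratio come out near 1):

```python
import os, random, tempfile, threading, time

PAGE = 8192  # PostgreSQL page size

def profile(path, n_threads=4, pages_per_thread=256):
    """Rough sketch of the proposed pgprofile: half the threads read the file
    sequentially, half read random pages; returns average seconds per page for
    each mode. (A real profiler needs a file larger than RAM, iterated several
    times, so the OS cache doesn't lie.)"""
    n_pages = os.path.getsize(path) // PAGE
    results = {"seq": [], "rand": []}
    lock = threading.Lock()

    def worker(mode):
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            for i in range(pages_per_thread):
                if mode == "seq":
                    f.seek((i % n_pages) * PAGE)
                else:
                    f.seek(random.randrange(n_pages) * PAGE)
                f.read(PAGE)
        with lock:
            results[mode].append((time.perf_counter() - t0) / pages_per_thread)

    threads = [threading.Thread(target=worker, args=(m,))
               for m in ["seq", "rand"] * (n_threads // 2)]
    for t in threads: t.start()
    for t in threads: t.join()
    return (sum(results["seq"]) / len(results["seq"]),
            sum(results["rand"]) / len(results["rand"]))

# Demo on a small scratch file (fully cached, so expect a ratio near 1.0):
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(PAGE * 512))
seq, rnd = profile(f.name)
print(f"random_page_cost estimate: {rnd / seq:.2f}")
os.unlink(f.name)
```

This measures only the I/O side of the proposal; the cpu_xxx profiling half, which the message leaves open, is the genuinely hard part.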
{
"msg_contents": "mlw wrote:\n> \n> Tom Lane wrote:\n> >\n> > Justin Clift <justin@postgresql.org> writes:\n> > > I think it's an interesting thought of having a program which will test\n> > > a system and work out the Accurate and Correct values for this.\n> >\n> > I think if you start out with the notion that there is an Accurate\n> > and Correct value for these parameters, you've already lost the game.\n\nI believe we can evolve and refine the model so it becomes more\naccurate. We at least have to be willing to try otherwise I think THAT\nis where we lose the game. :)\n\n<snip>\n> In my example, two computers with exactly the same hardware, except one has a\n> 5400 RPM IDE drive, the other has a 10,000 RPM IDE drive. These machines should\n> not use the same settings, it is obvious that a sequential scan block read on\n> one will be faster than the other.\n\nIf we're going to do this bit properly, then we'll have to take into\nconsideration many database objects will need their own individual\nstatistics. For example, lets say we have a database with a bunch of\n10k rpm SCSI drives which the tables are on, and the system also has one\nor more 15k rpm SCSI drives (lets say a Seagate Cheetah II drives) on\nwhich the indices have been placed. With the 10k rpm drives, the tables\nneeding the fastest throughput or having the highest usage are put on\nthe outer edges of the disk media, and the rest of the tables are placed\nin the available space.\n\nOn this theoretical system, we will be better off measuring the\nperformance of each table and index in turn then generating and storing\ncosts for each one which are as \"accurate as possible at this point in\ntime\". A model like this would probably have these costs re-calculated\neach time the ANALYZE command is run to ensure their accuracy through\ndatabase growth and changes.\n\nI think this would be decently accurate, and RAID systems would be\naccurately analysed. 
Don't know how to take into account large cache\nsizes though. :)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sun, 18 Nov 2001 02:26:48 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
},
{
"msg_contents": "\n\nJustin Clift wrote:\n\n>>In my example, two computers with exactly the same hardware, except one has a\n>>5400 RPM IDE drive, the other has a 10,000 RPM IDE drive. These machines should\n>>not use the same settings, it is obvious that a sequential scan block read on\n>>one will be faster than the other.\n>>\n>\n>If we're going to do this bit properly, then we'll have to take into\n>consideration many database objects will need their own individual\n>statistics. For example, lets say we have a database with a bunch of\n>10k rpm SCSI drives which the tables are on, and the system also has one\n>or more 15k rpm SCSI drives (lets say a Seagate Cheetah II drives) on\n>which the indices have been placed. With the 10k rpm drives, the tables\n>needing the fastest throughput or having the highest usage are put on\n>the outer edges of the disk media, and the rest of the tables are placed\n>in the available space.\n>\n>On this theoretical system, we will be better off measuring the\n>performance of each table and index in turn then generating and storing\n>costs for each one which are as \"accurate as possible at this point in\n>time\".\n>\nThat would mean that these statistic values must be stored in pg_class \nand not be SET\nvariables at all.\nThis will probably have the added benefit that some cacheing effects of \nsmall/big tables\nwill be accounted for automatically so you dont have to do that in \noptimizer.\n\n> A model like this would probably have these costs re-calculated\n>each time the ANALYZE command is run to ensure their accuracy through\n>database growth and changes.\n>\nThen the ANALYZE should be run on both tables and indexes. AFAIK we \ncurrently\nanalyze only real data.\n\n>\n>I think this would be decently accurate, and RAID systems would be\n>accurately analysed. Don't know how to take into account large cache\n>sizes though. 
:)\n>\nMaybe some volatile statistict on how much of table \"may be\" cached in \nthe disk/fs\ncaches, assuming that we currently know how much of each is in shared \nmemory.\n\n-----------\nHannu\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sun, 18 Nov 2001 11:39:11 +0500",
"msg_from": "Hannu Krosing <hannu@sid.tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Super Optimizing Postgres"
}
]
[
{
"msg_contents": "Tom Lane suggested I look at EXPLAIN output, which showed\n that both the catalog (fast delete case) and toasted text\ntable (slow delete case) were using sequential scans when\ndeleting any significant number of records.\\\n\nBut even with sequential scan, the catalog entries are\ndeleted quickly (30K records in just a couple of seconds),\nvice slow deletes (2 per second) for the toasted text.\n\nThe catalog entries are about 200 bytes (integers, timestamps,\na couple of short fixed length strings), while the toasted\ntext table has one short text field, one timestamp, and one\nlong (2K to 20K bytes) toasted text field.\n\nBoth will use index scans when a very small number (< 1%)\nof records would be selected. But relative delete performance\nstays the same.\n-- \nP. J. \"Josh\" Rovero Sonalysts, Inc.\nEmail: rovero@sonalysts.com www.sonalysts.com 215 Parkway North\nWork: (860)326-3671 or 442-4355 Waterford CT 06385\n***********************************************************************\n\n",
"msg_date": "Fri, 16 Nov 2001 08:59:03 -0500",
"msg_from": "\"P.J. \\\"Josh\\\" Rovero\" <rovero@sonalysts.com>",
"msg_from_op": true,
"msg_subject": "Delete Performance"
},
{
"msg_contents": "\"P.J. \\\"Josh\\\" Rovero\" <rovero@sonalysts.com> writes:\n> [ complains that deletes are slow in table containing toasted data ]\n\nI did some experimentation here and found a rather surprising\ndependency: the time to delete a bunch of data is pretty much\ndirectly proportional to the disk space it occupies. This says\nthat we're paying through the nose for having XLOG make copies\nof about-to-be-modified pages.\n\nI did:\n\ncreate table foo (f1 text);\ninsert into foo values ('a short entry');\ninsert into foo select * from foo;\n-- repeat enough times to build up 32K rows total\ndelete from foo;\n\nThe \"delete\" took about 2 seconds. I then did it over with the\n'value' being a 5K chunk of text, which according to octet_length\ngot compressed to 3900 bytes. (This'd require two rows in the TOAST\ntable.) This time the delete took 127 seconds. I was expecting\nabout a 3X penalty since we needed to delete three rows not one,\nbut what I got was a 60X penalty.\n\nTrying to understand this, I did some profiling and found that most\nof the time was going into XLogInsert and XLOG I/O. That's when I\nremembered that the actual data volume involved is considerably\ndifferent in the two cases. Allowing for tuple header overhead and\nso forth, the small-data case involves about 1.8MB, the large-data\ncase about 131MB, or about 70 times as much data.\n\nI believe this indicates that what's determining the runtime is the fact\nthat the XLOG code writes out an image of each page modified in the\ntransaction. 
These page images will be the bulk of the XLOG traffic\nfor the TOAST table (since there are only four or so tuples on each\nTOAST page, the actual XLOG delete records take little space by\ncomparison).\n\nI've worried for some time that the decision to XLOG page images was\ncosting us a lot more performance than could be justified...\n\nOne trick we could perhaps pull is to postpone deletion of TOAST tuples\nuntil VACUUM, so that the bulk of the work is done in a noncritical path\n(from the point of view of the application anyway). I'm not sure how\nthis interacts with the way that we re-use a TOAST entry when other\nfields in the row are updated, however. It might be too difficult for\nVACUUM to tell when to delete a TOAST item.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 19:53:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "TOAST performance (was Re: [GENERAL] Delete Performance)"
},
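Tom's volume figures can be sanity-checked with back-of-the-envelope arithmetic; the per-row byte counts below are inferred from the totals in the message, not measured:

```python
# Back-of-the-envelope check of the data volumes quoted above.
rows = 32 * 1024        # "32K rows total"
small_row = 56          # assumed bytes/row incl. tuple header (inferred from "about 1.8MB")
big_row = 4000          # ~3900 toasted bytes plus overhead (inferred from "about 131MB")

small_total = rows * small_row
big_total = rows * big_row

print(small_total / 1e6)               # ~1.8 (MB)
print(big_total / 1e6)                 # ~131 (MB)
print(round(big_total / small_total))  # ~71, Tom's "about 70 times as much data"
```

The 60X delete slowdown tracking the ~70X data-volume ratio is what points the finger at per-page WAL traffic rather than per-row delete records.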
{
"msg_contents": "Tom Lane wrote\n\n>\n>I did some experimentation here and found a rather surprising\n>dependency: the time to delete a bunch of data is pretty much\n>directly proportional to the disk space it occupies. This says\n>that we're paying through the nose for having XLOG make copies\n>of about-to-be-modified pages.\n>\nAt least now I know I wasn't imagining things.... :-)\n\nWhich brings up the question, what is the best way to deal with many\nthousands of variable-length binary chunks. Net input == net output\nover the course of a day. The new vacuum should help (both lo_ and\ntoasted tables take a long time to vacuum full), but I'm running into\nthe \"Hotel California\" situation. Data goes in fast, but can't be\ndeleted fast enough to keep the database from continuously growing\nin size.\n\n\n\n\n",
"msg_date": "Sat, 17 Nov 2001 01:44:24 +0000",
"msg_from": "Josh Rovero <rovero@sonalysts.com>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "> The \"delete\" took about 2 seconds. I then did it over with the\n> 'value' being a 5K chunk of text, which according to octet_length\n> got compressed to 3900 bytes. (This'd require two rows in the TOAST\n> table.) This time the delete took 127 seconds. I was expecting\n> about a 3X penalty since we needed to delete three rows not one,\n> but what I got was a 60X penalty.\n\nWow. Can someone remind me why we take page images on delete? We\naren't really writing anything special to the page except a transction\nid.\n\n> I've worried for some time that the decision to XLOG page images was\n> costing us a lot more performance than could be justified...\n\nIs it because we take a snapshot of the page before we write it in case\nwe only write part of the page?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 21:01:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is it because we take a snapshot of the page before we write it in case\n> we only write part of the page?\n\nAFAIR, the partial-page-write problem is the entire reason for doing it.\nIf we could be certain that writes to datafile pages were atomic, we'd\nnot need this.\n\nOf course we can't be certain of that. But I'm wondering if there isn't\na cheaper solution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 21:07:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance) "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is it because we take a snapshot of the page before we write it in case\n> > we only write part of the page?\n> \n> AFAIR, the partial-page-write problem is the entire reason for doing it.\n> If we could be certain that writes to datafile pages were atomic, we'd\n> not need this.\n> \n> Of course we can't be certain of that. But I'm wondering if there isn't\n> a cheaper solution.\n\nCould we add code to detect a partial write when we recover from one\nusing WAL so we can know if these partial writes are ever\nhappening?\n\n\nI am with you on this. There has to be a better way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 21:11:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Could we add code to detect a partial write when we recover from one\n> using WAL so we can know if these partial writes are ever\n> happening?\n\nWhat's your point? It clearly *can* happen during power-failure\nscenarios. All the monitoring in the world won't disprove that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 21:15:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance) "
},
{
"msg_contents": "\"P.J. \\\"Josh\\\" Rovero\" <rovero@sonalysts.com> writes:\n> But even with sequential scan, the catalog entries are\n> deleted quickly (30K records in just a couple of seconds),\n> vice slow deletes (2 per second) for the toasted text.\n\n> The catalog entries are about 200 bytes (integers, timestamps,\n> a couple of short fixed length strings), while the toasted\n> text table has one short text field, one timestamp, and one\n> long (2K to 20K bytes) toasted text field.\n\nI observed over in pg-hackers that deletion speed seems to be\nproportional to total volume of data deleted, but that's not enough\nto explain your results. You're reporting a 10000X speed difference\nwith only 10-100X difference in data volume, so there's still a large\nfactor to be accounted for.\n\nAre you sure you don't have any rules, triggers, foreign keys involving\nthe slower table?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Nov 2001 21:21:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Delete Performance "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Could we add code to detect a partial write when we recover from one\n> > using WAL so we can know if these partial writes are ever\n> > happening?\n> \n> What's your point? It clearly *can* happen during power-failure\n> scenarios. All the monitoring in the world won't disprove that.\n\nMy point is uh, um, eh, I think it is a very important point that I\nshould make ... um. :-)\n\nSeriously, how do OS's handle partial page write, especially to\ndirectories?\n\nAnother item I was considering is that INSERT and UPDATE, because they\nappend to the tables, don't really cause lots of pre-page writes, while\nDELETE could affect all page in a table and would require pre-page\nwrites on all of them. \n\nHowever, deletes are only marking the XID status of the rows. \nUnfortunately I can't think of a way of recording those new XID's in WAL\nand preventing a possible failure while the XID's are written to the\npage. Can someone help me here?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 21:44:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seriously, how do OS's handle partial page write, especially to\n> directories?\n\n... fsck ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Nov 2001 00:02:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seriously, how do OS's handle partial page write, especially to\n> > directories?\n> \n> ... fsck ...\n\nBut how can it handle partial writes to a directory when many files\nexist in that single block?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Nov 2001 00:08:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seriously, how do OS's handle partial page write, especially to\n> > directories?\n\n\nI realize UPDATE also requires pre-page writes for the old tuples. What\nbothers me is that unlike INSERT and UPDATE of new rows, DELETE and\nUPDATE of old rows is not writing new data but just setting transaction\nID's. I wish there was a way to store those XID's somewhere else so the\npage wouldn't have to be modified.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Nov 2001 00:09:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "Tom Lane wrote:\n\n\n> \n> I observed over in pg-hackers that deletion speed seems to be\n> proportional to total volume of data deleted, but that's not enough\n> to explain your results. You're reporting a 10000X speed difference\n> with only 10-100X difference in data volume, so there's still a large\n> factor to be accounted for.\n> \n> Are you sure you don't have any rules, triggers, foreign keys involving\n> the slower table?\n\n\nHmm, there is a foreign key defined in the \"fast\" table:\n\nCREATE TABLE grib_catalog (\n edition INTEGER NOT NULL CHECK(edition IN(1, 2)),\n discipline INTEGER,\n generating_center INTEGER NOT NULL CHECK(generating_center \nBETWEEN 7 AND 99),\n sub_center INTEGER NOT NULL,\n scale_factor INTEGER,\n grib_product_id INTEGER REFERENCES grib_product,\n prod_category INTEGER CHECK (prod_category BETWEEN 0 AND 19),\n grib_model_id INTEGER REFERENCES grib_model,\n run_time TIMESTAMP NOT NULL,\n fcst_time INTEGER NOT NULL CHECK(fcst_time >= 0),\n grib_region_id INTEGER REFERENCES grib_region,\n level INTEGER NOT NULL,\n level_units CHAR(8) NOT NULL,\n projection CHAR(16) NOT NULL,\n bmp_usage BOOLEAN NOT NULL,\n wx_usage BOOLEAN NOT NULL,\n gds_usage BOOLEAN NOT NULL,\n file_name TEXT ,\n parse_time TIMESTAMP ,\n gds_offset INTEGER CHECK(gds_offset >= 0),\n pds_offset INTEGER NOT NULL CHECK(pds_offset >= 0),\n drs_offset INTEGER CHECK(drs_offset >= 0),\n ds_offset INTEGER NOT NULL CHECK(ds_offset >= 0),\n bms_offset INTEGER CHECK(bms_offset >= 0),\n PRIMARY \nKEY(discipline,generating_center,sub_center,grib_product_id,grib_model_id,\n run_time,fcst_time,grib_region_id,level,bmp_usage,gds_usage),\n FOREIGN KEY (file_name,parse_time) REFERENCES grib_file\n);\n\nwhich results in pg_dump reporting an unnamed delete trigger. 
I guess this\nmeans that a delete on grib_file refers back to grib_catalog\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER DELETE ON \"grib_file\" FROM \n\"grib_catalog\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE \nPROCEDURE \"RI_FKey_noaction_del\" ('<unnamed>', 'grib_catalog', \n'grib_file', 'UNSPECIFIED', 'file_name', 'name', 'parse_time', \n'parse_time');\n\nWill reformulate without the foreign key and see if this helps.\n\n-- \nP. J. \"Josh\" Rovero Sonalysts, Inc.\nEmail: rovero@sonalysts.com www.sonalysts.com 215 Parkway North\nWork: (860)326-3671 or 442-4355 Waterford CT 06385\n***********************************************************************\n\n",
"msg_date": "Mon, 19 Nov 2001 08:59:02 -0500",
"msg_from": "\"P.J. \\\"Josh\\\" Rovero\" <rovero@sonalysts.com>",
"msg_from_op": true,
"msg_subject": "Re: Delete Performance"
},
{
"msg_contents": "\n\nJosh Rovero wrote:\n\n> Tom Lane wrote\n>\n>>\n>> I did some experimentation here and found a rather surprising\n>> dependency: the time to delete a bunch of data is pretty much\n>> directly proportional to the disk space it occupies. This says\n>> that we're paying through the nose for having XLOG make copies\n>> of about-to-be-modified pages.\n>\nCan't we somehow WAL only metadata and not the actual pages for\nDELETEs - as delete is essentially (though currently not technically)\njust metadata it should be a possible thing to do.\n\n>> ------------------\n>\nHannu\n\n\n",
"msg_date": "Tue, 20 Nov 2001 02:11:22 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "On Tue, 2001-11-20 at 10:11, Hannu Krosing wrote:\n> \n> > Tom Lane wrote\n> >\n> >> I did some experimentation here and found a rather surprising\n> >> dependency: the time to delete a bunch of data is pretty much\n> >> directly proportional to the disk space it occupies. This says\n> >> that we're paying through the nose for having XLOG make copies\n> >> of about-to-be-modified pages.\n> >\n> Can't we somehow WAL only metadata and not the actual pages for\n> DELETEs - as delete is essentially (though currently not technically)\n> just metadata it should be a possible thing to do.\n\nIs it possible to do ordered writes, the way ext3 does?\n\nhttp://www-106.ibm.com/developerworks/linux/library/l-fs7/\n\nIs an interesting article discussing the approach.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n--------------------------------------------------------------------\nAndrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n\n",
"msg_date": "20 Nov 2001 14:23:24 +1300",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete"
},
{
"msg_contents": "Andrew McMillan wrote:\n> \n> On Tue, 2001-11-20 at 10:11, Hannu Krosing wrote:\n> >\n> > > Tom Lane wrote\n> > >\n> > >> I did some experimentation here and found a rather surprising\n> > >> dependency: the time to delete a bunch of data is pretty much\n> > >> directly proportional to the disk space it occupies. This says\n> > >> that we're paying through the nose for having XLOG make copies\n> > >> of about-to-be-modified pages.\n> > >\n> > Can't we somehow WAL only metadata and not the actual pages for\n> > DELETEs - as delete is essentially (though currently not technically)\n> > just metadata it should be a possible thing to do.\n> \n> Is it possible to do ordered writes, the way ext3 does?\n\nI remember it being discussed on this list that you have very little \ncontrol over writing order if you operate above filesystem/cache level.\n\n> http://www-106.ibm.com/developerworks/linux/library/l-fs7/\n\nI guess that is the article that sparked the idea of journalling only \nmetadata for deletes (including the delete half of update)\n\nUsing the Journaling Block Device described there could actually be \na good (though currently not portable) solution if you run linux.\n\n-------------\nHannu\n",
"msg_date": "Tue, 20 Nov 2001 09:31:19 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Could we add code to detect a partial write when we recover from one\n> > using WAL so we can know if these partial writes are ever\n> > happening?\n> \n> What's your point? It clearly *can* happen during power-failure\n> scenarios. All the monitoring in the world won't disprove that.\n\nWhat bothers me about this is that we have the original page with the\nold data. It would be nice if we could write the new page in a\ndifferent location, make the new page active and recycle the old page at\nsome later time.\n\nWe are storing the pre-page image in WAL, but it seems like a waste\nbecause we already have a pre-image. It is just that we are overwriting\nit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 19:11:08 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What bothers me about this is that we have the original page with the\n> old data. It would be nice if we could write the new page in a\n> different location, make the new page active and recycle the old page at\n> some later time.\n\nI don't see how that reduces the total amount of disk traffic?\n\nIt's also kind of unclear how to do it without doubling (or worse) the\namount of table space used in many common scenarios. I doubt many\npeople will be happy if \"DELETE FROM foo\" requires transient space equal\nto twice the original size of foo.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Nov 2001 19:16:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance) "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What bothers me about this is that we have the original page with the\n> > old data. It would be nice if we could write the new page in a\n> > different location, make the new page active and recycle the old page at\n> > some later time.\n> \n> I don't see how that reduces the total amount of disk traffic?\n> \n> It's also kind of unclear how to do it without doubling (or worse) the\n> amount of table space used in many common scenarios. I doubt many\n> people will be happy if \"DELETE FROM foo\" requires transient space equal\n> to twice the original size of foo.\n\nWell, right now we write the pre-image to WAL, then write the new page\nover the old one. In my case, you just write the new, and somewhere\nrecord that the old page is no longer active. Sounds a little like\nVACUUM, but for pages.\n\nWith DELETE FROM foo, let's suppose you have 10 pages in the table. To\nmodify page 1, you write to page 11, then record in WAL that page 1 is\ninactive. To write page 2, you write to page 1 and record page 2 as\ninactive, etc. You basically are writing your new data one behind.\n\nOne problem I see is that you don't really know the pages are on disk so\nI am not sure how to be safe when over-writing the inactive pages.\n\nOf course, I am just throwing out ideas, looking for a solution. Help!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 19:25:17 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
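Bruce's "write the new page in a different location" proposal is essentially shadow paging: a map from logical page numbers to physical slots, where a write allocates a fresh slot and retires the old one instead of overwriting it. This is a minimal sketch of that bookkeeping (the structure and names are hypothetical, not PostgreSQL code); it also makes visible the catch Tom raises next, that the map update itself still has to be crash-safely logged.

```c
#include <assert.h>

#define NPAGES 8    /* logical pages in the relation */
#define NSLOTS 16   /* physical slots; spare room for shadow copies */

typedef struct {
    int map[NPAGES];     /* logical page -> current physical slot */
    int next_free;       /* next never-used physical slot */
    int retired[NSLOTS]; /* old slots awaiting later recycling */
    int nretired;
} PageMap;

void pagemap_init(PageMap *pm)
{
    int i;
    for (i = 0; i < NPAGES; i++)
        pm->map[i] = i;      /* initially logical == physical */
    pm->next_free = NPAGES;
    pm->nretired = 0;
}

/* "Write" logical page p: place the new version in a fresh slot and
 * retire the old slot rather than overwriting it in place.  Only this
 * small map update -- not the 8kB page image -- would need WAL, but
 * that update must itself reach disk before the old slot is reused. */
int pagemap_write(PageMap *pm, int p)
{
    int old = pm->map[p];
    pm->retired[pm->nretired++] = old;
    pm->map[p] = pm->next_free++;
    return pm->map[p];
}
```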
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I don't see how that reduces the total amount of disk traffic?\n\n> Well, right now we write the pre-image to WAL, then write the new page\n> over the old one. In my case, you just write the new, and somewhere\n> record that the old page is no longer active.\n\nThe devil is in the details of that last little bit. How is \"mark a\npage inactive\" cheaper than \"mark a tuple dead\"? More specifically,\nhow do you propose to avoid WAL-logging the page you are going to do\nthis marking in? Seems you still end up with a WAL page image for\nsomething.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Nov 2001 19:37:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance) "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I don't see how that reduces the total amount of disk traffic?\n> \n> > Well, right now we write the pre-image to WAL, then write the new page\n> > over the old one. In my case, you just write the new, and somewhere\n> > record that the old page is no longer active.\n> \n> The devil is in the details of that last little bit. How is \"mark a\n> page inactive\" cheaper than \"mark a tuple dead\"? More specifically,\n> how do you propose to avoid WAL-logging the page you are going to do\n> this marking in? Seems you still end up with a WAL page image for\n> something.\n\nI was thinking of just throwing the inactive page number into WAL. Much\nsmaller than the entire page image. You don't touch the page. Does\nthat help?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 19:49:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was thinking of just throwing the inactive page number into WAL. Much\n> smaller than the entire page image. You don't touch the page. Does\n> that help?\n\nI don't think so. Somehow you have to tell the other backends that that\npage is dead; merely recording it in WAL doesn't do that.\n\nMore to the point, you can't recycle (overwrite) that page until you've\ncheckpointed or WAL-logged the replacement page; so you still end up\nwith disk I/O for the replacement.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Nov 2001 20:40:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance) "
},
{
    "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I don't see how that reduces the total amount of disk traffic?\n> \n> > Well, right now we write the pre-image to WAL, then write the new page\n> > over the old one. In my case, you just write the new, and somewhere\n> > record that the old page is no longer active.\n> \n> The devil is in the details of that last little bit. How is \"mark a\n> page inactive\" cheaper than \"mark a tuple dead\"? More specifically,\n> how do you propose to avoid WAL-logging the page you are going to do\n> this marking in? Seems you still end up with a WAL page image for\n> something.\n\nAssuming that we WAL with the granularity of disk sector (512b) I think \nthat restructuring of database heap page (8kb) would be a big win for \ndelete/update.\n\nThe idea is to move metadata (oid,tableoid,xmin,cmin,xmax,cmax,ctid) \nto the beginning of heap page to the same space with tuple pointers. \nIt's easy (<grin>) as all of it is fixed length.\nThen a change in metadata like setting xmax for deleted/updated tuple\nwill dirty only the first disk page and not all of them.\n\nThe new structure of ItemId will be (\n itemId-pointer nbits\n itemId-flags 32-n bits\n oid,\n tableoid,\n xmin,\n cmin,\n xmax,\n cmax,\n ctid\n)\n\nAssuming that we do account of dirty pages and WAL with the granularity \nof database page we may get a big win by just moving to smaller\ngranularity.\n\nThe win from increasing granularity was not very big before WAL, as the \ndatabase pages are continuous on disk, but will be significant when we \nhave to log all dirty pages.\n\n------------------\nHannu\n",
"msg_date": "Thu, 22 Nov 2001 11:34:50 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
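Hannu's proposed layout can be sketched as a concrete struct to check the arithmetic: if all per-tuple visibility metadata rides with the item pointers at the head of the page, then setting xmax on delete dirties only the page's first sectors. The field widths below are assumptions (his message leaves them open), and the struct is a sketch, not PostgreSQL's ItemIdData.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the proposed "fat" line pointer: item pointer/flags plus
 * the tuple metadata (oid, tableoid, xmin, cmin, xmax, cmax, ctid)
 * packed together at the start of the page.  Widths are assumed. */
typedef struct {
    uint32_t ptr_and_flags; /* n-bit offset + (32-n)-bit flags, packed */
    uint32_t oid;
    uint32_t tableoid;
    uint32_t xmin;
    uint32_t cmin;
    uint32_t xmax;
    uint32_t cmax;
    uint32_t ctid_blk;      /* ctid split into block number ... */
    uint16_t ctid_off;      /* ... and line number */
    uint16_t pad;
} FatItemId;                /* 36 bytes per tuple */

/* How many tuples' metadata fit in the first `sectors` 512-byte
 * sectors of a page (page header ignored for simplicity)? */
size_t tuples_per_sectors(size_t sectors)
{
    return (sectors * 512) / sizeof(FatItemId);
}
```

At 36 bytes each, the metadata for roughly 14 tuples fits in one 512-byte sector, so a delete touching any of them would dirty a single sector instead of scattering writes across the whole 8kB page.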
{
    "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What bothers me about this is that we have the original page with the\n> > old data. It would be nice if we could write the new page in a\n> > different location, make the new page active and recycle the old page at\n> > some later time.\n> \n> I don't see how that reduces the total amount of disk traffic?\n> \n> It's also kind of unclear how to do it without doubling (or worse) the\n> amount of table space used in many common scenarios. I doubt many\n> people will be happy if \"DELETE FROM foo\" requires transient space equal\n> to twice the original size of foo.\n\nIIRC the double space requirement is what has kept us from implementing \nDROP COLUMN.\n\n-----------\nHannu\n",
"msg_date": "Thu, 22 Nov 2001 11:49:19 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "> > It's also kind of unclear how to do it without doubling (or worse) the\n> > amount of table space used in many common scenarios. I doubt many\n> > people will be happy if \"DELETE FROM foo\" requires transient space equal\n> > to twice the original size of foo.\n>\n> IIRC the double space requrement is what has kept us from implementing\n> DROP COLUMN.\n\nThe correct solution then, according methinks to my old Human Computer\nInteraction lecturer, is to implement the feature anyway, and warn the DBA\nwhat the consequences are. That way, the DBA can do it if she wants, unlike\nthe current situation where it's next to impossible (with lots of\nreferencing foreign keys).\n\nChris\n\n",
"msg_date": "Fri, 23 Nov 2001 09:37:52 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "> > > It's also kind of unclear how to do it without doubling (or worse) the\n> > > amount of table space used in many common scenarios. I doubt many\n> > > people will be happy if \"DELETE FROM foo\" requires transient space equal\n> > > to twice the original size of foo.\n> >\n> > IIRC the double space requrement is what has kept us from implementing\n> > DROP COLUMN.\n> \n> The correct solution then, according methinks to my old Human Computer\n> Interaction lecturer, is to implement the feature anyway, and warn the DBA\n> what the consequences are. That way, the DBA can do it if she wants, unlike\n> the current situation where it's next to impossible (with lots of\n> referencing foreign keys).\n\nYes, I personally am going to try this for 7.3, as well as fix CLUSTER. \nI think someone has already started on CLUSTER anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Nov 2001 20:43:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
    "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is it because we take a snapshot of the page before we write it in case\n> > we only write part of the page?\n> \n> AFAIR, the partial-page-write problem is the entire reason for doing it.\n> If we could be certain that writes to datafile pages were atomic, we'd\n> not need this.\n> \n> Of course we can't be certain of that. But I'm wondering if there isn't\n> a cheaper solution.\n\nI have added these TODO items to summarize this discussion:\n\n* Reduce number of pre-page WAL writes; they exist only to guard against\n  partial page writes\n* Turn off pre-page writes if fsync is disabled (?)\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 17:00:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},

{
"msg_contents": "If \"pre-page WAL write\" means the value of the page before the current\nchanges, then there is generally another reason for writing it out.\n\nImagine this sequence of events:\n1. transaction A starts\n2. transaction B starts\n3. tran A makes a change\n4. tran B makes a change\n5. tran A commits\n6. all changes get written to disk (this can happen even without fsync,\n for example tran C might do a full table scan which fills the buffer cache\n before B commits)\n7. the system crashes\n\nWhen the system comes back up, we need to do a rollback on\ntransaction B since it did not commit and we need the \"pre-page\"\nto know how to undo the change for B that got saved in step 6 above.\n\nAt least this is what happens in most DBMSs...\n\nBrian Beuning\n\n\n\nBruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Is it because we take a snapshot of the page before we write it in case\n> > > we only write part of the page?\n> >\n> > AFAIR, the partial-page-write problem is the entire reason for doing it.\n> > If we could be certain that writes to datafile pages were atomic, we'd\n> > not need this.\n> >\n> > Of course we can't be certain of that. But I'm wondering if there isn't\n> > a cheaper solution.\n>\n> I have added these TODO items to summarize this discussion:\n>\n> * Reduce number of pre-page WAL writes; they exist only to gaurd against\n> partial page writes\n> * Turn off pre-page writes if fsync is disabled (?)\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n",
"msg_date": "Thu, 20 Dec 2001 18:33:49 -0500",
"msg_from": "Brian Beuning <bbeuning@mindspring.com>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
},
{
"msg_contents": "> If \"pre-page WAL write\" means the value of the page before the current\n> changes, then there is generally another reason for writing it out.\n> \n> Imagine this sequence of events:\n> 1. transaction A starts\n> 2. transaction B starts\n> 3. tran A makes a change\n> 4. tran B makes a change\n> 5. tran A commits\n> 6. all changes get written to disk (this can happen even without fsync,\n> for example tran C might do a full table scan which fills the buffer cache\n> before B commits)\n> 7. the system crashes\n> \n> When the system comes back up, we need to do a rollback on\n> transaction B since it did not commit and we need the \"pre-page\"\n> to know how to undo the change for B that got saved in step 6 above.\n> \n> At least this is what happens in most DBMSs...\n\nBecause we have a non-overwriting storage manager, I don't think this\nissue applies to us.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Dec 2001 22:54:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TOAST performance (was Re: [GENERAL] Delete Performance)"
}
] |
[
{
"msg_contents": "I think it's a fantastic idea. Every time I've tried to muck about in that\nconfig file I've been entirely frustrated and not gotten done what I've\nneeded to. This database _needs_ some optimization as it has grown large and\nI'm starting to see serious performance hits. I can't really help with the\nactual cookbook as I am unfamiliar with the file myself. However, if you'd\nlike some large scale (http://scorch.posixnap.net/pics/puterpics/) testing\ndone, I'm happy to help. I've got a ton of resources to throw at it. (that\npage may not load properly in netscape. sorry.)\n\nalex\n\n",
"msg_date": "Fri, 16 Nov 2001 09:00:46 -0500",
"msg_from": "Alex Avriette <a_avriette@acs.org>",
"msg_from_op": true,
"msg_subject": "Re: Super Optimizing Postgres"
}
] |
[
{
    "msg_contents": "Hi,\n    I'm testing PostgreSQL, version 7.1.2, and I created a table, test1, with these kinds of fields:\n\nCREATE TABLE test1 (\n    id integer,\n    content varchar\n);\n\nwith 800.000 records, and a btree index set on the id field.\n\nI noticed the query uses the index for \"=\" and \"<\" operators, but if the value used for the \"<\" operator is higher (600.000 for example), the query makes a seq scan of the table, and the index never works with the \">\" operator. Am I managing the indexes well?\n\nThanks for the help.\n\nCiao Paolo\n\nPaolo Cassago\n\nTalentManager\nMilan, Paris, Madrid\n\nTel: +39 02 83 11 23 1\nFax: +39 02 700 43 99 81\nMob: +39 348 82 155 81\n\n",
"msg_date": "16 Nov 2001 14:08:31 +0000",
"msg_from": "Paolo Cassago <paolo.cassago@talentmanager.com>",
"msg_from_op": true,
"msg_subject": "Fwd: Btree doesn't work with \">\" condition"
},
{
"msg_contents": "On 16 Nov 2001, Paolo Cassago wrote:\n\n> I'm testing PostgreSQL, version 7.1.2, and I create a table,\n> test1, with these kind of fields:\n>\n> CREATE TABLE test1 (\n> id integer,\n> content varchar\n> );\n>\n> with 800.000 records, and a btree index setted on the id field.\n>\n> I noticed the query uses the index for \"=\" and \"<\" operators, but if\n> the value used for the \"<\" operator is higher (600.000 for example),\n> the query makes a seq scan of the table, and the index never works\n> with the \">\" operator. Am I managing the indexes well ?\n\nLet's go through the standard things. :) Have you vacuum analyzed\nthe table? What does explain show for the queries (particularly the\nrow count). If the number of rows grabbed is a reasonable percentage\nof the table, currently sequence scan *may* be faster than an index\nscan, so this may not be incorrect.\n\n",
"msg_date": "Fri, 16 Nov 2001 08:28:59 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Btree doesn't work with \">\" condition"
}
] |
[
{
"msg_contents": "> > The only problem I have now is that odbc/md5.h needs those unsigned\n> > defines and it can't probe the results of queries by configure. odbc\n> > allows for stand-alone compile.\n> \n> ODBC uses all kinds of other configure results, so it can use this one as\n> well. You only need to make sure you hard-code the test result for\n> Windows somewhere.\n\nGood point, Peter. I had not seen that psqlodbc.h does conditionally\ninclude pg_config.h. I have added this test to md5.h:\n\n\t/* Also defined in include/c.h */\n\t#if SIZEOF_UINT8 == 0\n\ttypedef unsigned char uint8; /* == 8 bits */\n\ttypedef unsigned short uint16; /* == 16 bits */\n\ttypedef unsigned int uint32; /* == 32 bits */\n\t#endif /* SIZEOF_UINT8 == 0 */\n\nIn the case of WIN32, SIZEOF_UINT8 is not defined, so it should compare\nequal to zero, and should include those defines, as needed. It now\nmatches c.h. Is that OK?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 13:32:26 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open Items (was: RE: [HACKERS] Beta going well)"
}
] |
[
{
"msg_contents": "In answering the question about SIGHUP and pg_hba.conf, I was surprised\nto learn from Tom and postgresql.conf is reloaded by backends on\npostmaster SIGHUP, at least for certain configuration settings.\n\nIt this new behavior? I thought postgresql.conf changes only were seen by\nnewly spawned children.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 14:06:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "backends rereading postgresql.conf"
}
] |
[
{
    "msg_contents": "Seems I had another email problem last night, bouncing back lots of email\nas rejected.  I had not added my reverse hostname, candle.navpoint.com,\nto my sendmail config file.  I had removed it years ago because I didn't\nthink I needed it.  Obviously, I needed it last night because the email\nwas coming in addressed to pgman@candle.navpoint.com and I bounced it\nall back.  Sorry.\n\nI think this all happened because I was fiddling with DNS here to try\nand get the postgresql.org domains to resolve properly.\n\nI have read the \"Fresh postings\" archive at fts.postgresql.org so I\nthink I am all caught up.  BTW, our FTS archive and Mail Xware software\nis quite amazing.  Very fast, and it has this feature where you can\nhighlight words _in_ _your_ _browser_, and click on \"Search for\nSelection\" to find email matching that pattern.  Quite amazing.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 14:17:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Apology for email problems, again"
}
] |
[
{
    "msg_contents": "Hi!\n\nWe are using postgres as RDBMS in a web application that is translated to \nabout a dozen different languages. Some users get puzzled about the sorting \norder in lists, since we have to choose only one locale for all ORDER BY \nqueries. I am dreaming of a SET LC_COLLATE or similar command that will \nonly affect my session, not all other users.\n\nI know this is not implemented in postgres. How impossible is it to add \nthis feature, and what implications would pg suffer? All discussions \nregarding locale problems in postgres are about LIKE indexing. For us, \ncollating is more important. Can we help?\n\n/Palle\n\n\n",
"msg_date": "Sat, 17 Nov 2001 17:50:38 +0100",
"msg_from": "Palle Girgensohn <girgen@partitur.se>",
"msg_from_op": true,
"msg_subject": "Multilingual application, ORDER BY w/ different locales?"
},
{
"msg_contents": "Palle Girgensohn <girgen@partitur.se> writes:\n> I am dreaming of a SET LC_COLLATE or simliar command that will \n> only affect my session, not all other users.\n> I know this is not implemented in postgres. How impossible is it to add \n> this feature, and what implications would pg suffer?\n\nActually, what the SQL spec suggests is that LOCALE be attached to\nindividual table columns. A SET command to cause LOCALE to change\non the fly within a session is quite impractical: that would mean\nthat the sort ordering of existing columns changes, which would mean\nthat any indexes on those columns are broken.\n\nPer-column LOCALE is on the to-do list. In my mind the main difficulty\nwith it is that the standard C library doesn't really support concurrent\nuse of multiple locales: it's built around the assumption that you set\nyour locale once at program startup. setlocale() is, typically, not\na fast operation. To get around this it seems we'd need to write our\nown set of locale library routines, which is a daunting amount of work.\n\nI think the last time this came up, someone mentioned that there's an\nopen BSD-license locale library being worked on, which possibly we could\nadapt instead of reinventing this wheel for ourselves. But I don't\nrecall more than that. Check the archives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Nov 2001 13:39:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different locales? "
},
{
    "msg_contents": "--On Saturday, November 17, 2001 13:39:36 -0500 Tom Lane \n<tgl@sss.pgh.pa.us> wrote:\n\n> Palle Girgensohn <girgen@partitur.se> writes:\n>> I am dreaming of a SET LC_COLLATE or similar command that will\n>> only affect my session, not all other users.\n>> I know this is not implemented in postgres. How impossible is it to add\n>> this feature, and what implications would pg suffer?\n>\n> Actually, what the SQL spec suggests is that LOCALE be attached to\n> individual table columns.  A SET command to cause LOCALE to change\n> on the fly within a session is quite impractical: that would mean\n> that the sort ordering of existing columns changes, which would mean\n> that any indexes on those columns are broken.\n\nOK, indexes and sort ordering are coupled, and must be? In that case, I see \nthe problem.\n\n> Per-column LOCALE is on the to-do list.\n\nMy need is really to get different sorting on *the same* column, depending \non which locale the present user prefers. Collation can be quite different \nin Swedish, English, German or French, for example. Our users can choose the \nlanguage they prefer from a list, and since it is a web app, all languages \nare used simultaneously on the same system, and since we use a database \nsession pool, different langs can be preferred at different times in the \nsame database session. So, in this case there is no need for per-column \nlocale; we really need to be able to shift sorting order (ORDER BY only) \n\"on-the-fly\". I guess this is not even supported by the SQL standard, or \nany other RDBMS for that matter, right?\n\n> In my mind the main difficulty\n> with it is that the standard C library doesn't really support concurrent\n> use of multiple locales: it's built around the assumption that you set\n> your locale once at program startup.  setlocale() is, typically, not\n> a fast operation. To get around this it seems we'd need to write our\n> own set of locale library routines, which is a daunting amount of work.\n>\n> I think the last time this came up, someone mentioned that there's an\n> open BSD-license locale library being worked on, which possibly we could\n> adapt instead of reinventing this wheel for ourselves.  But I don't\n> recall more than that.  Check the archives.\n\nThanks, I will.\n\nCheers,\nPalle\n\n",
"msg_date": "Sat, 17 Nov 2001 20:46:03 +0100",
"msg_from": "Palle Girgensohn <girgen@partitur.se>",
"msg_from_op": true,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different"
},
{
"msg_contents": "Palle Girgensohn <girgen@partitur.se> writes:\n>> Actually, what the SQL spec suggests is that LOCALE be attached to\n>> individual table columns. A SET command to cause LOCALE to change\n>> on the fly within a session is quite impractical: that would mean\n>> that the sort ordering of existing columns changes, which would mean\n>> that any indexes on those columns are broken.\n\n> OK, indexes and sort ordering are coupled, and must be?\n\nWell, the sort ordering of any particular index has to be well-defined,\nwhich means that there has to be a fixed locale associated with it.\n\n> My need is really to get different sorting on *the same* column, depending \n> on which locale the present user prefers.\n> ... I guess this is not even supported by the SQL standard, or \n> any other RDBMS for that matter, right?\n\nI believe SQL regards the locale as essentially a property of a\ndatatype, which means that in theory you should be able to cast a column\nvalue to type text-with-locale-X and then ORDER BY that. It'd be an\non-the-fly sort, not able to exploit any indexes, but it sounds like\nthat's acceptable to you.\n\nLooking at the SQL92 spec, the name they actually give to this notion\nis COLLATE, not locale, but it does look like you can label a string\nexpression with the collation type you want it to be sorted by.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Nov 2001 14:57:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different locales? "
},
{
"msg_contents": "\nOn Sat, 17 Nov 2001, Tom Lane wrote:\n\n> Palle Girgensohn <girgen@partitur.se> writes:\n> > My need is really to get different sorting on *the same* column, depending\n> > on which locale the present user prefers.\n> > ... I guess this is not even supported by the SQL standard, or\n> > any other RDBMS for that matter, right?\n>\n> I believe SQL regards the locale as essentially a property of a\n> datatype, which means that in theory you should be able to cast a column\n> value to type text-with-locale-X and then ORDER BY that. It'd be an\n> on-the-fly sort, not able to exploit any indexes, but it sounds like\n> that's acceptable to you.\n\nWould it be possible to make a function in plpgsql or whatever that\nwrapped the collate changes and then order by that and make functional\nindexes? Would the system use it?\n\n",
"msg_date": "Sat, 17 Nov 2001 13:13:15 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different locales?"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Would it be possible to make a function in plpgsql or whatever that\n> wrapped the collate changes and then order by that and make functional\n> indexes? Would the system use it?\n\nIIRC, we were debating whether we should consider collation to be an\nattribute of the datatype (think typmod) or an attribute of individual\nvalues (think field added to values of textual types). In the former\ncase, a function like this would only work if we allowed its result to\nbe declared as having the right collate attribute. Which is not\nimpossible, but we don't currently associate any typmod with function\narguments or results, and so I'm not sure how painful it would be.\nWith the field-in-data-value approach it's easy to see how it would\nwork. But another byte or word per text value might be a high price\nto pay ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Nov 2001 16:21:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different locales? "
},
{
"msg_contents": "\nOn Sat, 17 Nov 2001, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > Would it be possible to make a function in plpgsql or whatever that\n> > wrapped the collate changes and then order by that and make functional\n> > indexes? Would the system use it?\n>\n> IIRC, we were debating whether we should consider collation to be an\n> attribute of the datatype (think typmod) or an attribute of individual\n> values (think field added to values of textual types). In the former\n> case, a function like this would only work if we allowed its result to\n> be declared as having the right collate attribute. Which is not\n> impossible, but we don't currently associate any typmod with function\n> arguments or results, and so I'm not sure how painful it would be.\n> With the field-in-data-value approach it's easy to see how it would\n> work. But another byte or word per text value might be a high price\n> to pay ...\n\nTrue. Although I wonder how things like substring would work in the\nmodel with typmods if the collation isn't attached in any fashion to\nthe return values since I think the substring collation is supposed\nto be the same as the input string's, whereas for something like\nconvert it's a different collation based on a parameter. I wonder if\nas a temporary thing, you could use a function that did something\nsimilar to strxfrm as long as you only used that for sorting purposes.\n\n\n",
"msg_date": "Sat, 17 Nov 2001 17:04:29 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different locales?"
},
{
"msg_contents": "\n\nStephan Szabo wrote:\n\n>On Sat, 17 Nov 2001, Tom Lane wrote:\n>\n>>Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n>>\n>>>Would it be possible to make a function in plpgsql or whatever that\n>>>wrapped the collate changes and then order by that and make functional\n>>>indexes? Would the system use it?\n>>>\n>>IIRC, we were debating whether we should consider collation to be an\n>>attribute of the datatype (think typmod) or an attribute of individual\n>>values (think field added to values of textual types). In the former\n>>case, a function like this would only work if we allowed its result to\n>>be declared as having the right collate attribute. Which is not\n>>impossible, but we don't currently associate any typmod with function\n>>arguments or results, and so I'm not sure how painful it would be.\n>>With the field-in-data-value approach it's easy to see how it would\n>>work. But another byte or word per text value might be a high price\n>>to pay ...\n>>\n>\n>True. Although I wonder how things like substring would work in the\n>model with typmods if the collation isn't attached in any fashion to\n>the return values since I think the substring collation is supposed\n>to be the same as the input string's, whereas for something like\n>convert it's a different collation based on a parameter. 
I wonder if\n>as a temporary thing, you could use a function that did something\n>similar to strxfrm as long as you only used that for sorting purposes.\n>\nThat would mean a new datatype that such function returns\n\nCREATE FUNCTION text_with_collation(text,collation) RETURNS \ntext_with_collation\n\nThat would be sorted using the rules of that collation.\n\nThis can currently be added in contrib, but should eventually go into core.\n\nThe function itself is quite easy, but the collation is the part that \ncan either be done by\na) writing our own library\n\nb) using system locale (i think that locale switching is slow in default \nglibc , so the\n following can be slow too\n ORDER BY text_with_collation(t1,'et_EE'), text_with_collation(t1,'fr_CA')\n but I doubt anybody uses it.\n\nc) using a third party library - at least IBM has one which is almost as \nbig as whole postgreSQL ;)\n\nassuming that one backend needs mostl one locale at a time, I think that \nb) will be the easiest to\nimplement, but this will clash with current locale support if it is \ncompiled in so you have to be\nrapidly swithcing LC_COLLATE between the default and that of the current \ndatum.\n\nso what we actually need is a system that will _not_ use locale-aware \nfunctions unless specifically\ntold to do so by feeding it with text_with_locale values.\n\n---------------\nHannu\n\n\n\n\n\n\n\n\n\n\n----------------\nHannu\n\n\n",
"msg_date": "Sun, 18 Nov 2001 11:56:46 +0500",
"msg_from": "Hannu Krosing <hannu@sid.tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different locales?"
},
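The strxfrm-style workaround discussed in this thread — computing a collation sort key once per value and ordering by it, as `ORDER BY text_with_collation(t1, '...')` would — can be sketched outside the database. This is a Python sketch with a toy case-insensitive "collation" standing in for a real locale transform such as C's `strxfrm()`; the function name is illustrative, not PostgreSQL API.

```python
# Sketch of sort-key-based collation: precompute a comparable key per
# value, then sort by that key instead of by the raw string.

def collation_key(s: str, collation: str = "ci") -> str:
    """Return a sort key for s under the named (toy) collation."""
    if collation == "ci":      # toy case-insensitive collation
        return s.casefold()
    return s                   # "C" collation: plain code-point order

rows = ["apple", "Banana", "cherry"]
print(sorted(rows))                     # code-point order: 'Banana' sorts first
print(sorted(rows, key=collation_key))  # collation order: 'apple' sorts first
```

The same key function could back a functional index, which is exactly the question Stephan raises above: if the key is immutable for a fixed collation, an index on it can serve the sorted scan.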
{
"msg_contents": "\n> --On Saturday, November 17, 2001 13:39:36 -0500 Tom Lane \n> <tgl@sss.pgh.pa.us> wrote:\n>\n>> In my mind the main difficulty\n>> with it is that the standard C library doesn't really support concurrent\n>> use of multiple locales: it's built around the assumption that you set\n>> your locale once at program startup. setlocale() is, typically, not\n>> a fast operation. To get around this it seems we'd need to write our\n>> own set of locale library routines, which is a daunting amount of work.\n>>\n>> I think the last time this came up, someone mentioned that there's an\n>> open BSD-license locale library being worked on, which possibly we could\n>> adapt instead of reinventing this wheel for ourselves. But I don't\n>> recall more than that. Check the archives. \n>\nI guess it must have been IBM's International Classes for Unicode at\nhttp://oss.software.ibm.com/icu/\n\nIt is quite big:\n\n\n Download\n\nFile Size Description\nicu-1.8.1.zip \n<http://oss.software.ibm.com/icu/download/1.8.1/icu-1.8.1.zip> 7.3 MB \n ZIP file for Windows platforms\nicu-1.8.1.tgz \n<http://oss.software.ibm.com/icu/download/1.8.1/icu-1.8.1.tgz> 6.4 MB \n gzipped tar archive for Unix and other platforms\nicu-1.8.1-docs.zip \n<http://oss.software.ibm.com/icu/download/1.8.1/icu-1.8.1-docs.zip> \n1.1 MB ZIP file with the API documentation\nicu-1.8.1-docs.tgz \n<http://oss.software.ibm.com/icu/download/1.8.1/icu-1.8.1-docs.tgz> \n0.9 MB gzipped tar archive with the API documentation\n\n\nbut I suspect that it would otherways be the easiest way to get a good \ninternationalisation support.\n\n---------------\nHannu\n\n\n\n",
"msg_date": "Sun, 18 Nov 2001 12:24:45 +0500",
"msg_from": "Hannu Krosing <hannu@sid.tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different"
},
{
"msg_contents": "> IIRC, we were debating whether we should consider collation to be an\n> attribute of the datatype (think typmod) or an attribute of individual\n> values (think field added to values of textual types). In the former\n> case, a function like this would only work if we allowed its result to\n> be declared as having the right collate attribute. Which is not\n> impossible, but we don't currently associate any typmod with function\n> arguments or results, and so I'm not sure how painful it would be.\n> With the field-in-data-value approach it's easy to see how it would\n> work. But another byte or word per text value might be a high price\n> to pay ...\n\nI think the price is not so high. To give the collation info to text\ndata types, it's enough to store the info in the\npg_attribute. ie. only additional several bytes per column are\nrequired, not per instance. Of course we would need to add some extra\nbytes to the in-memory string data, it's just a temporary data anyway.\n--\nTatsuo Ishii\n\n",
"msg_date": "Sun, 18 Nov 2001 23:17:48 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Multilingual application, ORDER BY w/ different"
}
]
[
{
"msg_contents": "I noticed OCTET_LENGTH will return the size of the data after TOAST may\nhave compressed it. While this could be useful information, this\nbehaviour has no basis in the SQL standard and it's not what is\ndocumented. Moreover, it eliminates the standard useful behaviour of\nOCTET_LENGTH, which is to show the length in bytes of a multibyte string.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 17 Nov 2001 18:33:02 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "OCTET_LENGTH is wrong"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I noticed OCTET_LENGTH will return the size of the data after TOAST may\n> have compressed it. While this could be useful information, this\n> behaviour has no basis in the SQL standard and it's not what is\n> documented. Moreover, it eliminates the standard useful behaviour of\n> OCTET_LENGTH, which is to show the length in bytes of a multibyte string.\n\nI wondered about that too, the first time I noticed it. On the other\nhand, knowing the compressed length is kinda useful too, at least for\nhacking and DBA purposes. (One might also like to know whether a value\nhas been moved out of line, which is not currently determinable.)\n\nI don't want to force an initdb at this stage, at least not without\ncompelling reason, so adding more functions right now is not feasible.\nMaybe a TODO item for next time.\n\nThat leaves us with the question whether to change OCTET_LENGTH now\nor leave it for later. Anyone?\n\nBTW, I noticed that textlength() is absolutely unreasonably slow when\nMULTIBYTE is enabled --- yesterday I was trying to profile TOAST\noverhead, and soon discovered that what I was looking at was nothing\nbut pg_mblen() calls. It really needs a short-circuit path for\nsingle-byte encodings.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Nov 2001 13:46:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
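Peter's complaint and Tom's pg_mblen() aside both come down to the same distinction: character count versus octet (byte) count for a multibyte string. A quick Python illustration (not the server code) of what the standard behaviour of OCTET_LENGTH is supposed to expose:

```python
# Character length vs. octet length for a multibyte string.

s = "na\u00efve"                          # "naïve": 5 characters

char_length = len(s)                      # what length()/CHAR_LENGTH reports
octet_length = len(s.encode("utf-8"))     # bytes in a multibyte encoding

print(char_length)    # 5
print(octet_length)   # 6 ('ï' takes two bytes in UTF-8)

# In a single-byte encoding the two coincide -- which is also why
# textlength() can short-circuit instead of walking the string with
# pg_mblen() one character at a time:
assert len(s.encode("latin-1")) == char_length
```

The final assertion is the whole optimization Tom asks for: when the encoding is single-byte, the character count is just the byte count and no per-character scan is needed.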
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I noticed OCTET_LENGTH will return the size of the data after TOAST may\n> > have compressed it. While this could be useful information, this\n> > behaviour has no basis in the SQL standard and it's not what is\n> > documented. Moreover, it eliminates the standard useful behaviour of\n> > OCTET_LENGTH, which is to show the length in bytes of a multibyte string.\n> \n> I wondered about that too, the first time I noticed it. On the other\n> hand, knowing the compressed length is kinda useful too, at least for\n> hacking and DBA purposes. (One might also like to know whether a value\n> has been moved out of line, which is not currently determinable.)\n> \n> I don't want to force an initdb at this stage, at least not without\n> compelling reason, so adding more functions right now is not feasible.\n> Maybe a TODO item for next time.\n> \n> That leaves us with the question whether to change OCTET_LENGTH now\n> or leave it for later. Anyone?\n\nI am unconcerned about showing people the actual toasted length. Seems\nwe should get octet_length() computed on the un-TOASTED length, if we\ncan.\n\n> BTW, I noticed that textlength() is absolutely unreasonably slow when\n> MULTIBYTE is enabled --- yesterday I was trying to profile TOAST\n> overhead, and soon discovered that what I was looking at was nothing\n> but pg_mblen() calls. It really needs a short-circuit path for\n> single-byte encodings.\n\nAdded to TODO:\n\n\t* Optimize textlength(), etc. for single-byte encodings\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Nov 2001 14:28:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I noticed OCTET_LENGTH will return the size of the data after TOAST may\n> > have compressed it. While this could be useful information, this\n> > behaviour has no basis in the SQL standard and it's not what is\n> > documented. Moreover, it eliminates the standard useful behaviour of\n> > OCTET_LENGTH, which is to show the length in bytes of a multibyte string.\n> \n> I wondered about that too, the first time I noticed it. On the other\n> hand, knowing the compressed length is kinda useful too, at least for\n> hacking and DBA purposes. (One might also like to know whether a value\n> has been moved out of line, which is not currently determinable.)\n\nIt seems the behavior of OCTET_LENGTH varies acording to the\ncorresponding data type:\n\nTEXT: returns the size of data AFTER TOAST\nVARCHAR and CHAR: returns the size of data BEFORE TOAST\n\nI think we should fix at least these inconsistencies but am not sure\nif it's totally wrong that OCTET_LENGTH returns the length AFTER\nTOAST. The SQL standard does not have any idea about TOAST of course.\nAlso, I tend to agree with Tom's point about hackers and DBAs.\n\n> I don't want to force an initdb at this stage, at least not without\n> compelling reason, so adding more functions right now is not feasible.\n> Maybe a TODO item for next time.\n>\n> That leaves us with the question whether to change OCTET_LENGTH now\n> or leave it for later. Anyone?\n\nMy opinion is leaving it for 7.3, with the idea (adding new\nfunctions).\n\n> BTW, I noticed that textlength() is absolutely unreasonably slow when\n> MULTIBYTE is enabled --- yesterday I was trying to profile TOAST\n> overhead, and soon discovered that what I was looking at was nothing\n> but pg_mblen() calls. It really needs a short-circuit path for\n> single-byte encodings.\n\nIt's easy to optimize that. However I cannot access CVS anymore after\nthe IP address change. 
Will post patches later...\n--\nTatsuo Ishii\n\n",
"msg_date": "Sun, 18 Nov 2001 15:08:28 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Peter Eisentraut <peter_e@gmx.net> writes:\n>\n>>... Moreover, it eliminates the standard useful behaviour of\n>>OCTET_LENGTH, which is to show the length in bytes of a multibyte string.\n>>\n>\n>While I don't necessarily dispute this, I do kinda wonder where you\n>derive the statement. AFAICS, SQL92 defines OCTET_LENGTH in terms\n>of BIT_LENGTH:\n>\n>6.6 General Rule 5:\n>\n> a) Let S be the <string value expression>. If the value of S is\n> not the null value, then the result is the smallest integer\n> not less than the quotient of the division (BIT_LENGTH(S)/8).\n> b) Otherwise, the result is the null value.\n>\n>and BIT_LENGTH is defined in the next GR:\n>\n> a) Let S be the <string value expression>. If the value of S is\n> not the null value, then the result is the number of bits in\n> the value of S.\n> b) Otherwise, the result is the null value.\n>\n>While SQL92 is pretty clear about <bit string>, I'm damned if I can see\n>anywhere that they define how many bits are in a character string value\n>So who's to say what representation is to be used to count the bits?\n>If, say, UTF-16 and UTF-8 are equally reasonable choices, then why\n>shouldn't a compressed representation be reasonable too?\n>\nOne objection I have to this, is the fact that nobody uses the compressed\nrepresentation in client libraries whrereas they do use both UTF-16 and \nUTF-8.\nAt least UTF-8 is available as client encoding.\n\nAnd probably it is possible that the length of the \"possibly compressed\" \nrepresentation\ncan change without the underlying data changing (for example when you \nset a bit\nsomewhere that disables compression and UPDATE some other field in the \ntuple)\nmaking the result of OCTET_LENGTH dependent on other things than the \nargument\nstring.\n\nI also like the propery of _uncompressed_ OCTET_LENGTH that\nOCTET_LENGTH(s||s) == 2 * OCTET_LENGTH(s)\nwhich is almost never true for compressed 
length\n\n----------------\nHannu\n\n\n",
"msg_date": "Sun, 18 Nov 2001 11:22:09 +0500",
"msg_from": "Hannu Krosing <hannu@sid.tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> ... Moreover, it eliminates the standard useful behaviour of\n> OCTET_LENGTH, which is to show the length in bytes of a multibyte string.\n\nWhile I don't necessarily dispute this, I do kinda wonder where you\nderive the statement. AFAICS, SQL92 defines OCTET_LENGTH in terms\nof BIT_LENGTH:\n\n6.6 General Rule 5:\n\n a) Let S be the <string value expression>. If the value of S is\n not the null value, then the result is the smallest integer\n not less than the quotient of the division (BIT_LENGTH(S)/8).\n b) Otherwise, the result is the null value.\n\nand BIT_LENGTH is defined in the next GR:\n\n a) Let S be the <string value expression>. If the value of S is\n not the null value, then the result is the number of bits in\n the value of S.\n b) Otherwise, the result is the null value.\n\nWhile SQL92 is pretty clear about <bit string>, I'm damned if I can see\nanywhere that they define how many bits are in a character string value.\nSo who's to say what representation is to be used to count the bits?\nIf, say, UTF-16 and UTF-8 are equally reasonable choices, then why\nshouldn't a compressed representation be reasonable too?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Nov 2001 01:40:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> > BTW, I noticed that textlength() is absolutely unreasonably slow when\n> > MULTIBYTE is enabled --- yesterday I was trying to profile TOAST\n> > overhead, and soon discovered that what I was looking at was nothing\n> > but pg_mblen() calls. It really needs a short-circuit path for\n> > single-byte encodings.\n> \n> It's easy to optimize that. However I cannot access CVS anymore after\n> the IP address change. Will post patches later...\n\nSeems I got the cvs access again (I was asked my pass phrase again)\nand I have committed changes for this.\n\nModified functions are:\n\nbpcharlen\ntextlen\nvarcharlen\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 18 Nov 2001 21:05:05 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> > > BTW, I noticed that textlength() is absolutely unreasonably slow when\n> > > MULTIBYTE is enabled --- yesterday I was trying to profile TOAST\n> > > overhead, and soon discovered that what I was looking at was nothing\n> > > but pg_mblen() calls. It really needs a short-circuit path for\n> > > single-byte encodings.\n> > \n> > It's easy to optimize that. However I cannot access CVS anymore after\n> > the IP address change. Will post patches later...\n> \n> Seems I got the cvs access again (I was asked my pass phrase again)\n> and I have committed changes for this.\n> \n> Modified functions are:\n> \n> bpcharlen\n> textlen\n> varcharlen\n\nDid you go with the pre or post-TOAST length for these types? I vote\nfor pre-TOAST because it seems much more useful to ordinary users.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Nov 2001 10:30:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> > > BTW, I noticed that textlength() is absolutely unreasonably slow when\n> > > MULTIBYTE is enabled --- yesterday I was trying to profile TOAST\n> > > overhead, and soon discovered that what I was looking at was nothing\n> > > but pg_mblen() calls. It really needs a short-circuit path for\n> > > single-byte encodings.\n> > \n> > It's easy to optimize that. However I cannot access CVS anymore after\n> > the IP address change. Will post patches later...\n> \n> Seems I got the cvs access again (I was asked my pass phrase again)\n> and I have committed changes for this.\n> \n> Modified functions are:\n> \n> bpcharlen\n> textlen\n> varcharlen\n\nOK, sorry, I see you did the optimization, not changed the length\nfunctio.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Nov 2001 10:32:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "Tom Lane writes:\n\n> a) Let S be the <string value expression>. If the value of S is\n> not the null value, then the result is the number of bits in\n> the value of S.\n> b) Otherwise, the result is the null value.\n>\n> While SQL92 is pretty clear about <bit string>, I'm damned if I can see\n> anywhere that they define how many bits are in a character string value.\n> So who's to say what representation is to be used to count the bits?\n> If, say, UTF-16 and UTF-8 are equally reasonable choices, then why\n> shouldn't a compressed representation be reasonable too?\n\nI think \"the value of S\" implies \"the user-accessible representation of\nthe value of S\", in the sense, \"How much memory do I need to allocate to\nstore this value\".\n\nFurthermore, the size of the TOAST representation that is returned now is\njust one particular of several intermediate representations. For\ninstance, it does not include the VARHDRSZ and it does not include the\nsize of the tuple headers when it's stored externally. Thus, this size is\nheavily skewed toward low numbers and doesn't tell you much about either\nthe disk end or the user's end.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 18 Nov 2001 18:17:49 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I think \"the value of S\" implies \"the user-accessible representation of\n> the value of S\", in the sense, \"How much memory do I need to allocate to\n> store this value\".\n\nIf I take that argument seriously, I have to conclude that OCTET_LENGTH\nshould return the string length measured in the current client encoding\n(which may have little to do with its size in the server, if the\nserver's encoding is different). If the client actually retrieves the\nstring then that's how much memory he'll need.\n\nI presume that where you want to come out is OCTET_LENGTH = uncompressed\nlength in the server's encoding ... but so far no one has really made\na convincing argument why that answer is better or more spec-compliant\nthan any other answer. In particular, it's not obvious to me why\n\"number of bytes we're actually using on disk\" is wrong.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Nov 2001 14:06:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> I think \"the value of S\" implies \"the user-accessible representation of\n> the value of S\", in the sense, \"How much memory do I need to allocate to\n> store this value\".\n> \n> Furthermore, the size of the TOAST representation that is returned now is\n> just one particular of several intermediate representations. For\n> instance, it does not include the VARHDRSZ and it does not include the\n> size of the tuple headers when it's stored externally. Thus, this size is\n> heavily skewed toward low numbers and doesn't tell you much about either\n> the disk end or the user's end.\n\nYes, good arguments. If we want to implement storage_length at some\nlater time, I think the compressed length may be appropriate, but for\ngeneral use, I think we need to return the uncompressed length,\nespecially considering that multibyte makes the ordinary 2length return\nnumber of characters, so users need a way to get byte length.\n\nAttached is a patch that makes text return the same value type as char()\nand varchar() already do. As Tatsuo pointed out, they were\ninconsistent. All the other octet_length() functions look fine so it\nwas only text that had this problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/utils/adt/varlena.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/varlena.c,v\nretrieving revision 1.74\ndiff -c -r1.74 varlena.c\n*** src/backend/utils/adt/varlena.c\t2001/10/25 05:49:46\t1.74\n--- src/backend/utils/adt/varlena.c\t2001/11/18 19:11:52\n***************\n*** 273,284 ****\n Datum\n textoctetlen(PG_FUNCTION_ARGS)\n {\n! \tstruct varattrib *t = (struct varattrib *) PG_GETARG_RAW_VARLENA_P(0);\n \n! \tif (!VARATT_IS_EXTERNAL(t))\n! \t\tPG_RETURN_INT32(VARATT_SIZE(t) - VARHDRSZ);\n! \n! 
\tPG_RETURN_INT32(t->va_content.va_external.va_extsize);\n }\n \n /*\n--- 273,281 ----\n Datum\n textoctetlen(PG_FUNCTION_ARGS)\n {\n! \ttext *arg = PG_GETARG_VARCHAR_P(0);\n \n! \tPG_RETURN_INT32(VARSIZE(arg) - VARHDRSZ);\n }\n \n /*",
"msg_date": "Sun, 18 Nov 2001 14:17:07 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "On Sun, 18 Nov 2001, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > I think \"the value of S\" implies \"the user-accessible representation of\n> > the value of S\", in the sense, \"How much memory do I need to allocate to\n> > store this value\".\n>\n> If I take that argument seriously, I have to conclude that OCTET_LENGTH\n> should return the string length measured in the current client encoding\n> (which may have little to do with its size in the server, if the\n> server's encoding is different). If the client actually retrieves the\n> string then that's how much memory he'll need.\n>\n> I presume that where you want to come out is OCTET_LENGTH = uncompressed\n> length in the server's encoding ... but so far no one has really made\n> a convincing argument why that answer is better or more spec-compliant\n> than any other answer. In particular, it's not obvious to me why\n> \"number of bytes we're actually using on disk\" is wrong.\n\nI'm not sure, but if we say that the on disk representation is the\nvalue of the character value expression whose size is being checked,\nwouldn't that be inconsistent with the other uses of the character value\nexpression in places like substr where we don't use the on disk\nrepresentation? Unless you're saying that the string value expression\nthat is that character value expression is the compressed one and\nthe character value expression is the uncompressed one.\n\n\n",
"msg_date": "Sun, 18 Nov 2001 11:40:59 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> On Sun, 18 Nov 2001, Tom Lane wrote:\n>> I presume that where you want to come out is OCTET_LENGTH = uncompressed\n>> length in the server's encoding ... but so far no one has really made\n>> a convincing argument why that answer is better or more spec-compliant\n>> than any other answer. In particular, it's not obvious to me why\n>> \"number of bytes we're actually using on disk\" is wrong.\n\n> I'm not sure, but if we say that the on disk representation is the\n> value of the character value expression whose size is being checked,\n> wouldn't that be inconsistent with the other uses of the character value\n\nYeah, it would be and is. In fact, the present code has some\ninteresting behaviors: if foo.x is a text value long enough to be\ntoasted, then you get different results from\n\n\tSELECT OCTET_LENGTH(x) FROM foo;\n\n\tSELECT OCTET_LENGTH(x || '') FROM foo;\n\nsince the result of the concatenation expression won't be compressed.\n\nI'm not actually here to defend the existing code; in fact I believe the\nXXX comment on textoctetlen questioning its correctness is mine. What\nI am trying to point out is that the spec is so vague that it's not\nclear what the correct answer is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Nov 2001 14:56:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
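Tom's `OCTET_LENGTH(x)` vs. `OCTET_LENGTH(x || '')` example can be illustrated outside the server with any general-purpose compressor: the same logical string has one byte count uncompressed and a different one when it happens to be stored compressed. Here zlib merely stands in for TOAST's LZ compression; the numbers are illustrative, not PostgreSQL's.

```python
import zlib

# The same logical value, measured two ways.
x = b"abc" * 1000                  # long, highly compressible value

stored = zlib.compress(x)          # roughly what a compressed datum holds
print(len(x))                      # logical octet length: 3000
print(len(stored) < len(x))        # True: the compressed form is far smaller

# Concatenation produces a fresh, uncompressed value, so a "compressed
# OCTET_LENGTH" would answer differently for x and x || '' even though
# the two strings are equal:
assert len(x + b"") == len(x)
```

That last point is the inconsistency Tom highlights: a length function whose answer depends on storage history, not on the value itself.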
{
"msg_contents": "> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Sun, 18 Nov 2001, Tom Lane wrote:\n> >> I presume that where you want to come out is OCTET_LENGTH = uncompressed\n> >> length in the server's encoding ... but so far no one has really made\n> >> a convincing argument why that answer is better or more spec-compliant\n> >> than any other answer. In particular, it's not obvious to me why\n> >> \"number of bytes we're actually using on disk\" is wrong.\n> \n> > I'm not sure, but if we say that the on disk representation is the\n> > value of the character value expression whose size is being checked,\n> > wouldn't that be inconsistent with the other uses of the character value\n> \n> Yeah, it would be and is. In fact, the present code has some\n> interesting behaviors: if foo.x is a text value long enough to be\n> toasted, then you get different results from\n> \n> \tSELECT OCTET_LENGTH(x) FROM foo;\n> \n> \tSELECT OCTET_LENGTH(x || '') FROM foo;\n> \n> since the result of the concatenation expression won't be compressed.\n> \n> I'm not actually here to defend the existing code; in fact I believe the\n> XXX comment on textoctetlen questioning its correctness is mine. What\n> I am trying to point out is that the spec is so vague that it's not\n> clear what the correct answer is.\n\nWell, if the standard is unclear, we should assume to return the most\nreasonable answer, which has to be non-compressed length. 
\n\nIn multibyte encodings, when we started returning length() in\n_characters_ instead of bytes, I assumed the major use for octet_length\nwas to return the number of bytes needed to hold the value on the client\nside.\n\nIn single byte encodings, octet_length is the same as length() so\nreturning a compressed length may make sense, but I don't think we want\ndifferent meanings for the function for single and multi-byte encodings.\n\nI guess the issue is that for single-byte encodings, octet_length is\npretty useless because it is the same as length, but for multi-byte\nencodings, octet_length is invaluable and almost has to return\nnon-compress bytes because uncompressed is that the client sees.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Nov 2001 16:23:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> octet_length is invaluable and almost has to return\n> non-compress bytes because uncompressed is that the client sees.\n\nWhat about encoding?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Nov 2001 17:35:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > octet_length is invaluable and almost has to return\n> > non-compress bytes because uncompressed is that the client sees.\n ^^^^\n what\n\n> What about encoding?\n\nSingle-byte encodings have the same character and byte lengths. Only\nmulti-byte encodings are different, right?\n\nIn thinking about it, I think the function is called octet_length()\nto emphasize is returns the length in octets (bytes) rather than the\nlength in characters.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Nov 2001 17:58:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> > What about encoding?\n> \n> Single-byte encodings have the same character and byte lengths. Only\n> multi-byte encodings are different, right?\n> \n> In thinking about it, I think the function is called octet_length()\n> to emphasize it returns the length in octets (bytes) rather than the\n> length in characters.\n\nI think Tom's point is whether octet_length() should regard input text\nas being encoded in the client-side encoding or not.\n\nMy vote is that octet_length() assumes the database encoding.\nIf you need the client-side encoded text length, you could do something\nlike:\n\nselect octet_length(convert('foo',pg_client_encoding()));\n\nNote that there was a nasty bug in convert() which prevented the above\nfrom working. I have committed fixes.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 19 Nov 2001 15:48:43 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
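Tatsuo's convert()/pg_client_encoding() recipe rests on the fact the thread keeps returning to: the same character string occupies a different number of octets in different encodings. A minimal sketch of that fact (plain Python, not PostgreSQL code; the codec names here merely stand in for PostgreSQL encoding names):

```python
s = "caf\u00e9"                       # 4 characters: c, a, f, e-acute

assert len(s) == 4                    # length(): counts characters, same in any encoding
assert len(s.encode("latin-1")) == 4  # octet_length under a single-byte encoding
assert len(s.encode("utf-8")) == 5    # octet_length under a multi-byte encoding
```

Under a single-byte encoding the character and octet counts coincide, which is exactly why the question only bites for multi-byte encodings.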
{
"msg_contents": "Tom Lane writes:\n\n> What\n> I am trying to point out is that the spec is so vague that it's not\n> clear what the correct answer is.\n\nI guess the authors of SQL92 never imagined someone would question what\n\"value of S\" means. In SQL99 they included it:\n\nSQL 99 Part 1, 4.4.3.2.\n\n A value of character type is a string (sequence) of characters\n drawn from some character repertoire.\n\nJust to be sure...\n\nSQL 99 Part 1, 3.1 q)\n\n q) sequence: An ordered collection of objects that are not\n necessarily distinct.\n\nI don't have a set theory text available, but I think this should give a\nfair indication that the number of bits in the value of S is the sum of\nthe bits in each individual character (which is in turn vaguely defined\nelsewhere in SQL99) -- at least in Euclidean memory architectures.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 19 Nov 2001 13:39:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I don't have a set theory text available, but I think this should give a\n> fair indication that the number of bits in the value of S is the sum of\n> the bits in each individual character (which is in turn vaguely defined\n> elsewhere in SQL99) -- at least in Euclidean memory architectures.\n\nBut \"how many bits in a character?\" is exactly the question at this\npoint. To be fair, I don't think our notion of on-the-fly encoding\ntranslation is envisioned anywhere in the SQL spec, so perhaps we\nshouldn't expect it to tell us which encoding to count the bits in.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Nov 2001 10:45:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Tom,\n\nWhile the text datatypes have additional issues with encodings, that is \nnot true for the bytea type. I think it does make sense that a client \nbe able to get the size in bytes that the bytea type value will return \nto the client. If you are storing files in a bytea column getting the \nfile size by calling octet_length would be very useful.\n\nthanks,\n--Barry\n\n\nTom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> \n>>I think \"the value of S\" implies \"the user-accessible representation of\n>>the value of S\", in the sense, \"How much memory do I need to allocate to\n>>store this value\".\n>>\n> \n> If I take that argument seriously, I have to conclude that OCTET_LENGTH\n> should return the string length measured in the current client encoding\n> (which may have little to do with its size in the server, if the\n> server's encoding is different). If the client actually retrieves the\n> string then that's how much memory he'll need.\n> \n> I presume that where you want to come out is OCTET_LENGTH = uncompressed\n> length in the server's encoding ... but so far no one has really made\n> a convincing argument why that answer is better or more spec-compliant\n> than any other answer. In particular, it's not obvious to me why\n> \"number of bytes we're actually using on disk\" is wrong.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\n\n",
"msg_date": "Mon, 19 Nov 2001 09:43:56 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "\nOK, I have applied this patch so text octet_length returns\nnon-compressed length of data, to match octet_length of other types.\n\nI also removed the XXX comments added by Tom.\n\n---------------------------------------------------------------------------\n\n> > I think \"the value of S\" implies \"the user-accessible representation of\n> > the value of S\", in the sense, \"How much memory do I need to allocate to\n> > store this value\".\n> > \n> > Furthermore, the size of the TOAST representation that is returned now is\n> > just one particular of several intermediate representations. For\n> > instance, it does not include the VARHDRSZ and it does not include the\n> > size of the tuple headers when it's stored externally. Thus, this size is\n> > heavily skewed toward low numbers and doesn't tell you much about either\n> > the disk end or the user's end.\n> \n> Yes, good arguments. If we want to implement storage_length at some\n> later time, I think the compressed length may be appropriate, but for\n> general use, I think we need to return the uncompressed length,\n> especially considering that multibyte makes the ordinary 2length return\n> number of characters, so users need a way to get byte length.\n> \n> Attached is a patch that makes text return the same value type as char()\n> and varchar() already do. As Tatsuo pointed out, they were\n> inconsistent. All the other octet_length() functions look fine so it\n> was only text that had this problem.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\n> Index: src/backend/utils/adt/varlena.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/backend/utils/adt/varlena.c,v\n> retrieving revision 1.74\n> diff -c -r1.74 varlena.c\n> *** src/backend/utils/adt/varlena.c\t2001/10/25 05:49:46\t1.74\n> --- src/backend/utils/adt/varlena.c\t2001/11/18 19:11:52\n> ***************\n> *** 273,284 ****\n> Datum\n> textoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \tstruct varattrib *t = (struct varattrib *) PG_GETARG_RAW_VARLENA_P(0);\n> \n> ! \tif (!VARATT_IS_EXTERNAL(t))\n> ! \t\tPG_RETURN_INT32(VARATT_SIZE(t) - VARHDRSZ);\n> ! \n> ! \tPG_RETURN_INT32(t->va_content.va_external.va_extsize);\n> }\n> \n> /*\n> --- 273,281 ----\n> Datum\n> textoctetlen(PG_FUNCTION_ARGS)\n> {\n> ! \ttext *arg = PG_GETARG_VARCHAR_P(0);\n> \n> ! \tPG_RETURN_INT32(VARSIZE(arg) - VARHDRSZ);\n> }\n> \n> /*\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 13:28:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> While the text datatypes have additional issues with encodings, that is \n> not true for the bytea type. I think it does make sense that a client \n> be able to get the size in bytes that the bytea type value will return \n> to the client.\n\nbytea does that already. It's only text that has (or had, till a few\nminutes ago) the funny behavior.\n\nI'm not set on the notion that octet_length should return on-disk size;\nthat's clearly not what's contemplated by SQL92, so I'm happy to agree\nthat if we want that we should add a new function to get it.\n(\"storage_length\", maybe.) What's bothering me right now is the\ndifference between client and server encodings. It seems that the only\nplausible use for octet_length is to do memory allocation on the client\nside, and for that purpose the length ought to be measured in the client\nencoding. People seem to be happy with letting octet_length take the\neasy way out (measure in the server encoding), and I'm trying to get\nsomeone to explain to me why that's the right behavior. I don't see it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Nov 2001 13:38:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> ! \ttext *arg = PG_GETARG_VARCHAR_P(0);\n\nEr, shouldn't that be PG_GETARG_TEXT_P?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Nov 2001 13:39:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> ! \ttext *arg = PG_GETARG_VARCHAR_P(0);\n> \n> Er, shouldn't that be PG_GETARG_TEXT_P?\n\nSorry, fixed. Cut/paste error.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 14:15:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "Summary:\n\nThere have been three ideas of what octet_length() should return:\n\n\t1) compressed on-disk storage length\n\t2) byte length in server-side encoding\n\t3) byte length in client-side encoding\n\n7.3 will do #2 for all data types. We didn't have text type doing #2 in\n7.1.X, but it appears that is the only release where octet_length(text)\nreturned #1. This is the patch that made octet_length(text) return #1\nin 7.1.X:\n\n Revision 1.62 / (download) - annotate - [select for diffs] , Wed Jul 5\n 23:11:35 2000 UTC (16 months, 2 weeks ago) by tgl \n Changes since 1.61: +12 -20 lines\n Diff to previous 1.61 \n\n Update textin() and textout() to new fmgr style. This is just phase\n one of updating the whole text datatype, but there are so dang many\n calls of these two routines that it seems worth a separate commit.\n\nThe open question is whether we should be doing #3. If you want to use\noctet_length to allocate space on the client side, #3 is really the\nproper value, as Tom has argued. Tatsuo is happy with #2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 14:34:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
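Bruce's three candidate semantics can be mocked up side by side (a Python sketch, not PostgreSQL code; zlib stands in for TOAST's LZ compression, and UTF-8/LATIN1 are assumed to be the server and client encodings respectively):

```python
import zlib

s = "\u00e9" * 1000                # 1000 e-acute characters

server = s.encode("utf-8")         # assume the server encoding is UNICODE (UTF-8)
client = s.encode("latin-1")       # assume the client encoding is LATIN1

assert len(server) == 2000               # (2) byte length in server-side encoding
assert len(client) == 1000               # (3) byte length in client-side encoding
assert len(zlib.compress(server)) < 1000 # (1) compressed storage length, smaller again
```

The same value yields three different numbers, which is why the choice of semantics matters to clients doing allocation.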
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Barry Lind <barry@xythos.com> writes:\n>\n>>While the text datatypes have additional issues with encodings, that is \n>>not true for the bytea type. I think it does make sense that a client \n>>be able to get the size in bytes that the bytea type value will return \n>>to the client.\n>>\n>\n>bytea does that already. It's only text that has (or had, till a few\n>minutes ago) the funny behavior.\n>\n>I'm not set on the notion that octet_length should return on-disk size;\n>that's clearly not what's contemplated by SQL92, so I'm happy to agree\n>that if we want that we should add a new function to get it.\n>(\"storage_length\", maybe.) What's bothering me right now is the\n>difference between client and server encodings. It seems that the only\n>plausible use for octet_length is to do memory allocation on the client\n>side,\n>\nAllocating memory seems to me to be the driver's (libpq, JDBC, ODBC, ...) \nproblem and\nnot something to be done by client code beforehand - at least for libpq \n(AFAIK) we\ndon't have any means of giving it a pre-allocated storage area for one \nfield.\n\nThere is enough information in the wire protocol for allocating right-sized \nchunks at the\ntime the query result is read. An additional call of \"SELECT \nOCTET_LENGTH(someCol)\"\nseems orders of magnitude slower than doing it at the right time in the \ndriver.\n\n>and for that purpose the length ought to be measured in the client\n>encoding. People seem to be happy with letting octet_length take the\n>easy way out (measure in the server encoding), and I'm trying to get\n>someone to explain to me why that's the right behavior.
I don't see it.\n>\nPerhaps we need another function \"OCTET_LENGTH(someCol, encoding)\" for\ngetting what we want, and also client_encoding() and server_encoding() \nfor supplying\nit some universal defaults?\n\nOTOH, from reading on Unicode I've come to the conclusion that there are \noften several\nways of expressing the same string in Unicode, so for a server encoding \nthat is not Unicode and\na client requesting Unicode (say UTF-8) there can be several different \nways to express\nthe same string. Thus there is no absolute OCTET_LENGTH in the \nclient_encoding for\nall cases. Thus giving the actual uncompressed length seems most reasonable.\n\nFor Unicode both in backend and frontend we could also make OCTET_LENGTH\nreturn not an int but an integer interval of the shortest and longest possible \nencodings ;)\n\n------------------\nHannu\n\n\n\n\n",
"msg_date": "Tue, 20 Nov 2001 01:40:35 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> It seems the behavior of OCTET_LENGTH varies acording to the\n> corresponding data type:\n> \n> TEXT: returns the size of data AFTER TOAST\n> VARCHAR and CHAR: returns the size of data BEFORE TOAST\n\nFixed in CVS. TEXT now like CHAR/VARCHAR.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 16:27:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "On Mon, Nov 19, 2001 at 02:34:56PM -0500, Bruce Momjian wrote:\n> Summary:\n> \n> There have been three ideas of what octet_length() should return:\n> \n> \t1) compressed on-disk storage length\n> \t2) byte length in server-side encoding\n> \t3) byte length in client-side encoding\n\n Having a choice would be very nice... Why not add all of them:\n\n octet_length_storage()\n octet_length_server()\n octet_length_client()\n \n and leave the problem of the right choice to the user. The standard \n octet_length() could then be an alias for 1), 2), or 3) -- depending \n on the result of this discussion.\n\n> The open question is whether we should be doing #3. If you want to use\n> octet_length to allocate space on the client side, #3 is really the\n\n If Tom needs to be sure, he can use octet_length_client().\n\n> proper value, as Tom has argued. Tatsuo is happy with #2.\n\n...and Tatsuo can use octet_length_server(). The important thing\nis that both will still be happy :-)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 20 Nov 2001 10:12:25 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> > There have been three ideas of what octet_length() should return:\n> > \n> > \t1) compressed on-disk storage length\n> > \t2) byte length in server-side encoding\n> > \t3) byte length in client-side encoding\n> \n> Having a choice would be very nice... Why not add all of them:\n> \n> octet_length_storage()\n> octet_length_server()\n> octet_length_client()\n\nWe only need one of octet_length_server() or octet_length_client().\nWe could emulate the rest using convert() etc.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 20 Nov 2001 18:53:47 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> > There have been three ideas of what octet_length() should return:\n> > 1) compressed on-disk storage length\n> > 2) byte length in server-side encoding\n> > 3) byte length in client-side encoding\n...\n> > The open question is whether we should be doing #3.\n\nThere is no question in my mind that (3) must be the result of\noctet_length(). Any of the other options may give an interesting result,\nbut of no practical use to a client trying to retrieve data. And\neverything is a client!\n\n - Thomas\n",
"msg_date": "Tue, 20 Nov 2001 13:52:35 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> > > There have been three ideas of what octet_length() should return:\n> > > 1) compressed on-disk storage length\n> > > 2) byte length in server-side encoding\n> > > 3) byte length in client-side encoding\n> ...\n> > > The open question is whether we should be doing #3.\n> \n> There is no question in my mind that (3) must be the result of\n> octet_length(). Any of the other options may give an interesting result,\n> but of no practical use to a client trying to retrieve data. And\n> everything is a client!\n\nAdded to TODO:\n\n\t* Add octet_length_server() and octet_length_client() (Thomas, Tatsuo)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Nov 2001 10:27:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> > > There have been three ideas of what octet_length() should return:\n> > > 1) compressed on-disk storage length\n> > > 2) byte length in server-side encoding\n> > > 3) byte length in client-side encoding\n> ...\n> > > The open question is whether we should be doing #3.\n> \n> There is no question in my mind that (3) must be the result of\n> octet_length(). Any of the other options may give an interesting result,\n> but of no practical use to a client trying to retrieve data. And\n> everything is a client!\n\nAlso added to TODO:\n\n\t* Make octet_length_client the same as octet_length() \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Nov 2001 10:28:32 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "\n\nThomas Lockhart wrote:\n\n>>>There have been three ideas of what octet_length() should return:\n>>> 1) compressed on-disk storage length\n>>> 2) byte length in server-side encoding\n>>> 3) byte length in client-side encoding\n>>>\n>...\n>\n>>>The open question is whether we should be doing #3.\n>>>\n>\n>There is no question in my mind that (3) must be the result of\n>octet_length(). Any of the other options may give an interesting result,\n>but of no practical use to a client trying to retrieve data.\n>\nWhat practical use does #3 give ;) Do you really envision a program that \ndoes 2\nseparate queries to retrieve some string, first to query its storage \nlength and then\nto actually read it, instead of just reading it?\n\nI don't think we even have an interface in any of our libs where we can \ngive a\npre-allocated buffer to a client library to fill in. Or do we?\n\n>And everything is a client!\n>\nSo in a PL/PgSQL function doing some data manipulation through SPI, who \nis the\n\"client\" - the server or the client or some third party?\n\n---------------\nHannu\n\n",
"msg_date": "Wed, 21 Nov 2001 00:52:16 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "\n\nTatsuo Ishii wrote:\n\n>>>There have been three ideas of what octet_length() should return:\n>>>\n>>>\t1) compressed on-disk storage length\n>>>\t2) byte length in server-side encoding\n>>>\t3) byte length in client-side encoding\n>>>\n>> Having a choice would be very nice... Why not add all of them:\n>>\n>> octet_length_storage()\n>> octet_length_server()\n>> octet_length_client()\n>>\n>\n>We only need one of octet_length_server() or octet_length_client().\n>\nAnd I guess that octet_length_server() is cheaper as it does not do a \nconvert()\nwhen not needed.\n\n>\n>We could emulate the rest using convert() etc.\n>--\n>Tatsuo Ishii\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n>\n\n\n",
"msg_date": "Wed, 21 Nov 2001 00:57:00 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "\n\nBruce Momjian wrote:\n\n>>>>There have been three ideas of what octet_length() should return:\n>>>> 1) compressed on-disk storage length\n>>>> 2) byte length in server-side encoding\n>>>> 3) byte length in client-side encoding\n>>>>\n>>...\n>>\n>>>>The open question is whether we should be doing #3.\n>>>>\n>>There is no question in my mind that (3) must be the result of\n>>octet_length(). Any of the other options may give an interesting result,\n>>but of no practical use to a client trying to retrieve data. And\n>>everything is a client!\n>>\n>\n>Also added to TODO:\n>\n>\t* Make octet_length_client the same as octet_length() \n>\nWill this break backward compatibility?\n\n-------------\nHannu\n\n",
"msg_date": "Wed, 21 Nov 2001 02:55:38 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> >>>>There have been three ideas of what octet_length() should return:\n> >>>> 1) compressed on-disk storage length\n> >>>> 2) byte length in server-side encoding\n> >>>> 3) byte length in client-side encoding\n> >>>>\n> >>...\n> >>\n> >>>>The open question is whether we should be doing #3.\n> >>>>\n> >>There is no question in my mind that (3) must be the result of\n> >>octet_length(). Any of the other options may give an interesting result,\n> >>but of no practical use to a client trying to retrieve data. And\n> >>everything is a client!\n> >>\n> >\n> >Also added to TODO:\n> >\n> >\t* Make octet_length_client the same as octet_length() \n> >\n> Will this break backward compatibility?\n\nWell, sort of. 7.1 had text returning the compressed length. We changed\nthat to the server-side encoding in 7.2. Changing that to the client\nencoding will break clients, but what meaningful thing could they do with\nthe server-side encoding?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Nov 2001 20:10:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Also added to TODO:\n> >\n> > \t* Make octet_length_client the same as octet_length()\n> \n> Have we decided on that one yet?\n\nUh, Thomas said he was certain about it. I will add a question mark to\nthe TODO item.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 13:48:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Also added to TODO:\n>\n> \t* Make octet_length_client the same as octet_length()\n\nHave we decided on that one yet?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 21 Nov 2001 19:54:38 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "Tom Lane writes:\n\n> What's bothering me right now is the difference between client and\n> server encodings. It seems that the only plausible use for\n> octet_length is to do memory allocation on the client side, and for\n> that purpose the length ought to be measured in the client encoding.\n\nOCTET_LENGTH returns the size of its argument, not the size of some\npossible future shape of that argument. There is absolutely no guarantee\nthat the string that is processed by OCTET_LENGTH will ever reach any kind\nof client. There are procedural languages, for instance, or CREATE TABLE\nAS.\n\nWhether or not this behaviour is most likely or most useful is a different\nquestion, but let's not silently readopt standard functions for\nnon-standard purposes -- we've just gotten past that one.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 21 Nov 2001 19:55:20 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> What's bothering me right now is the difference between client and\n>> server encodings.\n\n> OCTET_LENGTH returns the size of its argument, not the size of some\n> possible future shape of that argument.\n\nThat would serve equally well as an argument for returning the\ncompressed length of the string, I think. You'll need to do better.\n\nMy take on it is that when a particular client encoding is specified,\nPostgres does its best to provide the illusion that your data actually\nis stored in that encoding. If we don't make OCTET_LENGTH agree, then\nwe're breaking the illusion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Nov 2001 14:00:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> Tom Lane writes:\n> \n> > What's bothering me right now is the difference between client and\n> > server encodings. It seems that the only plausible use for\n> > octet_length is to do memory allocation on the client side, and for\n> > that purpose the length ought to be measured in the client encoding.\n> \n> OCTET_LENGTH returns the size of its argument, not the size of some\n> possible future shape of that argument. There is absolutely no guarantee\n> that the string that is processed by OCTET_LENGTH will ever reach any kind\n> of client. There are procedural languages, for instance, or CREATE TABLE\n> AS.\n\nYes, agreed. I argued that server-side octet_length would be valuable\nfor server-side functions. However, others felt client-side was more\nimportant.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 14:31:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> OCTET_LENGTH returns the size of its argument, not the size of some\n> possible future shape of that argument. There is absolutely no guarantee\n> that the string that is processed by OCTET_LENGTH will ever reach any kind\n> of client. There are procedural languages, for instance, or CREATE TABLE\n> AS.\n> \n> Whether or not this behaviour is most likely or most useful is a different\n> question, but let's not silently readopt standard functions for\n> non-standard purposes -- we've just gotten past that one.\n\nI think the essential problem with OCTET_LENGTH (and with any other\ntext function) is that we currently do not have a way to associate\nencoding information with each text object. Probably we could solve\nthis after implementing the CREATE CHARACTER SET stuff.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 22 Nov 2001 08:36:20 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Tom Lane writes:\n\n> > OCTET_LENGTH returns the size of its argument, not the size of some\n> > possible future shape of that argument.\n>\n> That would serve equally well as an argument for returning the\n> compressed length of the string, I think. You'll need to do better.\n\nTOAST is not part of the conceptual computational model. The fact that\nthe compressed representation is available to functions at all is somewhat\npeculiar (although I'm not questioning it). I've already attempted to\nshow that returning the size of the compressed representation doesn't fit\nthe letter of the standard.\n\n> My take on it is that when a particular client encoding is specified,\n> Postgres does its best to provide the illusion that your data actually\n> is stored in that encoding. If we don't make OCTET_LENGTH agree, then\n> we're breaking the illusion.\n\nThe way I've seen it we consider the encoding conversion to happen \"on the\nwire\" while both the server and the client run in their own encoding. In\nthat model it's appropriate that computations in the server use the\nencoding in the server.\n\nHowever, if the model is that it should appear to clients that the entire\nsetup magically runs in \"their\" encoding then the other behaviour would be\nbetter. In that case the database encoding is really only an optimization\nhint because the actual encoding in the server is of no matter. This\nmodel would certainly be attractive as well, but there could be a few\nproblems. For instance, I don't know if the convert() function would make\nsense then. (Does it even make sense now?)\n\nAlso, we do need to consider carefully how to interface this \"illusion\" to\noperations contained strictly within the server (e.g., CREATE TABLE AS,\ncolumn defaults) and to procedural languages that may or may not come with\nencoding ideas of their own.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 22 Nov 2001 17:29:57 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> However, if the model is that it should appear to clients that the entire\n> setup magically runs in \"their\" encoding then the other behaviour would be\n> better. In that case the database encoding is really only an optimization\n> hint because the actual encoding in the server is of no matter. This\n> model would certainly be attractive as well, but there could be a few\n> problems. For instance, I don't know if the convert() function would make\n> sense then. (Does it even make sense now?)\n\nI'm not sure that it does; it seems not to fit the model well at all.\nFor example, if I do \"SELECT convert(somestring, someencoding)\" where\nsomeencoding is anything but the server's encoding, then I will get\nbogus results, because when the data is returned to the client it\nwill get an inappropriate server-to-client-encoding translation\napplied to it. Even if I ask to convert to the client encoding,\nI will get wrong answers (two passes of the conversion). Whatever you\nmight expect convert to do, that wouldn't seem to be it.\n\n> Also, we do need to consider carefully how to interface this \"illusion\" to\n> operations contained strictly within the server (e.g., CREATE TABLE AS,\n> column defaults) and to procedural languages that may or may not come with\n> encoding ideas of their own.\n\nTrue. I think that pltcl has now got this more or less licked, but\nplperl hasn't ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Nov 2001 11:57:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> problems. For instance, I don't know if the convert() function would make\n> sense then. (Does it even make sense now?)\n\nYes. Consider you have UNICODE database and want to sort by French or\nwhatever LATIN locale.\n\n\t SELECT * FROM t1 ORDER BY convert(text_column,'LATIN1');\n\nwould be the only way to accomplish that.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 23 Nov 2001 16:34:39 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Yes. Consider you have UNICODE database and want to sort by French or\n> whatever LATIN locale.\n> \t SELECT * FROM t1 ORDER BY convert(text_column,'LATIN1');\n> would be the only way to accomplish that.\n\nThat in itself would not get the job done; how is the sort operator\nto know what collation order you want?\n\nThe SQL92 spec suggests that the syntax should be\n\n\t... ORDER BY text_column COLLATE French;\n\n(note collation names are not standardized AFAICT). Seems to me it\nshould then be the system's responsibility to make this happen,\nincluding any encoding conversion that might be needed before the\ncomparisons could be done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Nov 2001 10:50:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > Yes. Consider you have UNICODE database and want to sort by French or\n> > whatever LATIN locale.\n> > SELECT * FROM t1 ORDER BY convert(text_column,'LATIN1');\n> > would be the only way to accomplish that.\n> \n> That in itself would not get the job done; how is the sort operator\n> to know what collation order you want?\n> \n> The SQL92 spec suggests that the syntax should be\n> \n> ... ORDER BY text_column COLLATE French;\n> \n> (note collation names are not standardized AFAICT). Seems to me it\n> should then be the system's responsibility to make this happen,\n> including any encoding conversion that might be needed before the\n> comparisons could be done.\n\nThanks to PostgreSQL's flexibility you can currently make a contrib \nfunction convert(text_column,'LATIN1',locale) that returns a (new) \ntext_with_locale type that has locale-aware comparison operators.\n\n--------------\nHannu\n",
"msg_date": "Fri, 23 Nov 2001 18:12:53 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > problems. For instance, I don't know if the convert() function would make\n> > sense then. (Does it even make sense now?)\n>\n> Yes. Consider you have UNICODE database and want to sort by French or\n> whatever LATIN locale.\n>\n> \t SELECT * FROM t1 ORDER BY convert(text_column,'LATIN1');\n>\n> would be the only way to accomplish that.\n\nI don't think so. The sort order is independent of the character\nencoding, and vice versa. It must be, because\n\n1) One language can be represented in different encodings and should\nobviously still sort the same.\n\n2) One encoding can serve for plenty of languages, which all sort\ndifferently.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 23 Nov 2001 22:58:12 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> > Yes. Consider you have UNICODE database and want to sort by French or\n> > whatever LATIN locale.\n> >\n> > \t SELECT * FROM t1 ORDER BY convert(text_column,'LATIN1');\n> >\n> > would be the only way to accomplish that.\n> \n> I don't think so. The sort order is independent of the character\n> encoding, and vice versa. It must be, because\n> \n> 1) One language can be represented in different encodings and should\n> obviously still sort the same.\n> \n> 2) One encoding can serve for plenty of languages, which all sort\n> differently.\n\nI assume you are talking about the concept of SQL92/99's COLLATE\nsyntax. But I just talked about what we could do in 7.2, which\napparently does not have SQL92's COLLATE syntax.\n\nBTW,\n\n> I don't think so. The sort order is independent of the character\n> encoding, and vice versa. It must be, because\n\nThis seems different from SQL's CREATE COLLATION syntax.\nFrom SQL99's CREATE COLLATION definition:\n\n CREATE COLLATION <collation name> FOR\n <character set specification>\n FROM <existing collation name>\n [ <pad characteristic> ]\n\nSo it seems a collation depends on a character set.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 24 Nov 2001 09:32:22 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "> > Yes. Consider you have UNICODE database and want to sort by French or\n> > whatever LATIN locale.\n> > \t SELECT * FROM t1 ORDER BY convert(text_column,'LATIN1');\n> > would be the only way to accomplish that.\n> \n> That in itself would not get the job done; how is the sort operator\n> to know what collation order you want?\n\nI assume locale support is enabled, of course.\n\n> The SQL92 spec suggests that the syntax should be\n> \n> \t... ORDER BY text_column COLLATE French;\n> \n> (note collation names are not standardized AFAICT). Seems to me it\n> should then be the system's responsibility to make this happen,\n> including any encoding conversion that might be needed before the\n> comparisons could be done.\n\nI'm not talking about our (hopefully) upcoming implementation of SQL92\nCOLLATE syntax. It's ideal and should be our goal, but what I have\nshown is how we could do the job in 7.2 now.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 24 Nov 2001 09:33:17 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> > I don't think so. The sort order is independent of the character\n> > encoding, and vice versa. It must be, because\n>\n> This seems different from SQL's CREATE COLLATION syntax.\n> From SQL99's CREATE COLLATION definition:\n>\n> CREATE COLLATION <collation name> FOR\n> <character set specification>\n> FROM <existing collation name>\n> [ <pad characteristic> ]\n>\n> So it seems a collation depends on a character set.\n\nI see. But that really doesn't have anything to do with reality. In\nfact, it completely undermines the transparency of the character set\nencoding that we're probably trying to achieve.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 25 Nov 2001 23:31:23 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: OCTET_LENGTH is wrong "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Tatsuo Ishii writes:\n> \n> > > I don't think so. The sort order is independent of the character\n> > > encoding, and vice versa. It must be, because\n> >\n> > This seems different from SQL's CREATE COLLATION syntax.\n> > From SQL99's CREATE COLLATION definition:\n> >\n> > CREATE COLLATION <collation name> FOR\n> > <character set specification>\n> > FROM <existing collation name>\n> > [ <pad characteristic> ]\n> >\n> > So it seems a collation depends on a character set.\n> \n> I see. But that really doesn't have anything to do with reality. In\n> fact, it completely undermines the transparency of the character set\n> encoding that we're probably trying to achieve.\n\nCOLLATION being independent of character set is a separate problem \nfrom COLLATION being _defined_ on character set - without a known \ncharacter set I can't see how you can define it. \ni.e. \"COLLATION for any 8-bit charset\" just does not make sense.\n\n-----------------\nHannu\n",
"msg_date": "Mon, 26 Nov 2001 09:50:40 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
},
{
"msg_contents": "> > I see. But that really doesn't have anything to do with reality. In\n> > fact, it completely undermines the transparency of the character set\n> > encoding that we're probably trying to achieve.\n> \n> COLLATION being independent of character set is a separate problem \n> from COLLATION being _defined_ on character set - without a known \n> character set I can't see how you can define it. \n> i.e. \"COLLATION for any 8-bit charset\" just does not make sense.\n\nCorrect. An IGNORE_CASE collation will not apply to some languages that\ndo not have an upper/lower case concept, such as Japanese.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 26 Nov 2001 17:45:48 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OCTET_LENGTH is wrong"
}
]
[
{
"msg_contents": "The current beta tarballs don't contain any documentation. Is someone\ngoing to reinstate the old documentation build on postgresql.org?\n\nAlso, currently the sub-tarballs don't get built correctly. This is\nactually due to the above problem. I also suggest that the release\nbuilding script do some error checking, because these hiccups around\nrelease time just keep piling up.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 17 Nov 2001 18:39:37 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "No documentation in beta tarballs"
},
{
"msg_contents": "> The current beta tarballs don't contain any documentation. Is someone\n> going to reinstate the old documentation build on postgresql.org?\n\nI was wondering about that. :-)\n\n> Also, currently the sub-tarballs don't get build correctly. This is\n> actually due to the above problem. I also suggest that the release\n> building script do some error checking, because these hickups around\n> release time just keep piling up.\n\nThere is more. As someone pointed out, and I can confirm, the snapshots\nhave the beta1 inside the tarball:\n\n#$ lf /wrk/tmp/postgresql-snapshot/\nCOPYRIGHT README contrib/\nGNUmakefile.in aclocal.m4 doc/\nHISTORY config/ postgresql-7.2b1/\n\n ^^^^^^^^^^^^^^^^^\n\nINSTALL configure* register.txt\nMakefile configure.in src/\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Nov 2001 13:17:34 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: No documentation in beta tarballs"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> There is more. As someone pointed out, and I can confirm, the snapshots\n> have the beta1 inside the tarball:\n\nThe distribution build errored out (because the documentation was missing)\nand therefore this directory didn't get cleaned up. Later, the version\nnumber was changed so that this directory was no longer recognized as\nbelonging under build process control.\n\nI guess this directory should be removed by make clean, but that doesn't\nguard against the second problem; you simply need to watch for that.\n\nBtw., I think each labeled distribution should be built from a clean cvs\ncheckout or cvs export. If you keep building releases from the same tree\nyou're liable to pile up core files and other garbage as hiccups happen.\nYou'd need to start with providing consistent tags, however.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 19 Nov 2001 15:39:42 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: No documentation in beta tarballs"
}
]
[
{
"msg_contents": " P O S T G R E S Q L\n\n 7 . 2 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nFix geometry expected files\nComplete timestamp/current changes\nPLPython security fix\nRemove beta1 directory from snapshot\nFix octet_length() for TOAST values\n\nDocumentation Changes\n---------------------\nBuild manual pages\nComplete timestamp/current changes\nFix documentation build\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Nov 2001 15:19:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Open items"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> PLPython security fix\n\nThat's done, unless Kevin or Brad come back with corrections to my\nmerge of their patches.\n\nAnother thing I'm concerned about is Bernd Tegge's report of regression\ntest failure on Alpha/Tru64, but since I failed to duplicate the problem\non Alpha/Linux, there's not much I can do about it at the moment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Nov 2001 16:27:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Open items "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > PLPython security fix\n> \n> That's done, unless Kevin or Brad come back with corrections to my\n> merge of their patches.\n\nUpdated. I forgot to remove it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 17 Nov 2001 20:46:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Open items"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Another thing I'm concerned about is Bernd Tegge's report of regression\n> test failure on Alpha/Tru64\n\nYes, I have them too. Unfortunately, I didn't have time to look into it\nwith more attention and I didn't want to report just \"I have regression\ntest errors\"...\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Mon, 19 Nov 2001 09:51:22 +0200",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Open items"
}
]
[
{
"msg_contents": "Here is a report from a user in Japan. I confirmed it happens in\ncurrent.\n\nDROP TABLE t1;\nCREATE TABLE t1 ( name TEXT, n INTEGER);\nDROP TABLE t2;\nCREATE TABLE t2 ( name TEXT, n INTEGER);\nDROP TABLE t3;\nCREATE TABLE t3 ( name TEXT, n INTEGER);\n\nINSERT INTO t1 VALUES ( 'aa', 11 );\nINSERT INTO t2 VALUES ( 'aa', 12 );\nINSERT INTO t2 VALUES ( 'bb', 22 );\nINSERT INTO t3 VALUES ( 'aa', 13 );\nINSERT INTO t3 VALUES ( 'cc', 33 );\n\nSELECT * FROM t1 FULL JOIN t2 USING (name) FULL JOIN t3 USING (name); -- NG\nERROR: FULL JOIN is only supported with mergejoinable join conditions\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 18 Nov 2001 20:57:39 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "full outer join bug?"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> SELECT * FROM t1 FULL JOIN t2 USING (name) FULL JOIN t3 USING (name);\n> ERROR: FULL JOIN is only supported with mergejoinable join conditions\n\nI think we're kinda stuck with that in the near term. A possible\nworkaround is\n\nSELECT * FROM t1 FULL JOIN t2 on t1.name=t2.name\nFULL JOIN t3 on t1.name=t3.name;\n\nor similarly\n\nSELECT * FROM t1 FULL JOIN t2 on t1.name=t2.name\nFULL JOIN t3 on t2.name=t3.name;\n\neach of which is slightly different from the semantics of the original\nquery, but might be close enough for your purposes.\n\nThe problem is that \"name\" coming out of the t1/t2 full join is not a\nsimple variable: it's actually a \"COALESCE(t1.name,t2.name)\" construct.\nAnd the mergejoin code doesn't support mergejoining on anything but\nsimple variables. And our other join methods don't support FULL JOIN.\nSo there's no way to build a working plan.\n\nI have plans to revise the handling of join variables at some point\nin the future, probably as part of the fabled querytree redesign.\nAnd mergejoining on expressions should be allowed too, sooner or later.\nNeither one is going to happen for 7.2 though ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Nov 2001 14:20:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: full outer join bug? "
},
{
"msg_contents": "I wrote:\n> I have plans to revise the handling of join variables at some point\n> in the future, probably as part of the fabled querytree redesign.\n> And mergejoining on expressions should be allowed too, sooner or later.\n> Neither one is going to happen for 7.2 though ...\n\nThere probably ought to be something in the master TODO list about\nthese. Bruce, would you add something along the lines of:\n\n* Nested FULL OUTER JOINs don't work (Tom)\n* Allow merge and hash joins on expressions not just simple variables (Tom)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Nov 2001 15:53:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: full outer join bug? "
},
{
"msg_contents": "> I wrote:\n> > I have plans to revise the handling of join variables at some point\n> > in the future, probably as part of the fabled querytree redesign.\n> > And mergejoining on expressions should be allowed too, sooner or later.\n> > Neither one is going to happen for 7.2 though ...\n> \n> There probably ought to be something in the master TODO list about\n> these. Bruce, would you add something along the lines of:\n> \n> * Nested FULL OUTER JOINs don't work (Tom)\n> * Allow merge and hash joins on expressions not just simple variables (Tom)\n\nAdded.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Nov 2001 16:17:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: full outer join bug?"
}
]
[
{
"msg_contents": "Hi all,\n\n In looking over the TODO doc, I saw\n\n* Add table name mapping for numeric file names\n\n After reading the thread that bore that entry, I'd like to \nsubmit a bit of code for consideration, since it solves at least one \nof the problems raised in that thread. You may fetch it from cvs;\n \n sh$ cvs -d :pserver:anoncvs@rcfile.org:/var/cvs login\n [no password, just press enter]\n sh$ cvs -d :pserver:anoncvs@rcfile.org:/var/cvs co pg_stat_du\n sh$ echo \"read the README\"\n\nIt's been tested with 7.1 and 7.2bN and is basically a few c functions\nthat allow (approximate) reports of disk usage (in Kb) per database, \nand per relation (only on the current database, for now).\n\nbrent=# SELECT datname, pg_du_db(datname) AS diskusage FROM pg_database;\n datname | diskusage \n-----------+-----------\n brent | 1785\n template1 | 1713\n template0 | 1713\n(3 rows)\n\nbrent=# select relname,pg_du_rel(relname) as diskusage from pg_class \nbrent-# where (relkind='r' or relkind='i') order by diskusage desc;\n relname | diskusage \n---------------------------------+-----------\n pg_proc | 252\n pg_proc_proname_narg_type_index | 236\n pg_description | 100\n pg_attribute | 92\n pg_operator | 84\n...\n\n\n If you guys could suggest what needs to be added/modified to allow\nBruce to mark off that TODO item, I can probably get that done later\ntoday.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Sun, 18 Nov 2001 07:36:04 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": true,
"msg_subject": "TODO item inquiry/claim"
},
{
"msg_contents": "Brent Verner writes:\n\n> * Add table name mapping for numeric file names\n\n> It's been tested with 7.1 and 7.2bN and is basically a few c functions\n> that allow (approximate) reports of disk usage (in Kb) per database,\n> and per relation (only on the current database, for now).\n\nI've written one like that a while ago:\n\nhttp://webmail.postgresql.org/~petere/dbsize.html\n\nThe tarball can be rolled into contrib -- now that I think of it I don't\nknow why I never did that.\n\nNever imagined this would have anything to do with that TODO item, though.\nI figured oid2name accomplished that.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 18 Nov 2001 18:17:24 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: TODO item inquiry/claim"
},
{
"msg_contents": "On 18 Nov 2001 at 18:17 (+0100), Peter Eisentraut wrote:\n| Brent Verner writes:\n| \n| > * Add table name mapping for numeric file names\n| \n| > It's been tested with 7.1 and 7.2bN and is basically a few c functions\n| > that allow (approximate) reports of disk usage (in Kb) per database,\n| > and per relation (only on the current database, for now).\n| \n| I've written one like that a while ago:\n| \n| http://webmail.postgresql.org/~petere/dbsize.html\n| \n| The tarball can be rolled into contrib -- now that I think of it I don't\n| know why I never did that.\n\n Can you put your code in contrib? I've seen at least a few other \nusers wanting to measure disk use.\n\n| Never imagined this would have anything to do with that TODO item, though.\n| I figured oid2name accomplished that.\n\n Yeah, but it addresses one inconvenience caused by the oid-named files.\nEven after reading through the thread[1], I'm not sure what the desired\nsolution to the problem is. There seem to be two reasons to know\nthe filename->(db|rel) mapping.\n\n 1) resource usage [solved]\n 2) data recovery from db failure.\n\n Are there any other reasons an admin needs to know the actual\nfilenames the system is using? Please reply if you have any.\n\n I really have no idea what all would be involved in recovering a\ndatabase (or a single relation, if possible), so I would appreciate\nsuggestions on what needs to be done to address this issue WRT the\nfilename->(db|rel) mappings. There was not a consensus on what the\nproper solution would look like; some suggested maintaining symlinks,\nothers suggested a separate file of mappings. If someone would tell\nme what a reliable solution would be, I'll work at implementing it\nfor inclusion in one of the 7.2.X maintenance releases.\n\ncheers.\n brent\n\n[1] http://fts.postgresql.org/db/mw/msg.html?mid=114680\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Sun, 18 Nov 2001 18:25:34 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": true,
"msg_subject": "Re: TODO item inquiry/claim"
},
{
"msg_contents": "Brent Verner wrote:\n> On 18 Nov 2001 at 18:17 (+0100), Peter Eisentraut wrote:\n> | Brent Verner writes:\n> | \n> | > * Add table name mapping for numeric file names\n> | \n> | > It's been tested with 7.1 and 7.2bN and is basically a few c functions\n> | > that allow (approximate) reports of disk usage (in Kb) per database,\n> | > and per relation (only on the current database, for now).\n> | \n> | I've written one like that a while ago:\n> | \n> | http://webmail.postgresql.org/~petere/dbsize.html\n> | \n> | The tarball can be rolled into contrib -- now that I think of it I don't\n> | know why I never did that.\n> \n> Can you put your code in contrib? I've seen at least a few other \n> users wanting to measure disk use.\n\nDone.\n\n> | Never imagined this would have anything to do with that TODO item, though.\n> | I figured oid2name accomplished that.\n> \n> Yeah, but it addresses one inconvenience caused by the oid-named files.\n> Even after reading through the thread[1], I'm not sure what the desired\n> solution to the problem is. There seem to be two reasons to know\n> the filename->(db|rel) mapping.\n> \n> 1) resource usage [solved]\n\nYes.\n\n> 2) data recovery from db failure.\n> \n> Are there any other reasons an admin needs to know the actual\n> filenames the system is using? Please reply if you have any.\n> \n> I really have no idea what all would be involved in recovering a\n> database (or a single relation, if possible), so I would appreciate\n> suggestions on what needs to be done to address this issue WRT the\n> filename->(db|rel) mappings. There was not a consensus on what the\n> proper solution would look like; some suggested maintaining symlinks,\n> others suggested a separate file of mappings. If someone would tell\n> me what a reliable solution would be, I'll work at implementing it\n> for inclusion in one of the 7.2.X maintenance releases.\n\nI wish I knew the answer. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 18:06:12 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TODO item inquiry/claim"
}
]
[
{
"msg_contents": "I just noticed that psql is now printing function argument/result types\nin a less than pleasing fashion:\n\nregression=# \\df length\n List of functions\n Result data type | Name | Argument data types\n------------------+--------+---------------------\n integer | length | \"bit\"\n integer | length | bpchar\n integer | length | character varying\n integer | length | text\n ....\n\nIt used to show these first two as unquoted bit and character,\nrespectively. The reason for the change is my recent twiddling\nto ensure that pg_dump would dump the types of columns with -1\ntypmod in an appropriate fashion.\n\nI think an appropriate fix would be to make the format_type function\ndistinguish between format_type(typeoid, -1) and format_type(typeoid,\nNULL), which it currently treats the same. The former could be taken\nto mean \"give me the type name for a column with typmod -1\" whereas\nthe latter could be taken to mean \"give me a type name in a context\nwhere there is no typmod\", such as a function argument/result type.\n\nLooking through the existing uses of format_type, it seems that all\nthe call sites have the right choice already, so this behavior is not\ntoo unreasonable. Any objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Nov 2001 23:37:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "format_type infelicity"
},
{
"msg_contents": "Tom Lane writes:\n\n> I think an appropriate fix would be to make the format_type function\n> distinguish between format_type(typeoid, -1) and format_type(typeoid,\n> NULL), which it currently treats the same. The former could be taken\n> to mean \"give me the type name for a column with typmod -1\" whereas\n> the latter could be taken to mean \"give me a type name in a context\n> where there is no typmod\", such as a function argument/result type.\n\nThis was the idea, but until recently there was no actual need for\nprinting something different.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 19 Nov 2001 15:40:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: format_type infelicity"
}
]
[
{
"msg_contents": " \n> > There is not much point in arguing a specific query case,\n> It is no specific query case. It is the speed of an index scan which\n> goes like N if you do it with PostgreSQL and it goes like log N if\n> you do not have to look back into the table like MS SQL server does.\n\nI cannot see why you keep saying that. It is simply not true.\nMS SQL shows a behavior of O(N); it is simply that PostgreSQL,\nbecause of its well-described methodology, takes longer per affected row.\nThe speed difference is linear, no matter how many rows\nare affected.\n\nAndreas \n",
"msg_date": "Mon, 19 Nov 2001 13:55:10 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "On Mon, 19 Nov 2001, Zeugswetter Andreas SB SD wrote:\n\n> > It is no specific query case. It is the speed of an index scan which\n> > goes like N if you do it with PostgreSQL and it goes like log N if\n> > you do not have to look back into the table like MS SQL server does.\n>\n> I cannot see why you keep saying that. It is simply not true.\n> MS SQL shows a behavior of O(N); it is simply that PostgreSQL,\n> because of its well-described methodology, takes longer per affected row.\n> The speed difference is linear, no matter how many rows\n> are affected.\nI'm basing my assumption on the statement of my colleague. He\ntold me that consistent index usage results in O(log N) behaviour.\nI'm really no expert in database theory, but if you like I can forward\nyour question.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Mon, 19 Nov 2001 14:27:58 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "\n\nTille, Andreas wrote:\n\n>On Mon, 19 Nov 2001, Zeugswetter Andreas SB SD wrote:\n>\n>>>It is no specific query case. It is the speed of an index scan which\n>>>goes like N if you do it with PostgreSQL and it goes like log N if\n>>>you do not have to look back into the table like MS SQL server does.\n>>>\n>>I cannot see why you keep saying that. It is simply not true.\n>>MS SQL shows a behavior of O(N); it is simply that PostgreSQL,\n>>because of its well-described methodology, takes longer per affected row.\n>>The speed difference is linear, no matter how many rows\n>>are affected.\n>>\n>I'm basing my assumption on the statement of my colleague. He\n>told me that consistent index usage results in O(log N) behaviour.\n>\nSearching through the index only vs. searching through the index + looking up\neach tuple in the main table can be better than linear, if the tuples are\nscattered throughout the main table.\n\nSearching through the index only is probably faster by roughly a factor of\n2 * (size_of_heap_tuple/size_of_index_entry) in your case, where you want\nto count about half of the rows in the table.\n\n----------------\nHannu\n\n\n",
"msg_date": "Mon, 19 Nov 2001 18:35:04 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
}
]
[
{
"msg_contents": "I have been thinking about postgresql.conf and I have made a few posts already\nabout it. My concern is that Postgres, by default, is not very well tuned. One\ncan even say the default is pretty much a bad configuration.\n\nMy idea of a postgresql.conf \"cookbook\" seemed not such a good idea; some liked\nit, some did not, and the discussion degenerated into a discussion about the\nvalues.\n\nHow about this: we have just two or three default configuration files?\nCompact, Workstation, and Server.\n\n\"Compact\" would be the current postgresql.conf.\n\n\"Workstation\" would boost the number of buffers, sort and vacuum memory; in\nessence Postgres would be configured to use about 64M~128M efficiently, maybe\nlimited to 16 backends. Say 4096 buffers, Sort mem setting of 4096.\n\n\"Server\" would have a huge number of buffers, large numbers for sort, and a\nboosted vacuum memory. A server would be assumed to have lots of memory and be\nlimited to 128 backends. Say 65536 buffers, Sort memory of 32768.\n\nWe could also tune some of the optimizer parameters. For instance, on \"Server\"\nrandom_page_cost would be lower because of concurrent disk operations. We could\neven try to tune some of the wal settings accordingly.\n\nI know these things are all documented, and shame on the dba for not reading\nthe documentation, but all the help we can give to someone new to PostgreSQL\nmakes it that much more likely that they will be able to use it successfully.\nBesides, if we put out a small number of specific versions of postgresql.conf,\nmore focused feedback about performance and optimization issues can be\nobtained.\n",
"msg_date": "Mon, 19 Nov 2001 08:45:36 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "postgresql.conf"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> \"Server\" would have a huge number of buffers,\n\nDo you have any evidence whatsoever that that's actually a good idea?\n\nCertainly the existing default configuration is ridiculously cramped\nfor modern machines. But I've always felt that NBuffers somewhere in\nthe low thousands should be plenty. If you have lots of main memory\nthen the kernel can be expected to use it for kernel-level disk\nbuffering, which should be nearly as good as buffering inside Postgres.\n(Maybe better, considering that we have some routines that scan all our\nbuffers linearly ...) Moreover, if you request a huge chunk of shared\nmemory then you run a significant risk that the kernel will decide to\nstart swapping parts of it, at which point it definitely becomes a\nloser. Swapping a dirty buffer out and back in before it finally gets\nwritten to disk is counterproductive. You want to keep the number of\nbuffers small enough that they all stay pretty \"hot\" in the swapper's\neyes.\n\nBasically, I think that it's best to give the kernel plenty of elbow\nroom to deal with memory pressures on its own terms. Even on a machine\nthat's nominally dedicated to running Postgres.\n\nAwhile back I suggested raising the default configuration to 1000 or\nso buffers, which would be slightly less silly than the current default\neven if it's not optimal. Didn't get much feedback about the idea\nthough.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Nov 2001 10:59:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf "
},
{
"msg_contents": "Tom Lane wrote:\n\n> mlw <markw@mohawksoft.com> writes:\n> > \"Server\" would have a huge number of buffers,\n>\n> Do you have any evidence whatsoever that that's actually a good idea?\n>\n> Certainly the existing default configuration is ridiculously cramped\n> for modern machines. But I've always felt that NBuffers somewhere in\n> the low thousands should be plenty. If you have lots of main memory\n> then the kernel can be expected to use it for kernel-level disk\n> buffering, which should be nearly as good as buffering inside Postgres.\n> (Maybe better, considering that we have some routines that scan all our\n> buffers linearly ...) Moreover, if you request a huge chunk of shared\n> memory then you run a significant risk that the kernel will decide to\n> start swapping parts of it, at which point it definitely becomes a\n> loser. Swapping a dirty buffer out and back in before it finally gets\n> written to disk is counterproductive. You want to keep the number of\n> buffers small enough that they all stay pretty \"hot\" in the swapper's\n> eyes.\n\nI can't speak about routines that scan buffers linearly, but I have noticed\nHUGE performance gains by increasing the number of buffers. Queries that\nnormally hit a few thousand blocks can have a cache hit rate of about 80%.\n(Very query specific I know)\n\nSort memory is also a huge gain for large queries.\n\nI understand that there is a point of diminishing returns, cache management\nvs disk access. Cache too large and poorly managed costs more than disk.\nI'm not sure I've hit that point yet. On an SMP machine, it seems that a\nCPU bottleneck is better than an I/O bottleneck. The CPU bottleneck is\nscalable, whereas an I/O bottleneck is not. Perhaps on a single process\nmachine, fewer buffers would be more appropriate.\n\n>\n> Basically, I think that it's best to give the kernel plenty of elbow\n> room to deal with memory pressures on its own terms. Even on a machine\n> that's nominally dedicated to running Postgres.\n\nIn our database systems we have 1G of RAM. Postgres is configured to use\nabout 1/4~1/2. (Our number of buffers is 32768). Our sort memory is 32768.\n\n> Awhile back I suggested raising the default configuration to 1000 or\n> so buffers, which would be slightly less silly than the current default\n> even if it's not optimal. Didn't get much feedback about the idea\n> though.\n\n(I am really glad we are talking about this. )\n\nSurely your recommendation of using 1000 buffers is a great step. 8M of\nshared memory on a modern system is trivial, and would make a huge impact.\nSort memory also seems exceedingly small as well, when machines ship with a\npractical minimum of 256M RAM, and a probable 512M~1G, 512K seems like a\nvery small sort size.\n\nRegardless of the actual numbers, I still think that more than one\n\"default\" needs to be defined. I would bet that Postgres runs as much as a\nstand-alone server as it does as a workstation database ala Access.\n\n\n\n\n",
"msg_date": "Mon, 19 Nov 2001 11:43:55 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "mlw writes:\n\n> I have been thinking about postgresql.conf and I have made a few posts already\n> about it. My concern is that Postgres, as default, is not very well tuned. One\n> can even say the default is pretty much a bad configuration.\n\nThe default configuration is mainly guided by three sometimes\ncontradictory aspects: It should be reasonably secure, \"unusual\" or\nnon-standard features are turned off, and resources are regulated so that\nit is easy to \"try out\" PostgreSQL without having to do major kernel\ntuning first or bringing other applications down to their knees. I think\nthe default settings for most parameters are not really disputed, it's\nonly the performance and resource-related settings that you want to work\non.\n\n> How about this: we have just two or three default configuration files?\n> Compact, Workstation, and Server.\n\nTrying to eliminate the one-size-does-not-fit-all problem with\nN-sizes-fit-all cannot be an optimal idea considering the dimensionality\nof the space of possible configurations. If all you're concerned about is\nbuffers and sort memory it's much easier to say \"configure buffers to use\n1/4 of available memory\" than to make arbitrary guesses about the\navailable memory and attach arbitrary labels to them.\n\nTheoretically, it should be possible to determine optimal values for all\nperformance-related settings from a combination of benchmarks, a few\nquestions asked of the user about the system configuration and the\nexpected workload, and a dynamic analysis of the nature of the data.\nBut a system of formulas describing these relationships is incredibly\ndifficult to figure out and solve. If it weren't, we could get rid of all\nthese settings and allocate resources dynamically at run time.\n\nWhat we ought to do, however, is to collect and document empirical methods\nfor tuning, such as the above \"1/4 of available memory\" rule (which does\nnot claim to be correct, btw.).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 19 Nov 2001 19:44:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "> What we ought to do, however, is to collect and document empirical methods\n> for tuning, such as the above \"1/4 of available memory\" rule (which does\n> not claim to be correct, btw.).\n\nWhat would be interesting is to have a program that prompted the user,\nchecked some system values, and modified postgresql.conf accordingly.\n\nThe major problem with this idea is that I don't know of a good way to\ndetermine the proper values programmatically, let alone portably.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 16:04:10 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "On Mon, 19 Nov 2001 16:04:10 -0500 (EST), you wrote:\n\n>> What we ought to do, however, is to collect and document empirical methods\n>> for tuning, such as the above \"1/4 of available memory\" rule (which does\n>> not claim to be correct, btw.).\n>\n>What would be interesting is to have a program that prompted the user,\n>checked some system values, and modified postgresql.conf accordingly.\n\nI vaguely remember Oracle had an out-of-the-box choice for a\nsmall, medium or large installation, with small being the\ndefault. This changed a bunch of parameters, including the RDBMS\nblock buffers, shared global area, max open cursors and such.\nThis can be implemented as an (installation time) config utility\non top of the current fundamental parameters.\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Tue, 20 Nov 2001 19:05:41 +0100",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf"
},
{
"msg_contents": "\nSince I proposed three postgresql.conf configuration files, I will start by\nsuggesting some settings different from the default: (Any additions or corrections\nwould be greatly appreciated.)\n\n\nCompact:\nThe current postgresql.conf\n\n\nWorkstation:\ntcpip_socket = true\nmax_connections = 32\nshared_buffers = 1024\nsort_mem = 8192\nrandom_page_cost = 2\n\n\nServer:\ntcpip_socket = true\nmax_connections = 128\nshared_buffers = 8192\nsort_mem = 16384\nrandom_page_cost = 1\n\n\n\nThe random_page_cost is changed because of an assumption that the bigger systems\nwill be busier. The busier a machine is doing I/O, the lower the differential\nbetween sequential and random access. (\"sequential\" to the application is less\nlikely sequential to the physical disk.)\n\nI'd like to open a debate about the benefit/cost of shared_buffers. The question\nis: \"Will postgres' management of shared buffers outperform O/S cache? Is there a\npoint of diminishing return on number of buffers? If so, what?\"\n\nSort memory makes a huge impact on queries. If you've got the memory, use it.\n\nThese are just ballpark settings, I don't even know how good they are. The problem\nis that server environments differ so greatly that there is no right answer. I am\njust really concerned that the newbie PostgreSQL user will judge PostgreSQL by the\nperformance they see with the default settings.\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 20 Nov 2001 13:16:01 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf (Proposed settings)"
},
{
"msg_contents": "mlw writes:\n\n> These are just ballpark settings, I don't even know how good they are. The problem\n> is that server environments differ so greatly that there is no right answer.\n\nWhich is why this is clearly not a solution.\n\n> I am just really concerned that the newbie PostgreSQL user will assume\n> the performance they see with the default settings are what they will\n> judge PostgreSQL.\n\nFor this kind of \"newbie\", the kind that doesn't read the documentation,\nthis would only make it worse, because they'd assume that by making the\nchoice between three default configurations they've done an adequate\namount of tuning. Basically, you'd exchange, \"I can't find any tuning\ninformation, but it's slow\" for \"I did all the tuning and it's still\nslow\". Not a good choice.\n\nThe bottom line is that you *must* edit postgresql.conf in order to tune\nyour server. If this editing is simplified it doesn't matter what the\ndefault is.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 21 Nov 2001 19:55:07 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf (Proposed settings)"
},
{
"msg_contents": "> > I am just really concerned that the newbie PostgreSQL user will assume\n> > the performance they see with the default settings are what they will\n> > judge PostgreSQL.\n> \n> For this kind of \"newbie\", the kind that doesn't read the documentation,\n> this would only make it worse, because they'd assume that by making the\n> choice between three default configurations they've done an adequate\n> amount of tuning. Basically, you'd exchange, \"I can't find any tuning\n> information, but it's slow\" for \"I did all the tuning and it's still\n> slow\". Not a good choice.\n> \n> The bottom line is that you *must* edit postgresql.conf in order to tune\n> your server. If this editing is simplified it doesn't matter what the\n> default is.\n\nIs it possible to probe the machine and update postgresql.conf\nautomatically?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 14:33:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf (Proposed settings)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> mlw writes:\n> \n> > These are just ballpark settings, I don't even know how good they are. The problem\n> > is that server environments differ so greatly that there is no right answer.\n> \n> Which is why this is clearly not a solution.\n\nSometimes an incomplete solution, or even a grossly poor solution, which\naddresses a problem, is better than no solution whatsoever.\n\n> \n> > I am just really concerned that the newbie PostgreSQL user will assume\n> > the performance they see with the default settings are what they will\n> > judge PostgreSQL.\n> \n> For this kind of \"newbie\", the kind that doesn't read the documentation,\n> this would only make it worse, because they'd assume that by making the\n> choice between three default configurations they've done an adequate\n> amount of tuning. Basically, you'd exchange, \"I can't find any tuning\n> information, but it's slow\" for \"I did all the tuning and it's still\n> slow\". Not a good choice.\n\nI think this sort of thinking will not help the end user at all. Offering a choice\nof three less badly tuned configuration files will probably produce a better\nuser experience than one very badly tuned file.\n\n> \n> The bottom line is that you *must* edit postgresql.conf in order to tune\n> your server. If this editing is simplified it doesn't matter what the\n> default is.\n\nI don't think this is true at all. Making buffers and sort memory larger will\nimprove performance dramatically. I would also bet that most users NEVER see\nthe postgresql.conf file, and just blame poor performance on bad design and\nstart using MySQL.\n",
"msg_date": "Wed, 21 Nov 2001 14:52:03 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf (Proposed settings)"
},
{
"msg_contents": "On Thursday 22 November 2001 06:52, mlw wrote:\n\n> > For this kind of \"newbie\", the kind that doesn't read the\n> > documentation, this would only make it worse, because they'd assume\n> > that by making the choice between three default configurations they've\n> > done an adequate amount of tuning. Basically, you'd exchange, \"I\n> > can't find any tuning information, but it's slow\" for \"I did all the\n> > tuning and it's still slow\". Not a good choice.\n\nI am the other kind of newbie, the one that reads documentation.\nHowever, I fail to find much regarding tuning within the documentation \ndelivered with the original version 7.1.2 tarball. Even when searching the \npostgresql website or the mailing list archives, the information is still \nsporadic. In the whole \"Administrators guide\" section there is no chapter \n\"tuning\", and even if you grep your way through it, you won't find much. \nAnd yes, I have read Bruce's book, from first to last page. And I have \nread Stinson's PostgreSQL Essential Reference the same way. Still, I am no \nwiser.\n\nI am not complaining. Postgresql, after all, is free. But it is kinda \nstrange to blame the users for not reading documentation if this \ndocumentation is that hard to find (does it exist?) that \"Joe Average\" \ncan't find it.\n\nThe other domain where I could hardly find any information at all is the \nPostgres log settings.\n\nI would be more than happy to write the documentation if I could get hold \nof the necessary information, as I badly need it myself.\n\nThe other thing I am thinking about is that tuning can't be any \"magic\". \nIf I can tune it, why shouldn't a configuration script be able to do the \nsame? After all, even I would follow some algorithm to do it; it would \nprobably involve a few educated trial & error & experiments - but hey, a \nscript can do that too, can't it? I know too little about Postgres to do \nthat part myself, but I don't think it is valid just to shove the idea of \n\"autotuning\" aside like this.\n\nHorst\n",
"msg_date": "Thu, 22 Nov 2001 07:42:21 +1100",
"msg_from": "Horst Herb <hherb@malleenet.net.au>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf (Proposed settings)"
},
{
"msg_contents": "What!\n\nI don't remember ever seeing docs pointing me to this configuration\nfile.\n\nWhen did this file appear in postgres?\n",
"msg_date": "Wed, 21 Nov 2001 18:11:48 -0700",
"msg_from": "Guy Fraser <guy@incentre.net>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf (Proposed settings)"
},
{
"msg_contents": "Horst Herb writes:\n\n> I am the other kind of newbie, the one that reads documentation.\n> However, I fail to find much regarding tuning within the documentation\n> delivered with the original version 7.1.2 tarball.\n\nYes, there is clearly a deficit in this area. But it's obviously not\ngoing to get better with more buffers by default.\n\n> The other thing I am thinking about is that tuning can't be any \"magic\".\n> If I can tune it, why shouldn't a configuration script be able to do the\n> same?\n\nTheoretically, you're right of course. In practice this could be quite\ncomplicated to set up. Let's say the optimal configuration depends\nprimarily on four groups of parameters:\n\n1. hardware setup\n2. load/desired load on the server (\"dedicated\" vs something else)\n3. nature of the data\n4. nature of the clients/query workload\n\n#1 is easy to figure out by asking a few questions or perhaps a few\nnonportable peeks into /proc. #2 is more difficult, it's not simply yes or\nno or loadavg = X, because it depends on the nature of the other\napplications. #3 is also not quite that easy. It'd require a hypersmart\nversion of ANALYZE, but in reality you would want to configure your server\nbefore the data starts arriving. So it's not really feasible to run an\nautomatic \"benchmark\" or something. The same with #4, the queries really\narrive only after the tuning is done. You'd hardly want to tune while the\napplication is live and ask the users \"how fast was it?\".\n\nSo what would be required is to parametrize these four factors (and others\nwe come up with) accurately into questions the user can answer easily.\nThis would be an enormous task, but I'm not saying it can't be done.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 22 Nov 2001 17:28:55 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf (Proposed settings)"
},
{
"msg_contents": "mlw writes:\n\n> I don't think this is true at all. Making buffers and sort memory larger will\n> improve performance dramatically.\n\nI think we want to make the buffers bigger by default but we need to put\nsome thought into the numbers. Currently, the shared buffers are set so\nthat they presumably fit under the default shared memory limit on most\nsystems. We already know this isn't actually true anymore.\n\nTom Lane thinks that \"a few thousand\" buffers is enough, so let's say 2048\n= 16MB. This is certainly a lot less than the 64MB that were proposed or\nthe 512MB that some people use. However, I feel we should have *some*\ndata points before we commit to a number that, as you say, most users will\nimplicitly be stuck with. Even more so if some users claim \"the more the\nbetter\" and the leading developer in the field disagrees.\n\n(Perhaps we could arrange it that by default the system attempts to\nallocate X amount of shared memory and if it fails it tries smaller sizes\nuntil it succeeds (down to a reasonable minimum)? This could combine\nconvenience and optimal default.)\n\nAs for sort memory, I have no idea why this isn't much larger by default.\n\n> I would also bet that most users NEVER see the postgresql.conf file,\n> and just blame poor performance on bad design and start using MySQL.\n\nThis could be an information deficit more than anything else. I don't\nknow.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 22 Nov 2001 17:29:09 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf (Proposed settings)"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> ... However, I feel we should have *some*\n> data points before we commit to a number that, as you say, most users will\n> implicitly be stuck with.\n\nData points would be a good thing. I will freely admit that I have made\nno measurements to back up my opinion :-(\n\n> As for sort memory, I have no idea why this isn't much larger by default.\n\nThe problem with sort memory is that you don't know what the multiplier\nis for it. SortMem is per sort/hash/whatever plan step, which means\nthat not only might one backend be consuming several times SortMem on\na complex query, but potentially all MaxBackend backends might be doing\nthe same. In practice that seems like a pretty unlikely scenario, but\nsurely you should figure *some* function of SortMem * MaxBackends as\nthe number you need to compare to available RAM.\n\nThe present 512K default is on the small side for current hardware,\nno doubt, but that doesn't mean we should crank it up without thought.\nWe just recently saw a trouble report from someone who had pushed it\nto the moon and found out the hard way not to do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Nov 2001 21:49:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf (Proposed settings) "
}
] |
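The sizing arguments traded back and forth in this thread — the "1/4 of available memory" rule Peter mentions and Tom Lane's caveat that SortMem is charged per sort step per backend — reduce to simple arithmetic. The sketch below assumes the default 8 KB block size; the RAM size and settings are hypothetical examples taken from the numbers quoted in the thread, not recommendations:

```python
# Back-of-the-envelope sizing for the knobs debated above.
# Assumes the default 8 KB PostgreSQL block size; machine sizes are
# hypothetical examples, not tuning advice.

PAGE_KB = 8  # default PostgreSQL block size (BLCKSZ), in KB

def shared_buffers_for(ram_mb, fraction=0.25):
    """The '1/4 of available memory' rule: number of 8 KB shared buffers."""
    return int(ram_mb * 1024 * fraction) // PAGE_KB

def worst_case_sort_mb(sort_mem_kb, max_backends, sorts_per_query=1):
    """Tom Lane's caveat: SortMem is per sort/hash step, per backend, so
    the theoretical worst case is SortMem * steps * MaxBackends."""
    return sort_mem_kb * sorts_per_query * max_backends // 1024

print(shared_buffers_for(1024))        # 1 GB box -> 32768 buffers (256 MB), as quoted above
print(worst_case_sort_mb(32768, 128))  # sort_mem=32768 with 128 backends -> 4096 MB worst case
```

The second number is the point of the caveat: a SortMem value that looks harmless per connection can, in the worst case, demand several gigabytes once every backend sorts at once.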
[
{
"msg_contents": "\n> >>The underlying problem the user is seeing is how to _know_ an index\n> >>tuple is valid without checking the heap,\n> >>\n> I'd propose a memory-only (or heavily cached) structure of tuple death\ntransaction\n> ids for all transactions since the oldest live trx. And when that\noldest finishes then\n> the tombstone marks for all tuples deleted between that and the new\noldest are moved to \n> relevant indexes (or the index keys are deleted) by concurrent vacuum\n> or similar process.\n\nAndreas said, that his data is only loaded/changed in the night, thus\nfor his queries all tuples found in the index are actually live.\nEvery heap tuple lookup results in \"tuple valid\".\n\nIn his case a per table global \"highest xid\" in heapdata that can be\ncompared\nagainst highest xid during last vacuum would probably be sufficient\n(or a flag for \"modified after last vacuum\").\nOf course per table globals are a major headache regarding concurrency,\nbut there would be other possible optimizations that could profit from\nsuch \na structure, like rowcount ...\n\nAndreas\n",
"msg_date": "Mon, 19 Nov 2001 18:26:00 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
}
] |
[
{
"msg_contents": "Hi,\n\nI am trying to access index structures outside PostgreSQL because I am\nworking with a home-grown\nquery processor.\nMy stand-alone program will access index structures stored in a PostgreSQL\ndatabase by using registered\nuser-defined functions in the PostgreSQL server.\nAs the first step, I wrote a user-defined function for opening a specified\nindex relation as follows:\n\nRelation open_gist(char *index_name)\n{\n /* .../src/backend/access/index/indexam.c */\n return index_openr(index_name);\n}\n\nAnd registered the function as follows:\n\nCREATE FUNCTION open_gist(opaque)\n RETURNS opaque\n AS\n'/usr/local/postgresql-7.1.3/src/backend/access/gist_ops/lib_gist.so.1.0'\n LANGUAGE 'c';\n\nUp to here, there has been no problem.\nBut, when I tried to open an index by using a select statement, I got an error\nmessage as follows:\n\ntest=# select open_gist(\"b3dix\");\nERROR: Attribute 'b3dix' not found\nERROR: Attribute 'b3dix' not found\n\nHowever, there is an index named \"b3dix\" in my database as follows:\n\ntest=# \\di\n List of relations\n Name | Type | Owner\n-------+-------+--------\n b3dix | index | jeongs\n\nDoes anyone know what the problem is?\n\nOne more question is about \"opaque\".\nWhat is the usefulness of the data type in the PostgreSQL context?\nAnd what do I need to specify for a \"void\" return data type when I register a\nuser-defined function?\nFor example,\n\nvoid close_gist(Relation index_relation);\n\nCREATE FUNCTION close_gist(opaque)\n RETURNS ?\n AS\n'/usr/local/postgresql-7.1.3/src/backend/access/gist_ops/lib_gist.so.1.0'\n LANGUAGE 'c';\n\nThank you for reading this question.\n\nCheers.\n\n\n\n\n",
"msg_date": "Mon, 19 Nov 2001 18:39:49 -0000",
"msg_from": "\"Seung Hyun Jeong\" <jeongs@cs.man.ac.uk>",
"msg_from_op": true,
"msg_subject": "about index_openr() function"
}
] |
[
{
"msg_contents": "> What's bothering me right now is the\n> difference between client and server encodings. It seems that the\nonly\n> plausible use for octet_length is to do memory allocation on the\nclient\n> side, and for that purpose the length ought to be measured in the\nclient\n> encoding. People seem to be happy with letting octet_length take the\n> easy way out (measure in the server encoding), and I'm trying to get\n> someone to explain to me why that's the right behavior. I \n> don't see it.\n\nI agree. octet_length should be the number of bytes the client gets when\nhe \ndoes \"select textfield from atable\".\n\nAndreas\n",
"msg_date": "Mon, 19 Nov 2001 20:02:11 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: OCTET_LENGTH is wrong "
}
] |
[
{
"msg_contents": "I have just added to TODO:\n\n\t* Remove USING clause from pg_get_indexdef() if index is btree (Bruce)\n\nThis will clean up pg_dump output.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 15:01:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "New TODO item"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> Cause not everyone has bzip2 ... \n\nWhy not have both, so the user can choose? Many other \nthings on the net do just that. That way, I can choose \nbz2 for a quick download, or use gz if I am using an \ninferior OS :)\n\nWhile we're on the subject, how about digitally signing the \nreleases as well? An MD5 checksum is fine, but certainly \nwon't protect against trojans and other maliciousness that \na pgp signature could prevent.\n\nGreg Sabino Mullane\ngreg@turnstep.com\nPGP Key: 0x14964AC8 200111191132\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBO/nc+7ybkGcUlkrIEQLBPgCeM6CXgV0W7WjJBwGhiVj6u8hjPJ8An3Os\nfP8flAAcciNI6FfOPyXKsD1B\n=00M7\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Mon, 19 Nov 2001 23:29:09 -0500",
"msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>",
"msg_from_op": true,
"msg_subject": "Re: beta3"
}
] |
[
{
"msg_contents": "The mailing list signup page on the \"Developers Mailing Lists\" website does\nnot list the pgsql-sql list.\n\nChris\n\n",
"msg_date": "Tue, 20 Nov 2001 13:37:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "The mailing list subscription page`"
},
{
"msg_contents": "On Tue, 20 Nov 2001, Christopher Kings-Lynne wrote:\n\n> The mailinglist signup page on the \"Developers Mailing Lists\" website does\n> not list the pgsql-sql list.\n\nIt's considered a user's list. Try the mailing list link from the\nUsers Lounge on the main site.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 20 Nov 2001 06:42:18 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: The mailing list subscription page`"
}
] |