[
{
"msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\tpetere@hub.org\t01/05/23 18:00:44\n\nModified files:\n\tsrc/bin/scripts: Makefile createlang.sh \n\nLog message:\n\tMake createlang use dynamic loader enhancements (automatic path and suffix).\n\n",
"msg_date": "Wed, 23 May 2001 18:00:44 -0400 (EDT)",
"msg_from": "Peter Eisentraut - PostgreSQL <petere@hub.org>",
"msg_from_op": true,
"msg_subject": "pgsql/src/bin/scripts Makefile createlang.sh"
},
{
"msg_contents": "Peter Eisentraut - PostgreSQL <petere@hub.org> writes:\n> \tMake createlang use dynamic loader enhancements (automatic path and suffix).\n\nI observe that createlang still builds full paths for me. Evidently\nthis is because I have PGLIB set in my environment. I propose that\ncreatelang ought not pay attention to an environment PGLIB anymore,\nbut should only set a full path if forced to by an explicit -L switch.\nThoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 May 2001 19:52:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/bin/scripts Makefile createlang.sh "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut - PostgreSQL <petere@hub.org> writes:\n> > \tMake createlang use dynamic loader enhancements (automatic path and suffix).\n>\n> I observe that createlang still builds full paths for me. Evidently\n> this is because I have PGLIB set in my environment. I propose that\n> createlang ought not pay attention to an environment PGLIB anymore,\n> but should only set a full path if forced to by an explicit -L switch.\n\nIt wasn't supposed to pay attention to PGLIB at least since 7.0. I guess\nit slipped by, I'll disable it now.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 24 May 2001 02:12:53 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/bin/scripts Makefile createlang.sh "
}
]
[
{
"msg_contents": "I'm trying to form an rtree index on a custom datatype, and I've come\nacross a problem. The problem also affects the standard geometric\ndatatypes.\n\nHere's a simple example:\n\n> create table test_geom (poly polygon);\n\n> insert into test_geom values ( '<LOTS OF POINTS>');\n (...)\n\nSo far, so good, but when you try to create an rtree index, you get;\n\n> create index quick on test_geom using rtree (poly);\nERROR: index_formtuple: data takes 20040 bytes, max is 8191\n\nThis will happen if the size of the polygon object (after compression)\nis greater than the page size. Everything works fine if all the\npolygons (after compression) are < 8k in size.\n\n\nThe polygon type is actually creating the rtree index on a small portion\nof the actual polygon data (its boundingbox, NOT the actual points). \nWhy does the index need to store the entire geometry? Is that some type\nof by-product of how the index works? Or is it because the \"~=\"\n(is_same) operator actually needs to know the entire geometry?\nIf its because of the \"~=\" operator, could we solve this by making \"~=\"\njust look at the bounding box? Or will that have bad side-effects?\n\nI noticed that the GiST indexing has compress and decompress functions -\ncould this type of index be used? {I first tryed making a GiST index,\nbut it didnt work for me. I'm using the rtree index because it worked\nfine.}\n\nMy understanding of the actual mechanics of postgresql indexing is\npretty much nil.\n\nThanks for your help,\ndave\nps. I'm using 7.1.1 on Solaris.\n",
"msg_date": "Wed, 23 May 2001 17:19:03 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": true,
"msg_subject": "Rtree; cannot create index on polygons with lots of points"
},
{
"msg_contents": "On Wed, 23 May 2001, Dave Blasby wrote:\n\n>\n> I noticed that the GiST indexing has compress and decompress functions -\n> could this type of index be used? {I first tryed making a GiST index,\n> but it didnt work for me. I'm using the rtree index because it worked\n> fine.}\n\nWhat're the problem with GiST ? Did you try Rtree implementation using\nGiST (http://www.sai.msu.su/~megera/postgres/gist) ?\nWe didn't implemented polygon datatype but it's rather straightforward.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 24 May 2001 08:19:14 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Rtree; cannot create index on polygons with lots of\n points"
},
{
"msg_contents": "Dave Blasby <dblasby@refractions.net> writes:\n> So far, so good, but when you try to create an rtree index, you get;\n>> create index quick on test_geom using rtree (poly);\n> ERROR: index_formtuple: data takes 20040 bytes, max is 8191\n\nYup. We don't yet have a solution that allows index entries to be moved\ninto a TOAST table (and even if we did, it'd doubtless be slower than\none would like for an index).\n\n> The polygon type is actually creating the rtree index on a small portion\n> of the actual polygon data (its boundingbox, NOT the actual points). \n> Why does the index need to store the entire geometry?\n\nrtree doesn't have any notion of compression or lossy storage of data.\nGIST does, so I'd recommend that you take a hard look at moving over\nto GIST. Over the long run I think we are going to abandon rtree in\nfavor of GIST --- the latter probably has more bugs at present, but once\nthose are flushed out I see no real reason to keep supporting rtree.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 May 2001 07:35:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rtree; cannot create index on polygons with lots of points "
}
]
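The thread above hinges on a simple geometric fact: an rtree (or GiST) entry for a polygon only needs the polygon's bounding box — four floats — no matter how many points the polygon has. A minimal sketch of that idea in plain Python (illustrative helper names, not PostgreSQL internals):

```python
def bounding_box(points):
    """Return (xmin, ymin, xmax, ymax) for a list of (x, y) points.

    This is all an index entry for a polygon really needs to hold: the
    box stays four floats even when the polygon itself is 20 kB of
    points, which is why indexing only the box sidesteps the page-size
    limit hit in the thread.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))


def boxes_overlap(a, b):
    """Lossy overlap test on two boxes (xmin, ymin, xmax, ymax).

    Matches found this way are only candidates and must be rechecked
    against the real geometries -- the role GiST's compress/decompress
    support functions make possible.
    """
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]
```

A polygon with thousands of vertices still indexes as one small box, and only the box-overlapping candidates need their full geometry fetched and rechecked.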
[
{
"msg_contents": "Hi all,\n\nHas anyone produced a UML diagram of the system catalogues or made a\nstart on it? Especially in a package that outputs xml/xmi file formats,\nsuch as Argouml or dia? If so, would you be willing to share? Else if\ndeemed a good idea might make a start myself...\n\ncheers,\nJohn\n\n",
"msg_date": "Thu, 24 May 2001 12:35:47 +1000",
"msg_from": "John Reid <jgreid@uow.edu.au>",
"msg_from_op": true,
"msg_subject": "uml diagrams of system catalogues"
}
]
[
{
"msg_contents": ">This is true. You can adjust the value in the /proc/sys/kernel/shmmax \n>file. If you change the value it will be reset when you reboot, so you \n>will need to write a start-up script to always change this value if you \n>want it to be permanent.\n>\n>-r\n>\n>At 09:51 AM 5/24/01 -0700, you wrote:\n>\n>>In the past, I had to change the RedHat Linux kernel so that the\n>>shared memory was set to something much higher than the default (which\n>>I think was about 32 MBytes). It seems that this is no longer\n>>necessary in RH 7.1 (kernel 2.4). Can someone confirm this?\n>>\n>>-Tony\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>>\n>>\n>>\n>>---\n>>Incoming mail is certified Virus Free.\n>>Checked by AVG anti-virus system (http://www.grisoft.com).\n>>Version: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01",
"msg_date": "Thu, 24 May 2001 13:37:37 +0100",
"msg_from": "Ryan Mahoney <ryan@paymentalliance.net>",
"msg_from_op": true,
"msg_subject": "Fwd: Re: Shared memory for RH Linux 7.1"
},
{
"msg_contents": "On Thu, 24 May 2001, Ryan Mahoney wrote:\n\n> >This is true. You can adjust the value in the /proc/sys/kernel/shmmax \n> >file. If you change the value it will be reset when you reboot, so you \n> >will need to write a start-up script to always change this value if you \n> >want it to be permanent.\n\nor you can let sysctl do it with this in /etc/sysctl.conf:\n\nkernel.shmmax = 268435456\n\n(obviously changing the value with what is appropriate for your machine).\n\nThis is for a RH 6.2 box. DOnt know if its the same on 7.1. We switched to\nFreeBSD between redhat 6.2 and 7.0, so we dont have any RH7.1 boxes laying\naround. I suspect it hasn't changed though.\n\nMike\n\n",
"msg_date": "Fri, 1 Jun 2001 15:54:42 -0500 (CDT)",
"msg_from": "Michael J Schout <mschout@gkg.net>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Re: Shared memory for RH Linux 7.1"
}
]
[
{
"msg_contents": "This value can be dynamically changed by:\n\necho \"new value here\" > /proc/sys/kernel/shmmax\n\nGlad I bought that expensive RedHat support contract!\n\n-r\n\nAt 08:02 PM 5/24/01 +0200, Poul L. Christiansen wrote:\n\n>I think you still need to set your shared memory size, because my Redhat\n>7.1 gives me this:\n>\n>[root@localhost kernel]# cat /proc/sys/kernel/shmmax\n>33554432\n>[root@localhost kernel]# uname -a\n>Linux localhost.localdomain 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686\n>unknown\n>\n>I think shared memory is set this low for compatability reasons, but I'm\n>not sure.\n>\n>Poul L. Christiansen\n>\n>Tony Reina wrote:\n> >\n> > In the past, I had to change the RedHat Linux kernel so that the\n> > shared memory was set to something much higher than the default (which\n> > I think was about 32 MBytes). It seems that this is no longer\n> > necessary in RH 7.1 (kernel 2.4). Can someone confirm this?\n> >\n> > -Tony\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n>\n>\n>---\n>Incoming mail is certified Virus Free.\n>Checked by AVG anti-virus system (http://www.grisoft.com).\n>Version: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01",
"msg_date": "Thu, 24 May 2001 14:35:41 +0100",
"msg_from": "Ryan Mahoney <ryan@paymentalliance.net>",
"msg_from_op": true,
"msg_subject": "Re: Re: Shared memory for RH Linux 7.1"
},
{
"msg_contents": "In the past, I had to change the RedHat Linux kernel so that the\nshared memory was set to something much higher than the default (which\nI think was about 32 MBytes). It seems that this is no longer\nnecessary in RH 7.1 (kernel 2.4). Can someone confirm this?\n\n-Tony\n",
"msg_date": "24 May 2001 09:51:05 -0700",
"msg_from": "reina@nsi.edu (Tony Reina)",
"msg_from_op": false,
"msg_subject": "Shared memory for RH Linux 7.1"
},
{
"msg_contents": "Tony Reina wrote:\n\n> In the past, I had to change the RedHat Linux kernel so that the\n> shared memory was set to something much higher than the default (which\n> I think was about 32 MBytes). It seems that this is no longer\n> necessary in RH 7.1 (kernel 2.4). Can someone confirm this?\n\nOne can:\n\necho 67108864 > /proc/sys/kernel/shmmax\n\nTo adjust limits\n\n\n\n",
"msg_date": "Thu, 24 May 2001 13:45:11 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory for RH Linux 7.1"
},
{
"msg_contents": "I think you still need to set your shared memory size, because my Redhat\n7.1 gives me this:\n\n[root@localhost kernel]# cat /proc/sys/kernel/shmmax\n33554432\n[root@localhost kernel]# uname -a\nLinux localhost.localdomain 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686\nunknown\n\nI think shared memory is set this low for compatability reasons, but I'm\nnot sure.\n\nPoul L. Christiansen\n\nTony Reina wrote:\n> \n> In the past, I had to change the RedHat Linux kernel so that the\n> shared memory was set to something much higher than the default (which\n> I think was about 32 MBytes). It seems that this is no longer\n> necessary in RH 7.1 (kernel 2.4). Can someone confirm this?\n> \n> -Tony\n",
"msg_date": "Thu, 24 May 2001 20:02:47 +0200",
"msg_from": "\"Poul L. Christiansen\" <poulc@cs.auc.dk>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory for RH Linux 7.1"
},
{
"msg_contents": "This is my first post here. I hope that I got\neverything right...\n\nThis has already been possible for 2.2.x Kernels.\nCheck out the Postgresql admin documentation at\n\nhttp://www.fr.postgresql.org/devel-corner/docs/admin/kernel-resources.html#SYSVIPC\n\nOr did I get something wrong.\n\nBy the way. Is there a release date for PG 7.1.2?\n\n\nYours\n\nDavid Kuczek\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Auctions - buy the things you want at great prices\nhttp://auctions.yahoo.com/\n",
"msg_date": "Thu, 24 May 2001 11:17:15 -0700 (PDT)",
"msg_from": "David Kuczek <royblackd@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Shared memory for RH Linux 7.1"
},
{
"msg_contents": "On Thu, 24 May 2001, Ryan Mahoney wrote:\n\n> echo \"new value here\" > /proc/sys/kernel/shmmax\n\nThe new canonical way is to:\n\n$ sysctl -w kernel.shmmax=\"new value\"\n\nyou can arrange for it you happen at boot time via /etc/sysctl.conf.\n\nMatthew.\n\n",
"msg_date": "Fri, 25 May 2001 10:21:01 +0100 (BST)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: Shared memory for RH Linux 7.1"
}
]
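The magic numbers traded in this thread are easier to read as buffer counts. PostgreSQL sizes its shared buffer cache in 8 kB blocks, so a rough target for kernel.shmmax can be sketched as below. This is only an estimate with an invented helper name — the real shared-memory footprint adds per-buffer bookkeeping, lock tables, and other structures:

```python
BLCKSZ = 8192  # PostgreSQL's default disk block size, in bytes


def shmmax_for_buffers(shared_buffers, slack_bytes=0):
    """Rough kernel.shmmax (bytes) needed for `shared_buffers` blocks
    of buffer cache, plus optional headroom for other shared
    structures.  A crude model, not PostgreSQL's actual accounting."""
    return shared_buffers * BLCKSZ + slack_bytes
```

By this estimate, the stock limit of 33554432 (32 MB) seen on Red Hat corresponds to about 4096 buffers, while the sysctl.conf value of 268435456 (256 MB) from the thread leaves room for 32768.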
[
{
"msg_contents": "\nI have a friend of mine who sent me this... (I'm acting as a relay\nhere...).\n\nI am trying to compile PostgreSQL 7.1.1 under HP-UX 11.00\n(HP-UX dwhp2 B.11.00 U 9000/800 1195951537 unlimited-user license)\nwith the C++ library and OpenSSL but having little success.\nI've tried both the HP-UX aCC++ and GCC 2.95 compilers.\nEach different attempt brings out its own set of errors and\nweirdness.\n\nHere's where I stand:\n\nCannot compile PostgreSQL, C++ library (--with-CXX), and\nOpenSSL (--with-openssl=/opt/openssl) with HP's aCC\n(latest version). Configure hangs when testing for SSL library.\nThe configure command line looks like this.\n\n./configure\n--with-includes=\"/opt/zlib/include:/opt/openssl/include/openssl\"\n\\\n --with-libraries=\"/opt/zlib/lib\" \\\n --with-openssl=/opt/openssl \\\n --with-CXX\nThe /opt/openssl/include/openssl line is needed because the\nHP-UX porting archive's layout of OpenSSL puts the .h files\nin /opt/openssl/include/openssl/, so specifying\n--with-openssl=/opt/openssl by itself isn't sufficient to\nget all the paths. The configure script errors out looking\nfor _eprintf(), which apparently is a now deprecated GCC\nattempt at a compatibility layer. This tells me I really\ncan't use the binary build made by the HP-UX porting archive\nfolks.\n\nCompile PostgreSQL and C++ library (--with-CXX) with HP's\naCC (latest version). Works fine (but still doesn't get me\nto OpenSSL).\n\nTried also the HP developer program support desk's version\nof GCC which installs in /usr/local. Configure runs fine\nbut HP's ld breaks on OpenSSL's libssl .\n /usr/ccs/bin/ld: DP relative code in file\n /opt/openssl/lib/libssl.a(s23_meth.o) - shared library\n must be position independent. 
Use +z or +Z to recompile.\nThis tells me I still need to recompile OpenSSL from source\nbecause the archive centre's packaging job isn't quite all\nthere yet (or there's a compiler conflict).\n\nCannot build OpenSSL from source because Perl-5.6.1 (guess\nfrom where) is also broken (it can't understand \"use strict\"\nbecause the library is missing the strict.pm module, among\nothers.\n\nCannot compile PostgreSQL and C++ library (--with-CXX) with GCC.\nCan't find <string> class in STL. Suspect packaging\njob done by HP-UX porting archive: it moves GCC from\n/usr/local to /opt, and a few things didn't make the\ntrip smoothly. The other version\n\nCompile PostgreSQL by itself with GCC. Works fine.\n\nI know that the problem is not strictly an issue with PostgreSQL,\nbut any expertise you can bring to the problem is welcome.\nIf you need to actually get on the system, we'll get you a\nNDA and an ssh login.\n\nAny ideas?\n\n\n Chris Bowlby,\n -----------------------------------------------------\n Web Developer @ Hub.org.\n excalibur@hub.org\n www.hub.org\n 1-902-542-3657\n -----------------------------------------------------\n\n",
"msg_date": "Thu, 24 May 2001 12:12:36 -0400 (EDT)",
"msg_from": "Chris Bowlby <excalibur@hub.org>",
"msg_from_op": true,
"msg_subject": "HP Unix 11.00 Compiler error."
},
{
"msg_contents": "Chris Bowlby <excalibur@hub.org> writes:\n> The configure script errors out looking\n> for _eprintf(), which apparently is a now deprecated GCC\n> attempt at a compatibility layer. This tells me I really\n> can't use the binary build made by the HP-UX porting archive\n> folks.\n\nFWIW, I've seen eprintf link failures on HPUX 10.20 as well. I think\nthe key to avoiding it is that you have to build all the components with\nthe same compiler (all gcc, or all HP's). If you want to link against a\nprecompiled ssl library then that will determine your choice of compiler.\n\n> Cannot compile PostgreSQL and C++ library (--with-CXX) with GCC.\n> Can't find <string> class in STL. Suspect packaging\n> job done by HP-UX porting archive: it moves GCC from\n> /usr/local to /opt, and a few things didn't make the\n> trip smoothly.\n\nI agree. I don't have any trouble building the C++ library here,\nusing either hand-built gcc or HP's cc. I let gcc install itself\nin the usual place, ie, /usr/local.\n\nThe Porting Archive guys do good work, but their stuff gives headaches\nif you want to mix and match it with stuff you build yourself. They're\nway too eager to mess around with install locations.\n\nI'd recommend rebuilding gcc in a vanilla configuration, using the\nporting archive gcc just for bootstrap purposes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 May 2001 15:17:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] HP Unix 11.00 Compiler error. "
},
{
"msg_contents": "On Thu, 24 May 2001, Tom Lane wrote:\n\nGreat, I'll pass this on thanks.\n\n> Chris Bowlby <excalibur@hub.org> writes:\n> > The configure script errors out looking\n> > for _eprintf(), which apparently is a now deprecated GCC\n> > attempt at a compatibility layer. This tells me I really\n> > can't use the binary build made by the HP-UX porting archive\n> > folks.\n>\n> FWIW, I've seen eprintf link failures on HPUX 10.20 as well. I think\n> the key to avoiding it is that you have to build all the components with\n> the same compiler (all gcc, or all HP's). If you want to link against a\n> precompiled ssl library then that will determine your choice of compiler.\n>\n> > Cannot compile PostgreSQL and C++ library (--with-CXX) with GCC.\n> > Can't find <string> class in STL. Suspect packaging\n> > job done by HP-UX porting archive: it moves GCC from\n> > /usr/local to /opt, and a few things didn't make the\n> > trip smoothly.\n>\n> I agree. I don't have any trouble building the C++ library here,\n> using either hand-built gcc or HP's cc. I let gcc install itself\n> in the usual place, ie, /usr/local.\n>\n> The Porting Archive guys do good work, but their stuff gives headaches\n> if you want to mix and match it with stuff you build yourself. They're\n> way too eager to mess around with install locations.\n>\n> I'd recommend rebuilding gcc in a vanilla configuration, using the\n> porting archive gcc just for bootstrap purposes.\n>\n> \t\t\tregards, tom lane\n>\n\n Chris Bowlby,\n -----------------------------------------------------\n Web Developer @ Hub.org.\n excalibur@hub.org\n www.hub.org\n 1-902-542-3657\n -----------------------------------------------------\n\n",
"msg_date": "Thu, 24 May 2001 15:33:10 -0400 (EDT)",
"msg_from": "Chris Bowlby <excalibur@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] HP Unix 11.00 Compiler error. "
}
]
[
{
"msg_contents": "> >> Impractical ? Oracle does it.\n> >\n> >Oracle has MVCC?\n> \n> With restrictions, yes.\n\nWhat restrictions? Rollback segments size?\nNon-overwriting smgr can eat all disk space...\n\n> You didn't know that? Vadim did ...\n\nDidn't I mention a few times that I was\ninspired by Oracle? -:)\n\nVadim\n",
"msg_date": "Thu, 24 May 2001 10:00:31 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem "
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > >> Impractical ? Oracle does it.\n> > >\n> > >Oracle has MVCC?\n> >\n> > With restrictions, yes.\n> \n> What restrictions? Rollback segments size?\n> Non-overwriting smgr can eat all disk space...\n\nIs'nt the same true for an overwriting smgr ? ;)\n\n> > You didn't know that? Vadim did ...\n> \n> Didn't I mention a few times that I was\n> inspired by Oracle? -:)\n\nHow does it do MVCC with an overwriting storage manager ?\n\nCould it possibly be a Postgres-inspired bolted-on hack \nneeded for better concurrency ?\n\n\nBTW, are you aware how Interbase does its MVCC - is it more \nlike Oracle's way or like PostgreSQL's ?\n\n----------------\nHannu\n",
"msg_date": "Fri, 25 May 2001 01:51:06 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Plans for solving the VACUUM problem"
},
{
"msg_contents": "At 01:51 25/05/01 +0500, Hannu Krosing wrote:\n>\n>How does it do MVCC with an overwriting storage manager ?\n>\n\nI don't know about Oracle, but Dec/RDB also does overwriting and MVCC. It\ndoes this by taking a snapshot of pages that are participating in both RW\nand RO transactions (De/RDB has the options on SET TRANSACTION that specify\nif the TX will do updates or not). It has the disadvantage that the\nsnapshot will grow quite large for bulk loads. Typically they are about\n10-20% of DB size. Pages are freed from the snapshot as active TXs finish.\n\nNote that the snapshots are separate from the journalling (WAL) and\nrollback files.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 25 May 2001 10:21:37 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Plans for solving the VACUUM problem"
}
]
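For contrast with the rollback-segment and snapshot-page schemes described above, here is a toy model of the non-overwriting approach the thread attributes to PostgreSQL (and Interbase): every tuple version stays in the data file, stamped with the transactions that created and deleted it, and each reader filters by its snapshot instead of restoring old block images. This is a deliberate simplification with invented names — real visibility checks also involve command ids and lists of in-progress transactions:

```python
INVALID_XID = 0  # "no transaction" marker


class Version:
    """One tuple version as stored in a non-overwriting data file."""
    def __init__(self, value, xmin, xmax=INVALID_XID):
        self.value = value
        self.xmin = xmin   # xact that created this version
        self.xmax = xmax   # xact that deleted/updated it (0 = still live)


def visible(v, snapshot_xid, committed):
    """Is version `v` visible to a snapshot taken as of `snapshot_xid`?

    Visible iff its creator committed no later than the snapshot and no
    deleter committed by then.  Note nothing is ever restored or copied:
    old versions simply sit in place until VACUUM removes them -- the
    thread's point that a non-overwriting smgr "can eat all disk space".
    """
    if v.xmin not in committed or v.xmin > snapshot_xid:
        return False
    if v.xmax != INVALID_XID and v.xmax in committed and v.xmax <= snapshot_xid:
        return False
    return True
```

In the Oracle-style design discussed above, the same effect is achieved the other way around: the data file holds only the newest version, and readers with older snapshots reconstruct past block images from the rollback segment.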
[
{
"msg_contents": "> If PostgreSQL wants to stay MVCC, then we should imho forget\n> \"overwriting smgr\" very fast.\n> \n> Let me try to list the pros and cons that I can think of:\n> Pro:\n> \tno index modification if key stays same\n> \tno search for free space for update (if tuple still\n> fits into page)\n> \tno pg_log\n> Con:\n> \tadditional IO to write \"before image\" to rollback segment\n> \t\t(every before image, not only first after checkpoint)\n> \t\t(also before image of every index page that is updated !)\n\nI don't think that Oracle writes entire page as before image - just\ntuple data and some control info. As for additional IO - we'll do it\nanyway to remove \"before image\" (deleted tuple data) from data files.\n\nVadim\n",
"msg_date": "Thu, 24 May 2001 10:30:39 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem"
}
]
[
{
"msg_contents": "> I think so too. I've never said that an overwriting smgr\n> is easy and I don't love it particularily.\n> \n> What I'm objecting is to avoid UNDO without giving up\n> an overwriting smgr. We shouldn't be noncommittal now. \n\nWhy not? We could decide to do overwriting smgr later\nand implement UNDO then. For the moment we could just\nchange checkpointer to use checkpoint.redo instead of\ncheckpoint.undo when defining what log files should be\ndeleted - it's a few minutes deal, and so is changing it\nback.\n\nVadim\n",
"msg_date": "Thu, 24 May 2001 10:57:19 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem"
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > I think so too. I've never said that an overwriting smgr\n> > is easy and I don't love it particularily.\n> >\n> > What I'm objecting is to avoid UNDO without giving up\n> > an overwriting smgr. We shouldn't be noncommittal now.\n> \n> Why not? We could decide to do overwriting smgr later\n> and implement UNDO then.\n\nWhat I'm refering to is the discussion about the handling\nof subtransactions in order to introduce the savepoints\nfunctionality. Or do we postpone *savepoints* again ?\n\nI realize now few people have had the idea how to switch\nto an overwriting smgr. I don't think an overwriting smgr\ncould be achived at once and we have to prepare one by one\nfor it. AFAIK there's no idea how to introduce an overwriting\nsmgr without UNDO. If we avoid UNDO now when overwriting smgr\nwould appear ? I also think that the problems Andreas has\nspecified are pretty serious. I also have known the problems\nand I've expected that people have the idea to solve it but\n... I'm inclined to give up an overwriting smgr now.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 25 May 2001 08:51:44 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Plans for solving the VACUUM problem"
},
{
"msg_contents": "> What I'm refering to is the discussion about the handling\n> of subtransactions in order to introduce the savepoints\n> functionality. Or do we postpone *savepoints* again ?\n> \n> I realize now few people have had the idea how to switch\n> to an overwriting smgr. I don't think an overwriting smgr\n> could be achived at once and we have to prepare one by one\n> for it. AFAIK there's no idea how to introduce an overwriting\n> smgr without UNDO. If we avoid UNDO now when overwriting smgr\n> would appear ? I also think that the problems Andreas has\n> specified are pretty serious. I also have known the problems\n> and I've expected that people have the idea to solve it but\n> ... I'm inclined to give up an overwriting smgr now.\n\nNow that everyone has commented on the UNDO issue, I thought I would try\nto summarize the comments so we can come to some kind of conclusion.\n\nHere are the issues as I see them:\n\n---------------------------------------------------------------------------\n\nDo we want to keep MVCC?\n\nYes. No one has said otherwise.\n\n---------------------------------------------------------------------------\n\nDo we want to head for an overwriting storage manager?\n\nNot sure. \n\nAdvantages: UPDATE has easy space reuse because usually done in-place,\nno index change on UPDATE unless key is changed.\n\nDisadvantages: Old records have to be stored somewhere for MVCC use. \nCould limit transaction size.\n\n---------------------------------------------------------------------------\n\nDo we want UNDO _if_ we are heading for an overwriting storage manager?\n\nEveryone seems to say yes.\n\n---------------------------------------------------------------------------\n\nDo we want UNDO if we are _not_ heading for an overwriting storage\nmanager?\n\nThis is the tough one. UNDO advantages are:\n\t\n\tMake subtransactions easier by rolling back aborted subtransaction. 
\n\tWorkaround is using a new transactions id for each subtransaction.\n\t\n\tEasy space reuse for aborted transactions.\n\t\n\tReduce size of pg_log.\n\nUNDO disadvantages are:\n\n\tLimit size of transactions to log storage size.\n\n---------------------------------------------------------------------------\n\nIf we are heading for an overwriting storage manager, we may as well get\nUNDO now. If we are not, then we have to decide if we can solve the\nproblems that UNDO would fix. Basically, can we solve those problems\neasier without UNDO, or are the disadvanges of UNDO too great?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 24 May 2001 21:00:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Plans for solving the VACUUM problem"
}
]
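Bruce's summary mentions a workaround for subtransactions that needs no UNDO: give each subtransaction its own transaction id, so rolling back to a savepoint is just marking that id aborted and letting visibility checks hide its tuples. The dead versions stay on disk until reclaimed, which is exactly the pg_log-size and space-reuse cost listed above. A toy sketch of the idea, with invented names:

```python
class XidLog:
    """Toy stand-in for pg_log: maps transaction id -> 'commit'/'abort'.

    Savepoint rollback without UNDO: each subtransaction gets its own
    xid, and "undoing" it is just marking that xid aborted.  Its tuples
    are then filtered out by visibility checks rather than being
    physically removed from the data files.
    """
    def __init__(self):
        self.status = {}   # xid -> 'commit' or 'abort'
        self.next_xid = 1

    def begin(self):
        """Hand out a fresh xid (one per transaction or subtransaction)."""
        xid = self.next_xid
        self.next_xid += 1
        return xid

    def commit(self, xid):
        self.status[xid] = 'commit'

    def abort(self, xid):
        self.status[xid] = 'abort'

    def tuple_live(self, xmin):
        """A tuple survives only if its creating xid committed."""
        return self.status.get(xmin) == 'commit'
```

The price is visible in the sketch: pg_log must record a status for every subtransaction xid, and aborted tuples linger until VACUUM — the trade-off the summary weighs against implementing UNDO.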
[
{
"msg_contents": "> > - A simple typo in psql can currently cause a forced \n> > rollback of the entire TX. UNDO should avoid this.\n> \n> Yes, I forgot to mention this very big advantage, but undo is\n> not the only possible way to implement savepoints. Solutions\n> using CommandCounter have been discussed.\n\nThis would be hell.\n\nVadim\n",
"msg_date": "Thu, 24 May 2001 11:06:08 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: AW: Plans for solving the VACUUM problem"
}
]
[
{
"msg_contents": "At 10:00 AM 5/24/01 -0700, Mikheev, Vadim wrote:\n>> >> Impractical ? Oracle does it.\n>> >\n>> >Oracle has MVCC?\n>> \n>> With restrictions, yes.\n>\n>What restrictions? Rollback segments size?\n>Non-overwriting smgr can eat all disk space...\n\nActually, the restriction I'm thinking about isn't MVCC related, per\nse, but a within-transaction restriction. The infamous \"mutating table\"\nerror.\n\n>> You didn't know that? Vadim did ...\n>\n>Didn't I mention a few times that I was\n>inspired by Oracle? -:)\n\nYes, you most certainly have!\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Thu, 24 May 2001 11:16:46 -0700",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem "
},
{
"msg_contents": "At 10:00 AM 24-05-2001 -0700, Mikheev, Vadim wrote:\n>> >> Impractical ? Oracle does it.\n>> >\n>> >Oracle has MVCC?\n>> \n>> With restrictions, yes.\n>\n>What restrictions? Rollback segments size?\n>Non-overwriting smgr can eat all disk space...\n>\n>> You didn't know that? Vadim did ...\n>\n>Didn't I mention a few times that I was\n>inspired by Oracle? -:)\n\nIs there yet another way to do it, that could be better? Has Oracle\nactually done it the best way for once? ;).\n\nBut as long as Postgresql doesn't get painted into a corner, it doesn't\nmatter so much to me - I believe you guys can do a good job (as long as\n\"it's ready when it's ready\", not \"when Marketing says so\"). \n\nMy worry is if suddenly it is better to do savepoints another way, but it\nchanges _usage_ and thus breaks apps. Or it makes Postgresql look really\nugly. \n\nRight now Postgresql is fairly neat/clean with only a few exceptions\n(BLOBs, VACUUM). Whereas Oracle is full of so many cases where things were\ndone wrong but had to be kept that way (and erm why VARCHAR2?). And full of\nbits slapped on. So I sure hope postgresql doesn't end up like Oracle.\nBecause if I want a Frankenstein database I'll go for Oracle. Sure it's\npowerful and all that, but it's damn ugly...\n\nTake all the time you want to do it right, coz once Postgresql gets really\npopular, your hands will be even more tied. When that happens it's better\nto be tied to a nice spot eh?\n\nCheerio,\nLink.\n\n",
"msg_date": "Fri, 25 May 2001 10:26:51 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "RE: Plans for solving the VACUUM problem "
}
]
[
{
"msg_contents": "> > > >Oracle has MVCC?\n> > >\n> > > With restrictions, yes.\n> > \n> > What restrictions? Rollback segments size?\n> > Non-overwriting smgr can eat all disk space...\n> \n> Is'nt the same true for an overwriting smgr ? ;)\n\nRemoving dead records from rollback segments should\nbe faster than from datafiles.\n\n> > > You didn't know that? Vadim did ...\n> > \n> > Didn't I mention a few times that I was\n> > inspired by Oracle? -:)\n> \n> How does it do MVCC with an overwriting storage manager ?\n\n1. System Change Number (SCN) is used: system increments it\n on each transaction commit.\n2. When scan meets data block with SCN > SCN as it was when\n query/transaction started, old block image is restored\n using rollback segments.\n\n> Could it possibly be a Postgres-inspired bolted-on hack \n> needed for better concurrency ?\n\n-:)) Oracle has MVCC for years, probably from the beginning\nand for sure before Postgres.\n\n> BTW, are you aware how Interbase does its MVCC - is it more \n> like Oracle's way or like PostgreSQL's ?\n\nLike ours.\n\nVadim\n",
"msg_date": "Thu, 24 May 2001 17:23:19 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem"
},
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > > > >Oracle has MVCC?\n> > > >\n> > > > With restrictions, yes.\n> > >\n> > > What restrictions? Rollback segments size?\n> > > Non-overwriting smgr can eat all disk space...\n> >\n> > Is'nt the same true for an overwriting smgr ? ;)\n> \n> Removing dead records from rollback segments should\n> be faster than from datafiles.\n\nIs it for better locality or are they stored in a different way ?\n\nDo you think that there is some fundamental performance advantage \nin making a copy to rollback segment and then deleting it from \nthere vs. reusing space in datafiles ?\n\nOne thing (not having to update non-changing index entries) can be \nquite substantial under some scenarios, but there are probably ways \nto at least speed up part of this by doing other compromises, perhaps \nby saving more info in the index leaf (trading lookup speed for space \nand insert speed) or chaining data pages (trading insert speed for \n(some) space and lookup speed).\n\n> > > > You didn't know that? Vadim did ...\n> > >\n> > > Didn't I mention a few times that I was\n> > > inspired by Oracle? -:)\n> >\n> > How does it do MVCC with an overwriting storage manager ?\n> \n> 1. System Change Number (SCN) is used: system increments it\n> on each transaction commit.\n> 2. When scan meets data block with SCN > SCN as it was when\n> query/transaction started, old block image is restored\n> using rollback segments.\n\nYou mean it is restored in the session that is running the transaction ?\n\nI guess that it could be slower than our current way of doing it.\n\n> > Could it possibly be a Postgres-inspired bolted-on hack\n> > needed for better concurrency ?\n> \n> -:)) Oracle has MVCC for years, probably from the beginning\n> and for sure before Postgres.\n\nIn that case we can claim that their way is more primitive ;) ;)\n\n-----------------\nHannu\n",
"msg_date": "Fri, 25 May 2001 11:01:53 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Plans for solving the VACUUM problem"
}
] |
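Vadim's two-step description of Oracle-style SCN visibility (commit bumps a global System Change Number; a scan that meets a block newer than its snapshot restores the old image from the rollback segment) can be sketched as a toy simulation. All structures here are illustrative assumptions, not Oracle's or PostgreSQL's actual implementation:

```python
# Toy sketch of SCN-based MVCC over an overwriting store with a rollback
# segment. Illustrative only; real systems work on blocks, not dicts.

class Store:
    def __init__(self):
        self.scn = 0        # global System Change Number
        self.blocks = {}    # block_id -> (scn, value), overwritten in place
        self.rollback = []  # saved old images: (block_id, old_scn, old_value)

    def commit_write(self, block_id, value):
        """Overwrite a block in place, saving its old image first."""
        if block_id in self.blocks:
            old_scn, old_value = self.blocks[block_id]
            self.rollback.append((block_id, old_scn, old_value))
        self.scn += 1                       # each commit bumps the SCN
        self.blocks[block_id] = (self.scn, value)

    def read(self, block_id, snapshot_scn):
        """Read as of snapshot_scn, restoring old images when needed."""
        scn, value = self.blocks[block_id]
        while scn > snapshot_scn:           # block is newer than our snapshot
            # walk the rollback segment backwards for an older image
            for b, old_scn, old_value in reversed(self.rollback):
                if b == block_id and old_scn < scn:
                    scn, value = old_scn, old_value
                    break
            else:
                raise LookupError("snapshot too old")  # old image is gone
        return value

store = Store()
store.commit_write("blk1", "v1")    # commits at SCN 1
snapshot = store.scn                # a long-running reader starts here
store.commit_write("blk1", "v2")    # SCN 2 overwrites the block in place
old = store.read("blk1", snapshot)  # restored from rollback: "v1"
new = store.read("blk1", store.scn) # current image: "v2"
```

Note that the restore work lands on the session that needs the old data, which is exactly Hannu's concern above: newer transactions read the current image at full speed, while only the older reader pays for reconstruction.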
[
{
"msg_contents": "I'm trying to get my geometric type to spatially index. I tried RTrees,\nbut they don't like objects that are bigger than 8k.\n\nI'm now trying to get a GiST index to index based on the bounding box\nthat's contained inside the geometry. So the index is on a GEOMETRY\ntype, but the index is only acting on the GEOMETRY->bvolume (which is a\nBOX3D).\n\nSo far, it doesn't work. Only one of my GiST support functions is called\n(the compress function), after that I get the error message:\n\n # create index qq on tp3 using gist (the_geom gist_geometry_ops) with\n(islossy);\nERROR: index_formtuple: data takes 8424504 bytes, max is 8191\n\nI simplified all the geometry in the test (tp3) table so they\ncontain only one point - each object is only a few hundred bytes, and\nthere are only 100 rows.\n\nI'm obviously doing something very wrong.\n\nMy compress function looks like:\n\nGISTENTRY *ggeometry_compress(GISTENTRY *entry)\n{\n BOX3D *tmp;\n GISTENTRY *retval;\n\n if (entry->leafkey) \n {\n\t\ttmp = (BOX3D *) palloc(sizeof(BOX3D));\n\t\tmemcpy((char *) tmp, (char *) &(((GEOMETRY *)(entry->pred))->bvol),\nsizeof(BOX3D));\n\n\t\tretval = palloc(sizeof(GISTENTRY));\n\t\tgistentryinit(*retval, (char *)tmp, entry->rel, entry->page, \n\t\t \t\t entry->offset, sizeof(BOX3D),FALSE);\n\t\treturn(retval);\n }\n else \n\treturn(entry);\n}\n\n\nOn its first (and only) call, the geometry (\"entry->pred\") really is the\nfirst row in the tp3 table.\n\nDoes anyone have any ideas where to start tracking this problem down? \nAm I writing code for a very old version of GiST?\n\nI've tried to find other examples of GiST using compression, but none of\nthem work. \"contrib/intarray\" in the standard distribution just spins\n(cpu 100%) when you try to build an index, and\n\"http://s2k-ftp.cs.berkeley.edu:8000/gist/pggist/\" has an example using\nthe standard built-in polygon type (I based my code on it) - but it's\nreally really old and I spent a few hours trying to get it to compile,\nthen gave up.\n\n\nAny ideas or examples?\n\ndave\nps. I'm using postgresql 7.1.1 with the gist.c 7.1 patch. I get the\nexact same result with out-of-the-box 7.1.1.\npps. My code is available at\nftp://ftp.refractions.net/pub/refractions/postgis.c\n\t\t\t ftp://ftp.refractions.net/pub/refractions/postgis.h\n and the sql definitions are at\nftp://ftp.refractions.net/pub/refractions/def.sql\n and a dump of the tp3 table is at\nftp://ftp.refractions.net/pub/refractions/tp3.sql\n",
"msg_date": "Thu, 24 May 2001 18:55:51 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": true,
"msg_subject": "GiST index on data types that require compression"
},
{
"msg_contents": "Dave Blasby <dblasby@refractions.net> writes:\n> So far, it doesnt work. Only one of my GiST support functions is called\n> (the compress function), after that I get the error message:\n> # create index qq on tp3 using gist (the_geom gist_geometry_ops) with\n> (islossy);\n> ERROR: index_formtuple: data takes 8424504 bytes, max is 8191\n\nIt looks like the GIST code expects your compress function to give back\na varlena datatype, not the fixed-length type you are actually handing\nback. The ridiculous length comes from interpreting the first word\nof your BOX3D as a length.\n\nThere are/were provisions in the GIST code for having the compress\nfunction emit a different datatype than it takes in, but I think they\nare incomplete or broken. Might be easiest to produce a varlena result\nfor now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2001 01:00:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GiST index on data types that require compression "
},
{
"msg_contents": "On Fri, 25 May 2001, Tom Lane wrote:\n\n> Dave Blasby <dblasby@refractions.net> writes:\n> > So far, it doesnt work. Only one of my GiST support functions is called\n> > (the compress function), after that I get the error message:\n> > # create index qq on tp3 using gist (the_geom gist_geometry_ops) with\n> > (islossy);\n> > ERROR: index_formtuple: data takes 8424504 bytes, max is 8191\n>\n> It looks like the GIST code expects your compress function to give back\n> a varlena datatype, not the fixed-length type you are actually handing\n> back. The ridiculous length comes from interpreting the first word\n> of your BOX3D as a length.\n>\n> There are/were provisions in the GIST code for having the compress\n> function emit a different datatype than it takes in, but I think they\n> are incomplete or broken. Might be easiest to produce a varlena result\n> for now.\n\ncompress fully supports fixed-length and varlena types. The problem is\nindex_formtuple - types of key and column could be different\n(example - polygon, where column has varlena type but key is fixed-length).\nAs a workaround one could use the same type for key and column.\nThe 1st integer field in structure BOX3D should be the length of this\nstructure in bytes.\n\nTom, do you have an idea how to fix this problem ?\n\n\tOleg\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 25 May 2001 11:38:44 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: GiST index on data types that require compression "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> compress fully supports fixed-length and varlena types. The problem is\n> index_formtuple - types of key and column could be different\n> (example - polygon, where column has varlena type but key is fixed-length)\n\nRight. There used to be a horrible kluge whereby the user could specify\nthe type to be assumed for the key in the CREATE INDEX command (the\n\"haskeytype\" stuff is the remaining traces of this). This is brain dead\nof course ... the correct way is to look at the pg_proc definition of\nthe compress function and see what type it's declared to return, not\nrely on the user to get it right.\n\nWhat I find just about as objectionable as the old haskeytype hack is\nthat the user has to tell you whether the index is lossy or not. This\nshould be a property available from the system catalogs. Not sure where\nto put it; do we need another column in pg_opclass, or is someplace\nother than the opclass needed?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2001 09:47:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GiST index on data types that require compression "
},
{
"msg_contents": "> What I find just about as objectionable as the old haskeytype hack is\n> that the user has to tell you whether the index is lossy or not. This\n> should be a property available from the system catalogs. Not sure where\n> to put it; do we need another column in pg_opclass, or is someplace\n> other than the opclass needed?\n> \n\nSo, maybe add two fields to pg_opclass?\nbool is_varlena_key\nbool is_lossy_compress\n\nThen index_formtuple must look at is_varlena_key, and 'with (islossy)' \ncould be determined automatically by looking at the ops used.\n\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Fri, 25 May 2001 18:42:58 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: GiST index on data types that require compression"
},
{
"msg_contents": " > So, may by add to pg_opclass two fields?\n > bool is_varlena_key\n > bool is_lossy_compress\n\n\nSorry, I was wrong. In GiST, index_formtuple doesn't know the size of \nfixed-length key types, because only the loadable module has information \nabout the structure of the key. So, maybe it needs a function which \nreturns the size of the key, or index_formtuple must look at GISTENTRY.bytes \n(Note: a lot of current GiST module implementations don't set \nGISTENTRY.bytes). Or fields in pg_opclass:\nint len_key\nbool is_lossy_compress\n\nIf len_key==-1 then the key is a varlena type.\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Fri, 25 May 2001 20:29:21 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: GiST index on data types that require compression"
},
{
"msg_contents": "I took your (Tom, Oleg, and Teodor's) advice and changed my GiST code so\nit compresses a normal GEOMETRY into a BOX3D-only-GEOMETRY by stripping\nout the actual geometric information. This way, everything is\nconsistent.\n\nI now appear to be able to create and use a GiST index (well, it works\non my 1 test case ;^) ).\n\nUnfortunately, when I issue the CREATE INDEX command, it takes a really\nlong time. The system goes to about 80% iowait (according to top), or\nmostly idle. There's lots of memory free. Any ideas why?\n\ndave\n",
"msg_date": "Fri, 25 May 2001 12:00:54 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": true,
"msg_subject": "Re: GiST index on data types that require compression"
},
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> So, may by add to pg_opclass two fields?\n> bool is_varlena_key\n> bool is_lossy_compress\n\nCertainly not 'is_varlena_key', since that's not all the info you\nneed about the key datatype --- a type OID would be more appropriate.\nBut it seems to me that we should be able to learn the key type OID\nby examining the signature of the compression function.\n\nThe real question is whether the opclass is the right place for this\ninfo. After thinking some more, I'm inclined to think not, since the\nopclass isn't tied to a single index type. For example, poly_ops\nmight be lossy for GIST but not for another index type such as rtree.\n\nIt occurs to me that the structure of pg_opclass/pg_amop/pg_amproc\nmight be wrong. Perhaps pg_opclass should be indexed by (AM OID,\nopclass name) not just opclass name, and then we could remove the\namopid columns from pg_amop and pg_amproc, since the OID of a\npg_opclass row would be sufficient to identify the access method.\nThis would allow us to put access-method-specific information into\npg_opclass. It would also be clearer whether a given AM supports\na given opclass name or not (right now, one has to see whether there\nare matching entries in the other tables, which is pretty iffy\nconsidering that one doesn't know how many there should be).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2001 18:51:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GiST index on data types that require compression "
},
{
"msg_contents": "I'm trying to keep a variable around for the duration of a transaction.\n\nUnfortunately, the \"SET LOCAL\" command doesn't allow me to create my own \nvariable. Also, the \"CREATE TEMP TABLE ... ON COMMIT DELETE ROWS\" isn't \nyet implemented.\n\nBut, I believe I can implement it using a TEMP table like this:\n\nCREATE TEMP TABLE my_variable (transId xid, value int);\n\nINSERT INTO my_variable VALUES (getTransactionID(), 25);\n\nAnd I can read from the table with:\n\nSELECT value FROM my_variable WHERE transId = getTransactionID();\n\nThe question is, how to write the getTransactionID() function.\n\nI'm comfortable writing \"C\" extensions to postgresql, but I'm not sure \nwhere to actually get the current transaction id. Could someone give me \npointers to where I can find this magic value?\n\ndave\n\n",
"msg_date": "Thu, 24 Jul 2003 11:55:19 -0700",
"msg_from": "David Blasby <dblasby@refractions.net>",
"msg_from_op": false,
"msg_subject": "Getting the current transaction's xid"
}
] |
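Tom's diagnosis in the thread above (index_formtuple read the first 32-bit word of Dave's fixed-length BOX3D as a varlena length header, yielding the absurd 8424504 bytes) can be illustrated with a toy byte-layout sketch. The formats below are simplified illustrations, not PostgreSQL's actual on-disk varlena header:

```python
# Sketch of why a fixed-length key confused index_formtuple: it read the
# first word of the struct as a varlena length. Layouts are illustrative;
# PostgreSQL's real varlena header differs across versions.
import struct

def make_box3d(xmin, ymin, zmin, xmax, ymax, zmax):
    """Fixed-length BOX3D: six doubles, no length word at the front."""
    return struct.pack("<6d", xmin, ymin, zmin, xmax, ymax, zmax)

def make_varlena(payload):
    """Varlena-style datum: int32 total length, then the payload."""
    return struct.pack("<I", 4 + len(payload)) + payload

def read_length_word(datum):
    """What the tuple-forming code effectively does for a varlena key."""
    return struct.unpack_from("<I", datum)[0]

box = make_box3d(0.1, 2.0, 3.0, 4.0, 5.0, 6.0)
bogus = read_length_word(box)   # low word of the double 0.1: nonsense
vbox = make_varlena(box)
good = read_length_word(vbox)   # 4-byte header + 48-byte payload = 52
```

The fix Dave applied (wrapping the BOX3D in a value that carries its own correct length word, here `make_varlena`) makes the length the code reads agree with the bytes actually stored.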
[
{
"msg_contents": "Hi,\n\nI think I've come across a bug in plpgsql. It happens in the following\nsituation:\n\nI have 2 tables, one with a foreign key to the other.\nInside a plpgsql function, I do:\n update row in table2\n delete that row in table2\n delete the referenced row in table1\n\nAnd I get a foreign key constraint error. I apologize if that's not clear,\nbut hopefully the test case is more explanatory...\n\n-- create --\n\ncreate table foo (id integer primary key);\ncreate table bar (id integer references foo);\n\ninsert into foo (id) values (1);\ninsert into bar (id) values (1);\n\ncreate function f_1 ()\nreturns integer as '\nbegin\n --any update statement causes problems\n update bar set id=1 where id=1;\n delete from bar where id = 1;\n delete from foo where id = 1;\n return 0; \nend;' language 'plpgsql';\n\ndrop function f_2 ();\ncreate function f_2 ()\nreturns integer as '\nbegin\n -- no update statement\n delete from bar where id = 1;\n delete from foo where id = 1;\n return 0; \nend;' language 'plpgsql';\n\n--Tests:\n-- Tests attempt to delete a row from bar & foo\n-- Thus the result of select count(*) from foo should be 0 \n\n--test1: Test plpgsql with an update before a delete -> fails\n\nselect f_1();\nselect count(*) from foo;\n\nERROR: <unnamed> referential integrity violation - key referenced from bar not found in foo\n count \n-------\n 1\n\n--test2: Test plpgsql with just a delete -> succeeds\n-- wrap in a transaction so I can rollback & do test3\n\nbegin transaction;\nselect f_2();\nselect count(*) from foo;\nrollback;\n\n count \n-------\n 0\n\nROLLBACK\n\n--test3: Test direct sql with update before a delete in transaction -> succeeds\n\nbegin transaction;\nupdate bar set id=1 where id=1;\ndelete from bar where id = 1;\ndelete from foo where id = 1;\nselect count(*) from foo;\nend transaction;\n\nUPDATE 1\nDELETE 1\nDELETE 1\n count \n-------\n 0\n\nCOMMIT\n\nIt seems like function f_1 should succeed, but it doesn't...\n\nVinod\n\n\n-- \n_____________________________\nVinod Kurup, MD\nemail: vkurup@massmed.org\nphone: 617.277.2012\ncell: 617.359.5990\nhttp://www.kurup.com\naim: vvkurup\n",
"msg_date": "Fri, 25 May 2001 00:11:57 -0400",
"msg_from": "Vinod Kurup <vkurup@massmed.org>",
"msg_from_op": true,
"msg_subject": "plpgsql update bug?"
},
{
"msg_contents": "On Fri, 25 May 2001, Vinod Kurup wrote:\n\n> Hi,\n> \n> I think I've come across a bug in plpgsql. It happens in the following\n> situation:\n> \n> I have 2 tables, one with a foreign key to the other.\n> Inside a plpgsql function, I do:\n> update row in table2\n> delete that row in table2\n> delete the referenced row in table1\n> \n> And I get a foreign key constraint error. I apologize if that's not clear,\n> but hopefully the test case is more explanatory...\n\nOkay, I think I may understand why this occurs. This is a\nvery similar problem to the deferred constraints problem we\nhave. It doesn't realize that the fk row isn't there anymore\nand shouldn't be checked.\n\nMy guess is that these statements are all treated as part of \na single statement when put inside the function, which is why\nthey're treated differently than as separate statements in a\ntransaction. \n\nI'm not sure whether or not this is actually a triggered data change\nviolation (I don't have a draft of 99 to check right now) as it's\nattempting to delete a row that was previously modified in the statement\n(assuming that it's treated as a single statement of course). I think the\ntriggered data change may only apply to updates though.\n\nI think the following checks are needed (at least for the deferred case, \nand this case as well). These checks only work for match full and\nmatch unspecified, but we don't support match partial anyway:\nOn insert/update to fk check, can we see a row exist with the new values?\n If not, we don't need to check, it's already been deleted or updated\n again in which case we want the later trigger to act.\nOn delete/update from pk with no action, can we see a row with the old\n values?\n If so, we don't need to check, anything that succeeded before will\n succeed now.\n\nI'm a bit uncertain on the deferred cases with action. The spec is none\ntoo clear about when the actions occur. Although it appears to me\nthat it's at statement time, not check time since it mentions things\nlike \"marked for deletion\" which I believe is a statement level thing\n(with said rows deleted at the end of the statement before integrity\nchecks are applied).\n\n",
"msg_date": "Fri, 25 May 2001 11:47:38 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql update bug?"
}
] |
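Stephan's proposed checks above can be modeled with a toy trigger queue: RI checks are queued as rows change and fired at the end, and each queued check first re-examines current state (can we still see a referencing row?) before raising. Everything here is an illustrative assumption about the mechanism, not PostgreSQL's actual trigger code:

```python
# Toy model of the proposed RI re-check. Without the "is the row still
# visible?" test at check time, the fk check queued by the UPDATE in
# Vinod's f_1() fires after foo's row is gone and raises, which mirrors
# the reported bug. Tables are modeled as sets of key values.

def run_function(statements, foo, bar):
    """Execute (table, op, key) steps, queueing RI checks, then fire them
    with the visibility re-check applied at check time."""
    queued = []
    for table, op, key in statements:
        if table == "bar" and op in ("update", "delete"):
            if op == "delete":
                bar.discard(key)
            queued.append(("fk_check", key))   # does key still exist in foo?
        elif table == "foo" and op == "delete":
            foo.discard(key)
            queued.append(("pk_check", key))   # is key still referenced?
    for kind, key in queued:
        if kind == "fk_check":
            # re-check: only a *still-visible* referencing row can violate
            if key in bar and key not in foo:
                raise ValueError("referential integrity violation")
        else:  # pk_check
            if key in bar:                     # still referenced: violation
                raise ValueError("referential integrity violation")
    return foo, bar

# update bar, delete bar, delete foo: the sequence from Vinod's f_1()
foo, bar = run_function(
    [("bar", "update", 1), ("bar", "delete", 1), ("foo", "delete", 1)],
    {1}, {1})

# deleting the pk row while bar still references it must still fail
try:
    run_function([("foo", "delete", 1)], {1}, {1})
    caught = False
except ValueError:
    caught = True
```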
[
{
"msg_contents": "\n------- Forwarded Message\n\nDate: Fri, 25 May 2001 00:47:13 +0200\nFrom: Michal Politowski <mpol@charybda.icm.edu.pl>\nTo: Debian Bug Tracking System <submit@bugs.debian.org>\nSubject: Bug#98643: plpgsql SELECT INTO causes trouble when assignment impossib\n\t le\n\nPackage: postgresql\nVersion: 7.1.1-4\nSeverity: important\n\nAfter\npsql template1 -f x.sql\n\nwhere x.sql looks as follows:\n\n- ----- x.sql -----\nCREATE DATABASE test;\n\\c test\nCREATE TABLE foo (bar int4);\nINSERT INTO foo VALUES (17);\nCREATE FUNCTION quux() RETURNS int4 AS '\nDECLARE\n\tbaz int4;\nBEGIN\n\tSELECT INTO baz bar FROM foo WHERE bar > 23;\n\tRETURN 42;\nEND;\n' LANGUAGE 'plpgsql';\nSELECT quux();\n- ----------\n\nI get this on screen:\n\nCREATE DATABASE\nYou are now connected to database test.\nCREATE\nINSERT 117035 1\nCREATE\npsql:x.sql:13: pqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n\tbefore or while processing the request.\npsql:x.sql:13: connection to server was lost\n\nAnd this in logs:\nMay 25 00:39:13 Amber postgres[15856]: [1] DEBUG: connection: host=[local] use\nr=mike database=template1\nMay 25 00:39:13 Amber postgres[15858]: [1] DEBUG: connection: host=[local] use\nr=mike database=test\nMay 25 00:39:13 Amber postgres[15859]: [1] DEBUG: database system was interrup\nted at 2001-05-25 00:38:16 CEST\nMay 25 00:39:13 Amber postgres[15859]: [2] DEBUG: CheckPoint record at (0, 377\n3544)\nMay 25 00:39:13 Amber postgres[15859]: [3] DEBUG: Redo record at (0, 3773544);\n Undo record at (0, 0); Shutdown TRUE\nMay 25 00:39:13 Amber postgres[15859]: [4] DEBUG: NextTransactionId: 1134; Nex\ntOid: 117024\nMay 25 00:39:13 Amber postgres[15859]: [5] DEBUG: database system was not prop\nerly shut down; automatic recovery in progress...\nMay 25 00:39:13 Amber postgres[15859]: [6] DEBUG: redo starts at (0, 3773608)\nMay 25 00:39:13 Amber postgres[15859]: [7] DEBUG: ReadRecord: record with zero\n len at (0, 3883500)\nMay 25 00:39:13 Amber postgres[15859]: [8] DEBUG: redo done at (0, 3883464)\nMay 25 00:39:16 Amber postgres[15859]: [9] DEBUG: database system is in produc\ntion state\n\n- -- System Information\nDebian Release: testing/unstable\nArchitecture: i386\nKernel: Linux Amber 2.2.19 #1 Thu Mar 29 15:52:51 CEST 2001 i586\nLocale: LANG=pl_PL, LC_CTYPE=C\n\nVersions of packages postgresql depends on:\nii debianutils 1.15 Miscellaneous utilities specific t\nii libc6 2.2.3-1 GNU C Library: Shared libraries an\nii libpgsql2.1 7.1.1-4 Shared library libpq.so.2.1 for Po\nii libreadline4 4.2-3 GNU readline and history libraries\nii libssl0.9.6 0.9.6a-3 SSL shared libraries \nii postgresql-client 7.1.1-4 Front-end programs for PostgreSQL \nii procps 1:2.0.7-4 The /proc file system utilities. \nii zlib1g 1:1.1.3-15 compression library - runtime \n\n- -- \nMichal Politowski -- mpol@lab.icm.edu.pl\nWarning: this is a memetically modified message\n\n\n------- End of Forwarded Message\n\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"And Jesus answering said unto them, They that are\n whole need not a physician; but they that are sick. I\n come not to call the righteous, but sinners to\n repentance.\" Luke 5:31,32\n\n\n",
"msg_date": "Fri, 25 May 2001 07:08:15 +0100",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Bug#98643: plpgsql SELECT INTO causes trouble when assignment\n\timpossible (fwd)"
},
{
"msg_contents": "\"Oliver Elphick\" <olly@lfix.co.uk> writes:\n> psql:x.sql:13: pqReadData() -- backend closed the channel unexpectedly.\n\nThis is the already-fixed-in-7.1.2 problem with empty SELECTs in\nplpgsql.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2001 10:23:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug#98643: plpgsql SELECT INTO causes trouble when assignment\n\timpossible (fwd)"
}
] |
[
{
"msg_contents": "\n> > >> Impractical ? Oracle does it.\n> > >\n> > >Oracle has MVCC?\n> > \n> > With restrictions, yes.\n> \n> What restrictions? Rollback segments size?\n\nNo, that is not the whole story. The problem with their \"rollback segment approach\" is,\nthat they do not guard against overwriting a tuple version in the rollback segment. \nThey simply recycle each segment in a wrap around manner.\nThus there could be an open transaction that still wanted to see a tuple version\nthat was already overwritten, leading to the feared \"snapshot too old\" error.\n\nCopying their \"rollback segment\" approach is imho the last thing we want to do.\n\n> Non-overwriting smgr can eat all disk space...\n> \n> > You didn't know that? Vadim did ...\n> \n> Didn't I mention a few times that I was inspired by Oracle? -:)\n\nLooking at what they supply in the feature area is imho good.\nCopying their technical architecture is not so good in general.\n\nAndreas\n",
"msg_date": "Fri, 25 May 2001 09:44:14 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Plans for solving the VACUUM problem "
}
] |
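Andreas's point above (Oracle recycles rollback segments in a wrap-around manner, so a long-running transaction can find the version it needs already overwritten, producing "snapshot too old") can be sketched with a fixed-size ring buffer. This is a toy illustration of the failure mode, not Oracle's actual rollback-segment format:

```python
# Toy wrap-around rollback segment: old tuple versions go into a
# fixed-size ring and are silently overwritten, so a long-running
# reader may find its version gone.

class RollbackRing:
    def __init__(self, slots):
        self.slots = [None] * slots   # each slot holds (version, value)
        self.next = 0                 # wrap-around write position

    def save(self, version, value):
        """Save an old version, unconditionally recycling the oldest slot."""
        self.slots[self.next] = (version, value)
        self.next = (self.next + 1) % len(self.slots)

    def read_version(self, version):
        """Fetch a saved version, or fail if it was already recycled."""
        for entry in self.slots:
            if entry is not None and entry[0] == version:
                return entry[1]
        raise LookupError("snapshot too old")

ring = RollbackRing(slots=2)
ring.save(1, "v1")     # an old version a long transaction still needs
ring.save(2, "v2")
ring.save(3, "v3")     # wraps around, overwriting version 1
ok = ring.read_version(2)
try:
    ring.read_version(1)
    too_old = False
except LookupError:
    too_old = True     # the feared error
```

A non-overwriting store avoids this error by keeping old versions in the datafiles until vacuumed, at the cost Vadim describes: dead data can grow without bound instead.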
[
{
"msg_contents": "Sometimes PQfinish() never returns in the following program.\n\nconn = PQsetdbLogin();\nif(PQstatus(conn) == CONNECTION_BAD)\n{\n PQfinish(conn);\t/* blocks here */\n}\n\nPQfinish calls closePGconn, which calls pqPuts. pqPuts calls select(2)\nand it never returns if a connection associated with the socket is not\nestablished.\n\nThis could happen if the connection is not established, but the\nsocket is still opened by PQsetdbLogin. Possible fixes are:\n\n1) close the socket in PQsetdbLogin if it is in a situation that\n returns CONNECTION_BAD\n\n\t\tcase PGRES_POLLING_WRITING:\n\t\t\tif (pqWait(0, 1, conn))\n\t\t\t{\n\t\t\t\tconn->status = CONNECTION_BAD;\n\t\t\t\tclose(conn->sock); <-- add this\n\t\t\t\tconn->sock = -1; <-- add this\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\tbreak;\n\n\n2) check in closePGconn if the status of the handle returned by\n PQsetdbLogin is CONNECTION_BAD. If so, do not call pqPuts (and\n pqFlush)\n\n change this:\n\tif (conn->sock >= 0)\n\n to:\n\tif (conn->status != CONNECTION_BAD && conn->sock >= 0)\n\nany thoughts?\n--\nTatsuo Ishii\n\n\n",
"msg_date": "Fri, 25 May 2001 22:12:39 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "PQsetdbLogin bug?"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> 2) check if the status of handle returned PQsetdbLogin is\n> CONNECTION_BAD closePGconn. if so, do not call pqPuts (and\n> pqFlush)\n\nI like this approach. The other way, you'd have to be sure that all\nfailure paths close the socket before returning; even if you get it\nright today, somebody will break it again in future. The sending of\nthe final 'X' is really quite optional anyhow, so I'd say it's fine\nnot to do it when there's any question about whether it's safe.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2001 10:26:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PQsetdbLogin bug? "
},
{
"msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > 2) check if the status of handle returned PQsetdbLogin is\n> > CONNECTION_BAD closePGconn. if so, do not call pqPuts (and\n> > pqFlush)\n> \n> I like this approach. The other way, you'd have to be sure that all\n> failure paths close the socket before returning; even if you get it\n> right today, somebody will break it again in future. The sending of\n> the final 'X' is really quite optional anyhow, so I'd say it's fine\n> not to do it when there's any question about whether it's safe.\n\ndone.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 28 May 2001 13:36:23 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: PQsetdbLogin bug? "
}
] |
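Tatsuo's fix (2), which Tom endorsed and which was committed, can be modeled as a small state sketch: close skips sending the optional terminate message whenever the connection status is bad, so cleanup can never block on a half-open socket. The names below mirror the quoted C guard but are otherwise illustrative, not libpq's real internals:

```python
# Toy model of the patched closePGconn guard: send the terminate message
# only when the connection was actually established. Illustrative only.

CONNECTION_OK, CONNECTION_BAD = "ok", "bad"

class Conn:
    def __init__(self, status, sock):
        self.status = status
        self.sock = sock              # >= 0 means the fd is still open
        self.sent_terminate = False

def close_pgconn(conn):
    """Mirror of the patched guard:
    if (conn->status != CONNECTION_BAD && conn->sock >= 0) ..."""
    if conn.status != CONNECTION_BAD and conn.sock >= 0:
        conn.sent_terminate = True    # pqPuts("X") + pqFlush would go here,
                                      # and could block on a half-open socket
    conn.sock = -1                    # always release the descriptor

good = Conn(CONNECTION_OK, sock=3)
bad = Conn(CONNECTION_BAD, sock=4)    # socket open but never established
close_pgconn(good)                    # terminate sent, then closed
close_pgconn(bad)                     # no terminate attempt, just closed
```

As Tom notes, this approach is robust against future failure paths that forget to close the socket, because the decision is made once at cleanup time rather than at every error return.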
[
{
"msg_contents": "> Do we want to head for an overwriting storage manager?\n> \n> Not sure. \n> \n> Advantages: UPDATE has easy space reuse because usually done\n> in-place, no index change on UPDATE unless key is changed.\n> \n> Disadvantages: Old records have to be stored somewhere for MVCC use. \n> Could limit transaction size.\n\nReally? Why is it assumed that we *must* limit the size of rollback segments?\nWe can let them grow without bounds, as we do now keeping old records in\ndatafiles and letting them eat all of disk space.\n\n> UNDO disadvantages are:\n> \n> \tLimit size of transactions to log storage size.\n\nDon't be kidding - in any system transaction size is limited\nby available storage. So we should tell that more disk space\nis required for UNDO. From my POV, putting $100 to buy a 30Gb\ndisk is not a big deal, keeping in mind that PGSQL requires\n$ZERO to be used.\n\nVadim\n",
"msg_date": "Fri, 25 May 2001 09:37:16 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem"
},
{
"msg_contents": "> -----Original Message-----\n> From: Mikheev, Vadim [mailto:vmikheev@SECTORBASE.COM]\n> \n> > Do we want to head for an overwriting storage manager?\n> > \n> > Not sure. \n> > \n> > Advantages: UPDATE has easy space reuse because usually done\n> > in-place, no index change on UPDATE unless key is changed.\n> > \n> > Disadvantages: Old records have to be stored somewhere for MVCC use. \n> > Could limit transaction size.\n> \n> Really? Why is it assumed that we *must* limit size of rollback segments?\n> We can let them grow without bounds, as we do now keeping old records in\n> datafiles and letting them eat all of disk space.\n> \n\nIs it proper/safe for a DBMS to allow the system eating all disk\nspace ? For example, could we expect to recover the database\neven when no disk space available ?\n\n1) even before WAL \n Is 'deleting records and vacuum' always possible ?\n I saw the cases that indexes grow by vacuum.\n\n2) under WAL(current)\n If DROP or VACUUM is done after a checkpoint, wouldn't\n REDO recovery add the pages drop/truncated by the \n DROP/VACUUM ?\n \n3) with rollback data\n Shouldn't WAL log UNDO operations either ?\n If so, UNDO requires an extra disk space which could\n be unlimitedly big.\n\nThere's another serious problem. Once UNDO is required\nwith a biiiig rollback data, it would take a veeery long time\nto undo. It's quite different from the current behavior. Even\nthough people want to cancel the UNDO, there's no way\nunfortunately(under an overwriting smgr).\n\nregards,\nHiroshi Inoue \n",
"msg_date": "Sun, 27 May 2001 17:32:54 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Plans for solving the VACUUM problem"
}
] |
[
{
"msg_contents": "> > > >Oracle has MVCC?\n> > > \n> > > With restrictions, yes.\n> > \n> > What restrictions? Rollback segments size?\n> \n> No, that is not the whole story. The problem with their\n> \"rollback segment approach\" is, that they do not guard against\n> overwriting a tuple version in the rollback segment.\n> They simply recycle each segment in a wrap around manner.\n> Thus there could be an open transaction that still wanted to\n> see a tuple version that was already overwritten, leading to the\n> feared \"snapshot too old\" error.\n> \n> Copying their \"rollback segment\" approach is imho the last \n> thing we want to do.\n\nSo, they limit size of rollback segments and we don't limit\nhow big our datafiles may grow if there is some long running\ntransaction in serializable mode. We could allow our rollback\nsegments to grow without limits as well.\n\n> > Non-overwriting smgr can eat all disk space...\n> > \n> > > You didn't know that? Vadim did ...\n> > \n> > Didn't I mention a few times that I was inspired by Oracle? -:)\n> \n> Looking at what they supply in the feature area is imho good.\n> Copying their technical architecture is not so good in general.\n\nCopying is not inspiration -:)\n\nVadim\n",
"msg_date": "Fri, 25 May 2001 10:05:31 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem "
}
] |
[
{
"msg_contents": "> > Removing dead records from rollback segments should\n> > be faster than from datafiles.\n> \n> Is it for better locality or are they stored in a different way ?\n\nLocality - all dead data would be localized in one place.\n\n> Do you think that there is some fundamental performance advantage\n> in making a copy to rollback segment and then deleting it from\n> there vs. reusing space in datafiles ?\n\nAs it showed by WAL additional writes don't mean worse performance.\nAs for deleting from RS (rollback segment) - we could remove or reuse\nRS files as whole.\n\n> > > How does it do MVCC with an overwriting storage manager ?\n> > \n> > 1. System Change Number (SCN) is used: system increments it\n> > on each transaction commit.\n> > 2. When scan meets data block with SCN > SCN as it was when\n> > query/transaction started, old block image is restored\n> > using rollback segments.\n> \n> You mean it is restored in session that is running the transaction ?\n> \n> I guess thet it could be slower than our current way of doing it.\n\nYes, for older transactions which *really* need in *particular*\nold data, but not for newer ones. Look - now transactions have to read\ndead data again and again, even if some of them (newer) need not to see\nthose data at all, and we keep dead data as long as required for other\nold transactions *just for the case* they will look there.\nBut who knows?! Maybe those old transactions will not read from table\nwith big amount of dead data at all! So - why keep dead data in datafiles\nfor long time? This obviously affects overall system performance.\n\nVadim\n",
"msg_date": "Fri, 25 May 2001 10:52:17 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem"
}
] |
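[Editor's note: the "snapshot too old" failure mode Jan and Vadim discuss above can be sketched as a toy model. This is not Oracle or PostgreSQL code; all names are hypothetical. The point it illustrates: a fixed-size undo area recycled wrap-around style can discard a tuple version that some still-open snapshot would need.]

```python
class SnapshotTooOld(Exception):
    """Raised when an old snapshot needs a version that was recycled."""
    pass

class RollbackSegment:
    """Toy wrap-around undo log: a fixed number of slots, oldest recycled first."""
    def __init__(self, slots):
        self.slots = slots
        self.undo = {}    # SCN -> tuple version that was current at that SCN
        self.order = []   # insertion order, used for recycling

    def save_version(self, scn, old_value):
        if len(self.order) == self.slots:
            evicted = self.order.pop(0)   # recycle the oldest slot
            del self.undo[evicted]
        self.undo[scn] = old_value
        self.order.append(scn)

    def read_as_of(self, snapshot_scn, current_scn, current_value):
        if snapshot_scn >= current_scn:
            return current_value          # block unchanged since the snapshot
        if snapshot_scn not in self.undo:
            raise SnapshotTooOld("version at SCN %d was recycled" % snapshot_scn)
        return self.undo[snapshot_scn]
```

A reader whose snapshot predates the oldest surviving undo record gets the error, which is exactly the restriction Jan points out; a non-overwriting store avoids it by keeping dead tuples in the datafiles instead, at the space cost Vadim notes.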
[
{
"msg_contents": "\nDoes anyone know what HeapTupleSatisfiesDirty() and\nHeapTupleSatisfiesUpdate() do in tqual.c? There are no comments and I\ncan add them if someone can explain their purpose.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 25 May 2001 20:56:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Request for info on HeapTupleSatisfiesDirty"
}
] |
[
{
"msg_contents": "I just noticed that if I do BEGIN;CREATE TABLE..., and then start VACUUM\nof the database in another psql session, the VACUUM hangs until the\ntransaction completes. Is this expected?\n\n---------------------------------------------------------------------------\n\n#0 0x28256145 in semop () from /shlib/libc.so.2\n#1 0x8118755 in IpcSemaphoreLock (semId=1441793, sem=0, interruptOK=1)\n at ipc.c:426\n#2 0x811dda3 in ProcSleep (lockMethodTable=0x81ff1c0, lockmode=7, \n lock=0x283ace9c, holder=0x283addbc) at proc.c:667\n#3 0x811c898 in WaitOnLock (lockmethod=1, lockmode=7, lock=0x283ace9c, \n holder=0x283addbc) at lock.c:955\n#4 0x811c553 in LockAcquire (lockmethod=1, locktag=0x8046bcc, xid=9198, \n lockmode=7) at lock.c:739\n#5 0x811b67a in LockRelation (relation=0x826b01c, lockmode=7) at lmgr.c:141\n#6 0x80735c5 in heap_open (relationId=1259, lockmode=7) at heapam.c:596\n#7 0x80bf3cd in vacuum_rel (relid=1259) at vacuum.c:455\n#8 0x80befa1 in vacuum (vacstmt=0x82ef1d0) at vacuum.c:239\n#9 0x8124790 in ProcessUtility (parsetree=0x82ef1d0, dest=Remote)\n at utility.c:718\n#10 0x81222cb in pg_exec_query_string (query_string=0x82ef038 \"vacuum;\", \n dest=Remote, parse_context=0x82666f4) at postgres.c:777\n#11 0x81234d8 in PostgresMain (argc=7, argv=0x8046f4c, real_argc=4, \n real_argv=0x804789c, username=0x8258661 \"postgres\") at postgres.c:1908\n#12 0x81094c0 in DoBackend (port=0x8258400) at postmaster.c:2119\n#13 0x8109006 in BackendStartup (port=0x8258400) at postmaster.c:1902\n#14 0x8108189 in ServerLoop () at postmaster.c:1000\n#15 0x8107b6e in PostmasterMain (argc=4, argv=0x804789c) at postmaster.c:690\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 26 May 2001 00:58:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Hanging VACUUM"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I just noticed that if I do BEGIN;CREATE TABLE..., and then start VACUUM\n> of the database in another psql session, the VACUUM hangs until the\n> transaction completes. Is this expected?\n\nSure. You have a write lock on pg_class ... not to mention a few other\nsystem tables, but that's the one this trace shows VACUUM is waiting for\nexclusive lock on:\n\n> #5 0x811b67a in LockRelation (relation=0x826b01c, lockmode=7) at lmgr.c:141\n> #6 0x80735c5 in heap_open (relationId=1259, lockmode=7) at heapam.c:596\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 May 2001 01:26:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hanging VACUUM "
}
] |
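[Editor's note: the wait Tom explains follows from the lock conflict rules; the open CREATE TABLE transaction holds a write lock on pg_class (modeled here as RowExclusiveLock), and VACUUM's heap_open with lockmode 7 requests AccessExclusiveLock, which conflicts with every other mode. A minimal sketch of that conflict check, greatly simplified from the real lock manager:]

```python
# Two of PostgreSQL's lock modes; AccessExclusiveLock conflicts with all
# other modes, so the VACUUM backend must sleep until the holder commits.
ROW_EXCLUSIVE = "RowExclusiveLock"        # held via the open CREATE TABLE
ACCESS_EXCLUSIVE = "AccessExclusiveLock"  # requested by VACUUM (lockmode=7)

CONFLICTS = {
    ROW_EXCLUSIVE: {ACCESS_EXCLUSIVE},
    ACCESS_EXCLUSIVE: {ROW_EXCLUSIVE, ACCESS_EXCLUSIVE},
}

def must_wait(requested, held_modes):
    """True if the requested mode conflicts with any mode already held."""
    return any(held in CONFLICTS[requested] for held in held_modes)
```

With `{ROW_EXCLUSIVE}` held on pg_class, `must_wait(ACCESS_EXCLUSIVE, ...)` is true, which is the ProcSleep seen in frame #2 of the backtrace.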
[
{
"msg_contents": "We have implemented multi-key index support for GiST. Patch is available\nfrom http://www.sai.msu.su/~megera/postgres/gist/code/7.1.2/patch_multikeygist.7.1.2.gz\nThe patch could be applied for postgresql version 7.1.2 and current sources 7.2\n\n1. initdb is required. But, it's possible just to execute update\n update pg_am set amstrategies = 12 where amname = 'gist';\n\n2. You have to recompile all gist_*_ops functions\n\n3. multi-key indexes works only for О©╫О©╫О©╫ gist__int_ops and\n gist__intbig_ops (from contrib/intarray), because they have\n support for NULLs.\n\nAs a bonus we fixed several memory leaks in old GiST code.\n\n\nExample:\n\ncreate index mgix on tabletest using gist (b gist_int_ops, a\n gist__intbig_ops ) with ( islossy );\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 26 May 2001 15:33:01 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "First version of multi-key index support for GiST"
},
{
"msg_contents": "\nDo you want this applied to the current CVS?\n\n> We have implemented multi-key index support for GiST. Patch is available\n> from http://www.sai.msu.su/~megera/postgres/gist/code/7.1.2/patch_multikeygist.7.1.2.gz\n> The patch could be applied for postgresql version 7.1.2 and current sources 7.2\n> \n> 1. initdb is required. But, it's possible just to execute update\n> update pg_am set amstrategies = 12 where amname = 'gist';\n> \n> 2. You have to recompile all gist_*_ops functions\n> \n> 3. multi-key indexes works only for ??? gist__int_ops and\n> gist__intbig_ops (from contrib/intarray), because they have\n> support for NULLs.\n> \n> As a bonus we fixed several memory leaks in old GiST code.\n> \n> \n> Example:\n> \n> create index mgix on tabletest using gist (b gist_int_ops, a\n> gist__intbig_ops ) with ( islossy );\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 26 May 2001 09:53:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First version of multi-key index support for GiST"
},
{
"msg_contents": "On Sat, 26 May 2001, Bruce Momjian wrote:\n\n>\n> Do you want this applied to the current CVS?\n>\n\nSure. I want our development to be in sync with cvs\nThere are several problems we have to resolve but basic functionality\nis there and seems works for us.\n\n\tOleg\n\n> > We have implemented multi-key index support for GiST. Patch is available\n> > from http://www.sai.msu.su/~megera/postgres/gist/code/7.1.2/patch_multikeygist.7.1.2.gz\n> > The patch could be applied for postgresql version 7.1.2 and current sources 7.2\n> >\n> > 1. initdb is required. But, it's possible just to execute update\n> > update pg_am set amstrategies = 12 where amname = 'gist';\n> >\n> > 2. You have to recompile all gist_*_ops functions\n> >\n> > 3. multi-key indexes works only for ??? gist__int_ops and\n> > gist__intbig_ops (from contrib/intarray), because they have\n> > support for NULLs.\n> >\n> > As a bonus we fixed several memory leaks in old GiST code.\n> >\n> >\n> > Example:\n> >\n> > create index mgix on tabletest using gist (b gist_int_ops, a\n> > gist__intbig_ops ) with ( islossy );\n> >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 26 May 2001 20:53:39 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: First version of multi-key index support for GiST"
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it withing the next 48 hours.\n\n[ Charset KOI8-R unsupported, converting... ]\n> We have implemented multi-key index support for GiST. Patch is available\n> from http://www.sai.msu.su/~megera/postgres/gist/code/7.1.2/patch_multikeygist.7.1.2.gz\n> The patch could be applied for postgresql version 7.1.2 and current sources 7.2\n> \n> 1. initdb is required. But, it's possible just to execute update\n> update pg_am set amstrategies = 12 where amname = 'gist';\n> \n> 2. You have to recompile all gist_*_ops functions\n> \n> 3. multi-key indexes works only for ??? gist__int_ops and\n> gist__intbig_ops (from contrib/intarray), because they have\n> support for NULLs.\n> \n> As a bonus we fixed several memory leaks in old GiST code.\n> \n> \n> Example:\n> \n> create index mgix on tabletest using gist (b gist_int_ops, a\n> gist__intbig_ops ) with ( islossy );\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 26 May 2001 23:46:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First version of multi-key index support for GiST"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> We have implemented multi-key index support for GiST. Patch is available\n> from http://www.sai.msu.su/~megera/postgres/gist/code/7.1.2/patch_multikeygist.7.1.2.gz\n\nWhat is the point of the macro\n\n#define ATTGET(itup, Rel, i, isnull ) ((char*)( \\\n\t\t( IndexTupleSize(itup) == sizeof(IndexTupleData) ) ? \\\n\t\t\t\t*(isnull)=true, NULL \\\n\t\t\t: \\\n\t\t\tindex_getattr(itup, i, (Rel)->rd_att, isnull) \\\n\t))\n\nIt appears to me that index_getattr should handle an all-NULL index\ntuple just fine by itself --- certainly the btree code expects it to.\nSo I do not see the reason for this extra layer on top of it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2001 16:35:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First version of multi-key index support for GiST "
},
{
"msg_contents": "\n> What is the point of the macro\n> \n> #define ATTGET(itup, Rel, i, isnull ) ((char*)( \\\n> \t\t( IndexTupleSize(itup) == sizeof(IndexTupleData) ) ? \\\n> \t\t\t\t*(isnull)=true, NULL \\\n> \t\t\t: \\\n> \t\t\tindex_getattr(itup, i, (Rel)->rd_att, isnull) \\\n> \t))\n> \n> It appears to me that index_getattr should handle an all-NULL index\n> tuple just fine by itself --- certainly the btree code expects it to.\n> So I do not see the reason for this extra layer on top of it.\n\n\n\nYou are right. It can be removed or replaced to\n#define ATTGET(itup, Rel, i, isnull ) (char*)( index_getattr(itup, i, (Rel)->rd_att, isnull) )\n\nThe point was that in gist_tuple_replacekey (called from gistPageAddItem) key may be \n\n\nreplaced by null value, but flag itup->t_info & INDEX_NULL_MASK is not set. \n\n\nNow we don't use gistPageAddItem (\n\n\nsee http://fts.postgresql.org/db/mw/msg.html?mid=118707).\n\nThis is our oversight.\n\n \n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Thu, 31 May 2001 12:57:07 +0400",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: First version of multi-key index support for GiST"
},
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> The point was that in gist_tuple_replacekey (called from \n> gistPageAddItem) key may be replaced by null value, but flag \n> itup->t_info & INDEX_NULL_MASK is not set.\n\nAh. That's certainly a bug.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 09:38:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First version of multi-key index support for GiST "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> We have implemented multi-key index support for GiST. Patch is available\n> from http://www.sai.msu.su/~megera/postgres/gist/code/7.1.2/patch_multikeygist.7.1.2.gz\n\nI have committed these changes, along with your leak patch of 5/30.\n\n> 1. initdb is required. But, it's possible just to execute update\n> update pg_am set amstrategies = 12 where amname = 'gist';\n\nNo initdb is needed --- I fixed the code instead ;-)\n\n> 2. You have to recompile all gist_*_ops functions\n\nI bit the bullet and fixed all the places that were using \"char*\" where\nthey should have been using \"Datum\". This doesn't completely free GIST\nfrom datatype assumptions: it still assumes that all datatypes it deals\nwith will be pass-by-reference. But it's a step forward. This means\nnot only a recompile but code changes for any user-supplied GIST ops.\nI applied the appropriate changes to everything that's in contrib\n(including your new RTREE emulation code).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 14:35:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First version of multi-key index support for GiST "
},
{
"msg_contents": "On Thu, 31 May 2001, Tom Lane wrote:\n\n>\n> they should have been using \"Datum\". This doesn't completely free GIST\n> from datatype assumptions: it still assumes that all datatypes it deals\n> with will be pass-by-reference. But it's a step forward. This means\n\nI'm afraid this problem is connected with the problem of index_formtuple -\nall keys (even btree_ops) are greater than 4 bytes, so it's impossible\nto pass them by value. They should be pass-by-reference.\nSo, probably functions gistindex and gistbuild should be modified for\ntranslation from pass-by-value to pass-by-reference. Your comments ?\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 1 Jun 2001 12:41:53 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: First version of multi-key index support for GiST "
}
] |
[
{
"msg_contents": "psql -c 'select * from pg_class; select * from no_such_table;'\n\nShouldn't this at least give me the result of the first select before\naborting the second? Moreover, shouldn't\n\npsql -c 'select * from no_such_table; select * from pg_class;'\n\ngive me the result of the second select?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 26 May 2001 16:23:13 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Parser abort ignoring following commands"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> psql -c 'select * from pg_class; select * from no_such_table;'\n> Shouldn't this at least give me the result of the first select before\n> aborting the second?\n\nThe behavior you are complaining of is not the backend's fault.\nThe reason it acts that way is that psql is feeding the entire -c \nstring to the backend as one query, and not paying any attention\nto the possibility that multiple query results might be available\nfrom the string.\n\nIt's a little bit inconsistent that psql feeds a -c string to the\nbackend as one query, whereas the same input line fed to it from a file\nor terminal would be split into per-statement queries. However,\nI'd vote against changing it, since (a) you might break existing\napplications that depend on this behavior, and (b) if you do that then\nthere will be no way to exercise multi-statement query strings from\npsql.\n\nIt might possibly make sense for psql to stop using PQexec() but\ninstead use PQsendQuery() and PQgetResult(), so that it could\nhandle multiple results coming back from a single query string.\nIf you did that then the first example would work the way you want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 May 2001 11:12:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parser abort ignoring following commands "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > psql -c 'select * from pg_class; select * from no_such_table;'\n> > Shouldn't this at least give me the result of the first select before\n> > aborting the second?\n>\n> The behavior you are complaining of is not the backend's fault.\n> The reason it acts that way is that psql is feeding the entire -c\n> string to the backend as one query, and not paying any attention\n> to the possibility that multiple query results might be available\n> from the string.\n\nNo, I think there is another problem. How about something without\nselects:\n\n$ psql -c 'delete from pk; delete from xx;'\nERROR: Relation 'xx' does not exist\n\n\"pk\" exists, but nothing is deleted.\n\n$ psql -c 'drop user joe; drop user foo;'\nERROR: DROP USER: user \"foo\" does not exist\n\nUser \"joe\" exists, but it is not dropped.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 26 May 2001 17:31:28 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Parser abort ignoring following commands "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> No, I think there is another problem. How about something without\n> selects:\n\n> $ psql -c 'delete from pk; delete from xx;'\n> ERROR: Relation 'xx' does not exist\n\n> \"pk\" exists, but nothing is deleted.\n\nSure, because the transaction is rolled back. The whole string\nis executed in one transaction. You will definitely break existing\napplications if you change that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 May 2001 11:36:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parser abort ignoring following commands "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > No, I think there is another problem. How about something without\n> > selects:\n>\n> > $ psql -c 'delete from pk; delete from xx;'\n> > ERROR: Relation 'xx' does not exist\n>\n> > \"pk\" exists, but nothing is deleted.\n>\n> Sure, because the transaction is rolled back. The whole string\n> is executed in one transaction. You will definitely break existing\n> applications if you change that.\n\nApplications that rely on this behaviour are broken. It was always said\nthat statements are in their own transaction block unless in an explicit\nBEGIN/COMMIT block. A statement is defined to end at the semicolon, not\nat the end of the string you submit to PQexec().\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 26 May 2001 17:57:16 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Parser abort ignoring following commands "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Sure, because the transaction is rolled back. The whole string\n>> is executed in one transaction. You will definitely break existing\n>> applications if you change that.\n\n> Applications that rely on this behaviour are broken. It was always said\n> that statements are in their own transaction block unless in an explicit\n> BEGIN/COMMIT block. A statement is defined to end at the semicolon, not\n> at the end of the string you submit to PQexec().\n\nAu contraire: single query strings have always been executed as single\ntransactions. Whether that would be the most consistent behavior in a\ngreen field is quite irrelevant. We *cannot* change it now, or we will\nbreak existing applications --- silently.\n\nIf you can find something in the documentation that states what you\nclaim is the definition, I'll gladly change it ;-).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 May 2001 12:14:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parser abort ignoring following commands "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Tom Lane writes:\n> \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > No, I think there is another problem. How about something without\n> > > selects:\n> >\n> > > $ psql -c 'delete from pk; delete from xx;'\n> > > ERROR: Relation 'xx' does not exist\n> >\n> > > \"pk\" exists, but nothing is deleted.\n> >\n> > Sure, because the transaction is rolled back. The whole string\n> > is executed in one transaction. You will definitely break existing\n> > applications if you change that.\n> \n> Applications that rely on this behaviour are broken. It was always said\n> that statements are in their own transaction block unless in an explicit\n> BEGIN/COMMIT block. A statement is defined to end at the semicolon, not\n> at the end of the string you submit to PQexec().\n\nI guess that this is a multi-command statement ?\n\nIt has always been so, except that psql seems to do some parsing and\nissue \neach command to backend separately.\n\n----------------\nHannu\n",
"msg_date": "Sat, 26 May 2001 23:50:58 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Parser abort ignoring following commands"
},
{
"msg_contents": "On Sat, May 26, 2001 at 05:57:16PM +0200, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > No, I think there is another problem. How about something without\n> > > selects:\n> >\n> > > $ psql -c 'delete from pk; delete from xx;'\n> > > ERROR: Relation 'xx' does not exist\n> >\n> > > \"pk\" exists, but nothing is deleted.\n> >\n> > Sure, because the transaction is rolled back. The whole string\n> > is executed in one transaction. You will definitely break existing\n> > applications if you change that.\n> \n> Applications that rely on this behaviour are broken. It was always said\n> that statements are in their own transaction block unless in an explicit\n> BEGIN/COMMIT block. A statement is defined to end at the semicolon, not\n> at the end of the string you submit to PQexec().\n\n\tYou put semicolons at the end of your strings to PQexec()?\n\n-- \nAdam Haberlach | At either end of the social spectrum there lies\nadam@newsnipple.com | a leisure class. -- Eric Beck 1965 \nhttp://www.newsnipple.com |\n'88 EX500 '00 >^< | http://youlook.org\n",
"msg_date": "Sat, 26 May 2001 19:00:44 -0700",
"msg_from": "Adam Haberlach <adam@newsnipple.com>",
"msg_from_op": false,
"msg_subject": "Re: Parser abort ignoring following commands"
}
] |
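[Editor's note: the semantics Tom defends, that a whole `psql -c` string is one query string and therefore one transaction, can be modeled in a few lines. This is a hypothetical simulation, not PostgreSQL code: effects accumulate in a transaction-local copy and are applied only if every statement succeeds, so the failing `delete from xx` also undoes `delete from pk`.]

```python
def exec_query_string(statements, database):
    """Toy model of a multi-statement query string run as one transaction.

    statements: list of ("delete", table_name) tuples.
    database:   dict mapping table name -> list of rows.
    Returns (new_database, error): on error, new_database is unchanged.
    """
    pending = dict(database)             # transaction-local copy
    for op, table in statements:
        if op != "delete":
            raise ValueError("unsupported statement: %s" % op)
        if table not in pending:
            # ERROR aborts the transaction: discard all pending work
            return database, "ERROR: Relation '%s' does not exist" % table
        pending[table] = []              # delete all rows
    return pending, None                 # implicit commit at end of string
```

This reproduces Peter's observation: `delete from pk; delete from xx;` leaves `pk` intact, because the second statement's error rolls back the first.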
[
{
"msg_contents": "The docs seem a little sketchy but the source implies that there is\na difference between new and old style triggers. I can't seem to find\nthis difference. I tried following the only current example I could\nfind but I get a core dump. Here is the backtrace.\n\n#0 0x0 in ?? ()\n#1 0x8136052 in fmgr_oldstyle (fcinfo=0xbfbfd134) at fmgr.c:433\n#2 0x80b6b2d in ExecCallTriggerFunc (trigger=0x82ce030, trigdata=0xbfbfd1c8, \n per_tuple_context=0x827acf0) at trigger.c:865\n#3 0x80b6e15 in ExecBRUpdateTriggers (estate=0x82fbf50, tupleid=0xbfbfd264, \n newtuple=0x83094a8) at trigger.c:1008\n#4 0x80bcf3a in ExecReplace (slot=0x8308018, tupleid=0xbfbfd264, \n estate=0x82fbf50) at execMain.c:1416\n#5 0x80bcc82 in ExecutePlan (estate=0x82fbf50, plan=0x82fbec8, \n operation=CMD_UPDATE, numberTuples=0, direction=ForwardScanDirection, \n destfunc=0x83092f0) at execMain.c:1127\n#6 0x80bc27b in ExecutorRun (queryDesc=0x8307cc0, estate=0x82fbf50, \n feature=3, count=0) at execMain.c:233\n#7 0x80ff29b in ProcessQuery (parsetree=0x82f6e20, plan=0x82fbec8, \n dest=Remote) at pquery.c:277\n#8 0x80fddd3 in pg_exec_query_string (\n query_string=0x82f6030 \"UPDATE bgroup SET actypid = 1,ddate = NULL,edate = '2001-07-23',mail = '',ftp = '',dns = 'f',bname = 'jpsantos',bgdesc = '',isp = 'VEX',pdate = '2001-04-23',pmon = 5,bgroup_active = 't',sdate = '1999-\"..., \n dest=Remote, parse_context=0x827a720) at postgres.c:808\n#9 0x80fedd1 in PostgresMain (argc=4, argv=0xbfbfd4c8, real_argc=9, \n real_argv=0xbfbfdc54, username=0x826325d \"darcy\") at postgres.c:1905\n#10 0x80ea2ad in DoBackend (port=0x8263000) at postmaster.c:2114\n#11 0x80e9eb2 in BackendStartup (port=0x8263000) at postmaster.c:1897\n#12 0x80e91aa in ServerLoop () at postmaster.c:995\n#13 0x80e8b7c in PostmasterMain (argc=9, argv=0xbfbfdc54) at postmaster.c:685\n#14 0x80caff4 in main (argc=9, argv=0xbfbfdc54) at main.c:175\n#15 0x806bc81 in ___start ()\n\nIt gets to fmgr_oldstyle() and then dies from a jump to a 
null pointer.\nCan someone please tell me how to make my function a newstyle one. I\nwill update the docs once I have it working.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sun, 27 May 2001 10:56:13 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "New/old style trigger API"
},
{
"msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> It gets to fmgr_oldstyle() and then dies from a jump to a null pointer.\n> Can someone please tell me how to make my function a newstyle one.\n\nPerhaps you forgot the PG_FUNCTION_INFO_V1 declaration? See\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/trigger-examples.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 May 2001 13:11:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New/old style trigger API "
},
{
"msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> > It gets to fmgr_oldstyle() and then dies from a jump to a null pointer.\n> > Can someone please tell me how to make my function a newstyle one.\n> \n> Perhaps you forgot the PG_FUNCTION_INFO_V1 declaration? See\n> http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/trigger-examples.html\n\nYah, I was looking at chapter 22 on SPI programming. I assume that the\nsame should apply there. Shall I go ahead and add it to the docs in that\nchapter as well?\n\nI wonder if there is a relation to another problem I am having. On AIX\nI can compile my chkpass function (in contrib which I just updated) OK\nbut when I use it I get a similar core dump there that I don't see on\nNetBSD. Does it require the same interface? Shall I update those pages\nas well?\n\nBTW, here is the link command I needed to get as far as I did.\n\n ld -G -o $@ $< -bexpall -bnoentry -lc\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sun, 27 May 2001 15:14:27 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: New/old style trigger API"
},
{
"msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> Yah, I was looking at chapter 22 on SPI programming. I assume that the\n> same should apply there. Shall I go ahead and add it to the docs in that\n> chapter as well?\n\nUh ... where? I see nothing related to trigger programming in chapter 22.\nThere's an example of an old-style function at\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/spi-examples.html\nbut it's not a trigger.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 May 2001 15:50:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New/old style trigger API "
},
{
"msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> > Yah, I was looking at chapter 22 on SPI programming. I assume that the\n> > same should apply there. Shall I go ahead and add it to the docs in that\n> > chapter as well?\n> \n> Uh ... where? I see nothing related to trigger programming in chapter 22.\n> There's an example of an old-style function at\n> http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/spi-examples.html\n> but it's not a trigger.\n\nAh. So PG_FUNCTION_INFO_V1() is strictly for triggers then. The name\nsuggested it was a little more global than that.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sun, 27 May 2001 16:10:36 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: New/old style trigger API"
},
{
"msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n>> There's an example of an old-style function at\n>> http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/spi-examples.html\n>> but it's not a trigger.\n\n> Ah. So PG_FUNCTION_INFO_V1() is strictly for triggers then.\n\nNo, it's for new-style functions. See\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/xfunc-c.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 May 2001 16:37:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New/old style trigger API "
},
{
"msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> >> There's an example of an old-style function at\n> >> http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/spi-examples.html\n> >> but it's not a trigger.\n> \n> > Ah. So PG_FUNCTION_INFO_V1() is strictly for triggers then.\n> \n> No, it's for new-style functions. See\n> http://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/xfunc-c.html\n\nI'm so confused. :-)\n\nOK, so all functions can be upgraded to version 1 calling sequence, not\njust ones used for triggers. I think I have it but the docs could still\nuse some work. If someone wants to check out the latest version of my\nchkpass function in contrib and let me know if I seem to have the right\nidea now then I will go ahead and try to improve the docs.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 28 May 2001 11:48:00 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: New/old style trigger API"
},
{
"msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> OK, so all functions can be upgraded to version 1 calling sequence, not\n> just ones used for triggers.\n\nNow you're getting there. Actually triggers *must* be upgraded to the\nnew calling sequence, because we don't support old-style triggers\nanymore. But for other functions it's optional.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2001 12:06:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New/old style trigger API "
},
{
"msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> > OK, so all functions can be upgraded to version 1 calling sequence, not\n> > just ones used for triggers.\n> \n> Now you're getting there. Actually triggers *must* be upgraded to the\n> new calling sequence, because we don't support old-style triggers\n> anymore. But for other functions it's optional.\n\nOK, I am going to try to find time to update the other pages. In particular\nI will update the examples and add the list of macros.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 28 May 2001 15:07:34 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: New/old style trigger API"
}
] |
[
{
"msg_contents": "When trying write a function in plpgsql I'm getting behavior that\nprobably isn't the corect one.\n\nin the function bellow:\n\n-----\n-- split the given key\ncreate function dad_char_key_split(\n varchar, -- char_key\n integer, -- subkey_len\n char -- separator\n) returns varchar as '\ndeclare\n p_char_key alias for $1;\n p_subkey_len alias for $2;\n p_separator alias for $3;\n v_key_len integer;\n v_from integer;\n v_result varchar;\n v_sep char;\nbegin\n v_result := '''';\n v_sep := '''';\n v_from := 1;\n v_key_len := char_length(p_char_key);\n for i in 1..(v_key_len/p_subkey_len) loop\n v_result := v_result || v_sep ||\nsubstr(p_char_key,v_from,p_subkey_len);\n v_sep := p_separator;\n v_from := v_from + p_subkey_len;\n end loop;\n return v_result;\nend;' language 'plpgsql';\n----\n\nif I try this:\n\nselect dad_char_key_split('00kjoi',2,',');\n\nI get this result:\n\n\",kj,oi\"\n\nAnd when I change the initialization of the variables \"v_sep\" and\n\"v_result\" from empty strings to a space ('' '' istead of '''') I get\nthe expected result:\n\n\"00,kj,oi\"\n\nIt seems that plpgsql treats empty strings as null so when concatenating\nwith a empty string we get null instead of some value.\n",
"msg_date": "Sun, 27 May 2001 19:00:37 +0200",
"msg_from": "Domingo Alvarez Duarte <domingo@dad-it.com>",
"msg_from_op": true,
"msg_subject": "maybe a bug in plpgsql, nulls and empty strings are not the same"
},
{
"msg_contents": "Domingo Alvarez Duarte <domingo@dad-it.com> writes:\n> When trying write a function in plpgsql I'm getting behavior that\n> probably isn't the corect one.\n\nIt works as expected if you declare v_sep as varchar rather than char.\n\nI think plpgsql may be interpreting\n\tv_sep char;\nas declaring v_sep to be the internal 1-byte \"char\" type, not char(n)\nwith unspecified length as you are expecting. There's definitely\nsomething strange going on with the assignment\n\tv_sep := '''';\n\nIn any case it's a tad bizarre to use char rather than varchar for\nsomething that you intend to have varying width, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2001 12:02:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: maybe a bug in plpgsql, nulls and empty strings are not the same "
}
] |
[
{
"msg_contents": "I have an IP Address allocation system that uses a networks table like\nso:\n\nCREATE TABLE \"networks\" (\n \"netblock\" cidr,\n \"router\" integer,\n \"interface\" character varying(256),\n \"dest_ip\" inet,\n \"mis_token\" character(16),\n \"assigned_date\" date,\n \"assigned_by\" character varying(256),\n \"justification_now\" integer,\n \"justification_1yr\" integer,\n \"cust_asn\" integer,\n \"comments\" character varying(2048),\n \"other_reference\" character varying(256),\n \"parent_asn\" integer,\n \"status\" integer,\n \"purpose\" integer,\n \"last_update_by\" character varying(256),\n \"last_update_at\" timestamp with time zone,\n \"customer_reference\" integer\n);\n\nWhen I go looking for an available netblock, I do the following query:\n\nBEGIN TRANSACTION;\nSELECT host(netblock),masklen(netblock),netblock,netmask(netblock) \nFROM networks \nWHERE parent_asn=xxxx AND \nstatus=get_status_code('available') AND \nmasklen(netblock) = xxx FOR UPDATE LIMIT 1;\n\n(if this fails, we go looking for a /24 to bust up, and if we can find\nthat we add new available rows for that, and retry this query). \n\n\nget_status_code is a function to look up a number based on text in\nanother table (not marked cacheable at the moment, but should it be? )\n\n\nMy questions are:\n\n1) if this code is running twice for the same size block what will\nhappen ?\n\n2) what can I do to make this more efficient?\n\nthe table will contain ~5000 rows to begin with. \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 27 May 2001 12:37:05 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "query/locking/efficiency question"
}
] |
[
{
"msg_contents": "Just tried to update one of my dev boxes to 7.2devel...\n\n\ncc -O -K inline -I../../../src/interfaces/libpq -I../../../src/include -I/usr/local/include -c -o describe.o describe.c\nUX:acomp: ERROR: \"describe.c\", line 928: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 929: undefined symbol: u\nUX:acomp: ERROR: \"describe.c\", line 929: undefined struct/union member: usesysid\nUX:acomp: WARNING: \"describe.c\", line 929: left operand of \".\" must be struct/union object\nUX:acomp: ERROR: \"describe.c\", line 929: Syntax error before or at: AS\nUX:acomp: ERROR: \"describe.c\", line 929: invalid source character: '\\'\nUX:acomp: ERROR: \"describe.c\", line 929: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 930: invalid source character: '\\'\nUX:acomp: ERROR: \"describe.c\", line 930: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 931: invalid source character: '\\'\nUX:acomp: ERROR: \"describe.c\", line 931: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 932: invalid source character: '\\'\nUX:acomp: ERROR: \"describe.c\", line 932: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 939: Syntax error before or at: \"ORDER BY \\\"User Name\\\"\\n\"\nUX:acomp: WARNING: \"describe.c\", line 939: declaration missing specifiers: assuming \"int\"\nUX:acomp: ERROR: \"describe.c\", line 939: identifier redeclared: strcat\nUX:acomp: WARNING: \"describe.c\", line 941: declaration missing specifiers: assuming \"int\"\nUX:acomp: ERROR: \"describe.c\", line 941: undefined symbol: buf\nUX:acomp: WARNING: \"describe.c\", line 941: improper pointer/integer combination: arg #1\nUX:acomp: WARNING: \"describe.c\", line 941: improper pointer/integer combination: op \"=\"\nUX:acomp: ERROR: \"describe.c\", line 941: non-constant initializer: op \"CALL\"\nUX:acomp: ERROR: \"describe.c\", line 942: Syntax error before or at: if\nUX:acomp: ERROR: \"describe.c\", line 942: syntax error, 
probably missing \",\", \";\" or \"=\"\nUX:acomp: ERROR: \"describe.c\", line 942: identifier redefined: res\nUX:acomp: ERROR: \"describe.c\", line 942: cannot recover from previous errors\ngmake[3]: *** [describe.o] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/psql'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\nConfigure input:\n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n\nAny ideas?\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 27 May 2001 16:28:16 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Compile Issue, current CVS/UnixWare 7.1.1"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> cc -O -K inline -I../../../src/interfaces/libpq -I../../../src/include -I/usr/local/include -c -o describe.o describe.c\n> UX:acomp: ERROR: \"describe.c\", line 928: newline in string literal\n\nIt seems that newer compilers don't like multi-line string literals. I\njust ran into that same problem with GCC 3.0. A fix has been committed.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 27 May 2001 23:55:35 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Compile Issue, current CVS/UnixWare 7.1.1"
}
] |
[
{
"msg_contents": "I fixed it:\n\nIndex: describe.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/psql/describe.c,v\nretrieving revision 1.31\ndiff -c -r1.31 describe.c\n*** describe.c\t2001/05/09 17:57:42\t1.31\n--- describe.c\t2001/05/27 21:32:27\n***************\n*** 925,935 ****\n \theaders[cols] = NULL;\n \n \tstrcpy(buf,\n! \t\t \"SELECT u.usename AS \\\"User Name\\\"\\n\n! , u.usesysid AS \\\"User ID\\\"\\n\n! , u.usesuper AS \\\"Super User\\\"\\n\n! , u.usecreatedb AS \\\"Create DB\\\"\\n\n! FROM pg_user u\\n\");\n \tif (name)\n \t{\n \t\tstrcat(buf, \" WHERE u.usename ~ '^\");\n--- 925,935 ----\n \theaders[cols] = NULL;\n \n \tstrcpy(buf,\n! \t\t \"SELECT u.usename AS \\\"User Name\\\"\\n\"\n! \", u.usesysid AS \\\"User ID\\\"\\n\"\n! \", u.usesuper AS \\\"Super User\\\"\\n\"\n! \", u.usecreatedb AS \\\"Create DB\\\"\\n\"\n! \"FROM pg_user u\\n\");\n \tif (name)\n \t{\n \t\tstrcat(buf, \" WHERE u.usename ~ '^\");\n----- Forwarded message from Larry Rosenman <ler@lerctr.org> -----\n\nFrom: Larry Rosenman <ler@lerctr.org>\nSubject: Compile Issue, current CVS/UnixWare 7.1.1\nDate: Sun, 27 May 2001 16:28:16 -0500\nMessage-ID: <20010527162816.A8989@lerami.lerctr.org>\nUser-Agent: Mutt/1.3.18i\nX-Mailer: Mutt http://www.mutt.org/\nTo: PostgreSQL Hackers List <pgsql-hackers@postgresql.org>\nCc: PostgreSQL Bugs List <pgsql-bugs@postgresql.org>\n\nJust tried to update one of my dev boxes to 7.2devel...\n\n\ncc -O -K inline -I../../../src/interfaces/libpq -I../../../src/include -I/usr/local/include -c -o describe.o describe.c\nUX:acomp: ERROR: \"describe.c\", line 928: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 929: undefined symbol: u\nUX:acomp: ERROR: \"describe.c\", line 929: undefined struct/union member: usesysid\nUX:acomp: WARNING: \"describe.c\", line 929: left operand of \".\" must be struct/union object\nUX:acomp: ERROR: \"describe.c\", line 929: Syntax error 
before or at: AS\nUX:acomp: ERROR: \"describe.c\", line 929: invalid source character: '\\'\nUX:acomp: ERROR: \"describe.c\", line 929: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 930: invalid source character: '\\'\nUX:acomp: ERROR: \"describe.c\", line 930: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 931: invalid source character: '\\'\nUX:acomp: ERROR: \"describe.c\", line 931: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 932: invalid source character: '\\'\nUX:acomp: ERROR: \"describe.c\", line 932: newline in string literal\nUX:acomp: ERROR: \"describe.c\", line 939: Syntax error before or at: \"ORDER BY \\\"User Name\\\"\\n\"\nUX:acomp: WARNING: \"describe.c\", line 939: declaration missing specifiers: assuming \"int\"\nUX:acomp: ERROR: \"describe.c\", line 939: identifier redeclared: strcat\nUX:acomp: WARNING: \"describe.c\", line 941: declaration missing specifiers: assuming \"int\"\nUX:acomp: ERROR: \"describe.c\", line 941: undefined symbol: buf\nUX:acomp: WARNING: \"describe.c\", line 941: improper pointer/integer combination: arg #1\nUX:acomp: WARNING: \"describe.c\", line 941: improper pointer/integer combination: op \"=\"\nUX:acomp: ERROR: \"describe.c\", line 941: non-constant initializer: op \"CALL\"\nUX:acomp: ERROR: \"describe.c\", line 942: Syntax error before or at: if\nUX:acomp: ERROR: \"describe.c\", line 942: syntax error, probably missing \",\", \";\" or \"=\"\nUX:acomp: ERROR: \"describe.c\", line 942: identifier redefined: res\nUX:acomp: ERROR: \"describe.c\", line 942: cannot recover from previous errors\ngmake[3]: *** [describe.o] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin/psql'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/bin'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\nConfigure input:\n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql 
--enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n\nAny ideas?\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n----- End forwarded message -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 27 May 2001 16:33:29 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "(forw) Compile Issue, current CVS/UnixWare 7.1.1"
}
] |
[
{
"msg_contents": "Playing with the earthdistance stuff, and found I needed the following \npatch:\n\n\nIndex: Makefile\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/contrib/earthdistance/Makefile,v\nretrieving revision 1.8\ndiff -c -r1.8 Makefile\n*** Makefile\t2001/02/20 19:20:27\t1.8\n--- Makefile\t2001/05/27 22:14:41\n***************\n*** 15,21 ****\n all: $(SONAME) $(NAME).sql\n \n $(NAME).sql: $(NAME).sql.in\n! \tsed -e 's:MODULE_PATHNAME:$(datadir)/contrib/$(SONAME):g' < $< > $@\n \n install: all installdirs\n \t$(INSTALL_SHLIB) $(SONAME)\t$(libdir)/contrib\n--- 15,21 ----\n all: $(SONAME) $(NAME).sql\n \n $(NAME).sql: $(NAME).sql.in\n! \tsed -e 's:MODULE_PATHNAME:$(libdir)/contrib/$(SONAME):g' < $< > $@\n \n install: all installdirs\n \t$(INSTALL_SHLIB) $(SONAME)\t$(libdir)/contrib\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 27 May 2001 17:17:06 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "/contrib/earthdistance/Makefile patch"
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it withing the next 48 hours.\n\n> Playing with the earthdistance stuff, and found I needed the following \n> patch:\n> \n> \n> Index: Makefile\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/contrib/earthdistance/Makefile,v\n> retrieving revision 1.8\n> diff -c -r1.8 Makefile\n> *** Makefile\t2001/02/20 19:20:27\t1.8\n> --- Makefile\t2001/05/27 22:14:41\n> ***************\n> *** 15,21 ****\n> all: $(SONAME) $(NAME).sql\n> \n> $(NAME).sql: $(NAME).sql.in\n> ! \tsed -e 's:MODULE_PATHNAME:$(datadir)/contrib/$(SONAME):g' < $< > $@\n> \n> install: all installdirs\n> \t$(INSTALL_SHLIB) $(SONAME)\t$(libdir)/contrib\n> --- 15,21 ----\n> all: $(SONAME) $(NAME).sql\n> \n> $(NAME).sql: $(NAME).sql.in\n> ! \tsed -e 's:MODULE_PATHNAME:$(libdir)/contrib/$(SONAME):g' < $< > $@\n> \n> install: all installdirs\n> \t$(INSTALL_SHLIB) $(SONAME)\t$(libdir)/contrib\n> \n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 27 May 2001 20:15:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: /contrib/earthdistance/Makefile patch"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n\n> Playing with the earthdistance stuff, and found I needed the following \n> patch:\n> \n> \n> Index: Makefile\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/contrib/earthdistance/Makefile,v\n> retrieving revision 1.8\n> diff -c -r1.8 Makefile\n> *** Makefile\t2001/02/20 19:20:27\t1.8\n> --- Makefile\t2001/05/27 22:14:41\n> ***************\n> *** 15,21 ****\n> all: $(SONAME) $(NAME).sql\n> \n> $(NAME).sql: $(NAME).sql.in\n> ! \tsed -e 's:MODULE_PATHNAME:$(datadir)/contrib/$(SONAME):g' < $< > $@\n> \n> install: all installdirs\n> \t$(INSTALL_SHLIB) $(SONAME)\t$(libdir)/contrib\n> --- 15,21 ----\n> all: $(SONAME) $(NAME).sql\n> \n> $(NAME).sql: $(NAME).sql.in\n> ! \tsed -e 's:MODULE_PATHNAME:$(libdir)/contrib/$(SONAME):g' < $< > $@\n> \n> install: all installdirs\n> \t$(INSTALL_SHLIB) $(SONAME)\t$(libdir)/contrib\n> \n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 08:58:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: /contrib/earthdistance/Makefile patch"
}
] |
[
{
"msg_contents": "Good day,\n\nI'm experiencing a truly strange issue with the libpq++ interface when I\nrun it within an Apache process, and I was hoping that perhaps someone\nelse has run into this.\n\nInside of an Apache module I am developing, I need to make calls to\nPostgreSQL, so I am building it with the libpq++ interface. It's worked\nfine thus far, because the module is designed to keep a persistent\nconnection to PostgreSQL by having a PgDatabase* object that lives and\ndies with each httpd process.\n\nHowever, if I try to instantiate a PgDatabase* connection temporarily\nwithin the httpd process, even though I delete the pointer to call the\ndestructor of the Object, the postmaster connection *continues to hang\naround*, well after the request has finished processing, and well after\nthe call to its destructor is made.\n\nI thought this might have something to do with the kind of convoluted way\nthat I'm linking my Apache module (which is written in C++, while Apache\nis compiled in C) so I wrote a test case of a simple console application\nthat is written in C, but calls to object code compiled in C++ with\nlibpq++, and it behaved correctly. I was able to call destructors fine,\nand memory was instantly returned and the connection closed.\n\nSo... I'm really at a loss here on what's going on, and wondering if\nanyone has some insight? I can give more technical details if necessary,\nthat's just the gist of the problem.\n\n\nThanks, and Regards,\nJw.\n--\nJohn Worsley - Command Prompt, Inc.\njlx@commandprompt.com By way of pgsql-interfaces@commandprompt.com\n\n\n",
"msg_date": "Sun, 27 May 2001 19:22:03 -0700 (PDT)",
"msg_from": "<pgsql-interfaces@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "libpq++ in Apache Problem."
},
{
"msg_contents": "Hello all,\n\nInspired by others who have recently gotten PostgreSQL functions to return\nsets, I set out to create my own. I have on more than one occasion wished I\ncould run a query across databases or servers (similar to a dblink in Oracle\nor a linked db in MSSQL). Attached is a C file which is my attempt. It\nexports two functions:\n\ndblink(text connect_string, text sql, text field_separator)\ndblink_tok(text delimited_text, text field_separator, int ord_position)\n\nThe functions are used as shown in the following example:\n\nselect\n dblink_tok(t1.f1,'~',0)::int as vl_id\n ,dblink_tok(t1.f1,'~',1)::text as vl_guid\n ,dblink_tok(t1.f1,'~',2)::text as vl_pri_email\n ,dblink_tok(t1.f1,'~',3)::text as vl_acct_pass_phrase\n ,dblink_tok(t1.f1,'~',4)::text as vl_email_relname\n ,dblink_tok(t1.f1,'~',5)::text as vl_hwsn_relname\n ,dblink_tok(t1.f1,'~',6)::timestamp as vl_mod_dt\n ,dblink_tok(t1.f1,'~',7)::int as vl_status\nfrom\n (select dblink('host=192.168.5.150 port=5432 dbname=vsreg_001 user=postgres\npassword=postgres','select * from vs_lkup','~') as f1) as t1\n\nBy doing \"create view vs_lkup_rm as . . .\" with the above query, from a\ndatabase on another server, I can then write:\n\"select * from vs_lkup\" and get results just as if I were on 192.168.5.150\n(sort of -- see problem below).\n\nI have one question, and one problem regarding this.\n\nFirst the question: is there any way to get the dblink function to return\nsetof composite -- i.e. return tuples instead of scalar values? 
The\ndocumentation indicates that a function can return a composite type, but my\nattempts all seemed to produce only pointer values (to the tuples?)\n\nNow the problem: as I stated above, \"select * from vs_lkup\" returns results\njust as if I were on 192.168.5.150 -- but if I try \"select * from vs_lkup\nWHERE vl_id = 1\" or \"select * from vs_lkup WHERE vl_pri_email in\n('email1@foo.com')\" I get the following error message: \"ERROR: Set-valued\nfunction called in context that cannot accept a set\". Any ideas how to work\naround this?\n\nThanks,\n\nJoe Conway",
"msg_date": "Sun, 27 May 2001 22:02:17 -0700",
"msg_from": "\"Joe Conway\" <joe.conway@mail.com>",
"msg_from_op": false,
"msg_subject": "remote database queries"
},
{
"msg_contents": "[ Redirected away from the entirely inappropriate pgsql-interfaces list ]\n\n\"Joe Conway\" <joe.conway@mail.com> writes:\n> Inspired by others who have recently gotten PostgreSQL functions to return\n> sets, I set out to create my own. I have on more than one occasion wished I\n> could run a query across databases or servers (similar to a dblink in Oracle\n> or a linked db in MSSQL). Attached is a C file which is my attempt.\n\n> select\n> dblink_tok(t1.f1,'~',0)::int as vl_id\n> ,dblink_tok(t1.f1,'~',1)::text as vl_guid\n> ,dblink_tok(t1.f1,'~',2)::text as vl_pri_email\n> ,dblink_tok(t1.f1,'~',3)::text as vl_acct_pass_phrase\n> ,dblink_tok(t1.f1,'~',4)::text as vl_email_relname\n> ,dblink_tok(t1.f1,'~',5)::text as vl_hwsn_relname\n> ,dblink_tok(t1.f1,'~',6)::timestamp as vl_mod_dt\n> ,dblink_tok(t1.f1,'~',7)::int as vl_status\n> from\n> (select dblink('host=192.168.5.150 port=5432 dbname=vsreg_001 user=postgres\n> password=postgres','select * from vs_lkup','~') as f1) as t1\n\n> By doing \"create view vs_lkup_rm as . . .\" with the above query, from a\n> database on another server, I can then write:\n> \"select * from vs_lkup\" and get results just as if I were on 192.168.5.150\n> (sort of -- see problem below).\n\n> I have one question, and one problem regarding this.\n\n> First the question: is there any way to get the dblink function to return\n> setof composite -- i.e. return tuples instead of scalar values?\n\nIt could return a tuple, but there are notational problems that would\nmake it difficult to do anything useful with the tuple; in particular,\nAFAICS you couldn't retrieve more than one column out of it, so there's\nno point.\n\nUntil we fix that (maybe for 7.2, maybe not) your existing hack is\nprobably pretty reasonable. You could save some cycles by avoiding\nconversion to text, though --- instead return an opaque datum that is\npointer-to-tuple-slot and let the dblink_tok function extract fields\nfrom the tuple. 
Look at SQL function support and the FieldSelect\nexpression node type for inspiration.\n\n> Now the problem: as I stated above, \"select * from vs_lkup\" returns results\n> just as if I were on 192.168.5.150 -- but if I try \"select * from vs_lkup\n> WHERE vl_id = 1\" or \"select * from vs_lkup WHERE vl_pri_email in\n> ('email1@foo.com')\" I get the following error message: \"ERROR: Set-valued\n> function called in context that cannot accept a set\". Any ideas how to work\n> around this?\n\nI think this would work if the planner weren't so enthusiastic about\ntrying to collapse the sub-SELECT query together with the main query.\nUnfortunately it doesn't check to see if any set-valued functions are\ninvolved before it collapses 'em --- leaving you with a set-valued\nfunction call in the WHERE clause. Not sure if this is worth fixing,\nconsidering that using set-valued functions in this way is just a\nband-aid that doesn't have a long life expectancy.\n\nIf you just need a hack with a short life expectancy, here's a hack that\nI recommend not reading right before dinner ... might make you lose your\nappetite ;-). Build the view as a dummy UNION:\n\tcreate view vs_lkup_rm as\n\t\tselect ... from (select dblink(...))\n\t\tunion all\n\t\tselect null, null, ... where false;\nDone this way, the UNION won't change the view's results --- but it will\nprevent the (current version of the) planner from collapsing the view\ntogether with the surrounding query. 
For example:\n\nregression=# create function dblink() returns setof int as\nregression-# 'select f1 from int4_tbl' language 'sql';\nCREATE\nregression=# select dblink();\n ?column?\n-------------\n 0\n 123456\n -123456\n 2147483647\n -2147483647\n(5 rows)\n\nregression=# create view vv1 as\nregression-# select f1 from (select dblink() as f1) t1;\nCREATE\nregression=# select * from vv1 where f1 > 0;\nERROR: Set-valued function called in context that cannot accept a set\nregression=# create view vv2 as\nregression-# select f1 from (select dblink() as f1) t1\nregression-# union all\nregression-# select null where false;\nCREATE\nregression=# select * from vv2;\n f1\n-------------\n 0\n 123456\n -123456\n 2147483647\n -2147483647\n(5 rows)\n\nregression=# select * from vv2 where f1 > 0;\n f1\n------------\n 123456\n 2147483647\n(2 rows)\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2001 17:11:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [INTERFACES] remote database queries "
},
{
"msg_contents": "> [ Redirected away from the entirely inappropriate pgsql-interfaces list ]\n\noops . . . sorry!\n\n> Until we fix that (maybe for 7.2, maybe not) your existing hack is\n> probably pretty reasonable. You could save some cycles by avoiding\n> conversion to text, though --- instead return an opaque datum that is\n> pointer-to-tuple-slot and let the dblink_tok function extract fields\n> from the tuple. Look at SQL function support and the FieldSelect\n> expression node type for inspiration.\n\nThanks -- I'll take a look.\n\n\n> I think this would work if the planner weren't so enthusiastic about\n> trying to collapse the sub-SELECT query together with the main query.\n> Unfortunately it doesn't check to see if any set-valued functions are\n> involved before it collapses 'em --- leaving you with a set-valued\n> function call in the WHERE clause. Not sure if this is worth fixing,\n> considering that using set-valued functions in this way is just a\n> band-aid that doesn't have a long life expectancy.\n\nI'd certainly bow to your wisdom in this area, but I was thinking it would\nbe useful (at times) to force a FROM clause sub-select to be treated as if\nit were a \"real\" table (probably not the best way to express this, but\nhopefully you get the idea). In MSSQL I've found many situations where\nputting intermediate results into a temp table, and then joining to it, is\nsignificantly faster than letting the optimizer do it's best. But the fact\nthat MSSQL will return record sets from a stored procedure help makes this\ntolerable/manageable -- i.e. the whole ugly mess can be rolled into one nice\nneat strored procedure call. If the FROM clause sub-select could be treated,\nin a sense, like an on-the-fly temp table in PostgreSQL, a similar result is\npossible. 
And if the whole ugly mess is rolled up behind a view, no one has\nto know except the especially curious ;-)\n\n>\n> If you just need a hack with a short life expectancy, here's a hack that\n> I recommend not reading right before dinner ... might make you lose your\n> appetite ;-). Build the view as a dummy UNION:\n\n*Very* few things make me lose my appetite -- and this worked perfectly!\nThank you.\n\n-- Joe\n\n",
"msg_date": "Mon, 28 May 2001 19:55:37 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [INTERFACES] remote database queries "
},
{
"msg_contents": "> Until we fix that (maybe for 7.2, maybe not) your existing hack is\n> probably pretty reasonable. You could save some cycles by avoiding\n> conversion to text, though --- instead return an opaque datum that is\n> pointer-to-tuple-slot and let the dblink_tok function extract fields\n> from the tuple. Look at SQL function support and the FieldSelect\n> expression node type for inspiration.\n>\n\nI changed the dblink() function to return a pointer instead of concatenated\ntext, and dblink_tok() to use the pointer. FWIW, a query on a small (85\ntuples) remote (a second PC on a 100baseT subnet) table takes about 34\nmilliseconds (based on show_query_stats) versus about 4 milliseconds when\nrun locally. It actually takes a bit longer (~65 milliseconds) when run\nagainst a second database on the same PC. The original text parsing version\nwas about 25% slower.\n\nAlthough shifting from text parsing to pointer passing is more efficient, I\nhave one more question regarding this -- for now ;) -- is there any way to\ncheck the pointer passed to dblink_tok() to be sure it came from dblink()?\n\nThanks,\n\n-- Joe",
"msg_date": "Wed, 30 May 2001 21:59:52 -0700",
"msg_from": "\"Joe Conway\" <joe.conway@mail.com>",
"msg_from_op": false,
"msg_subject": "remote database queries"
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> > Until we fix that (maybe for 7.2, maybe not) your existing hack is\n> > probably pretty reasonable. You could save some cycles by avoiding\n> > conversion to text, though --- instead return an opaque datum that is\n> > pointer-to-tuple-slot and let the dblink_tok function extract fields\n> > from the tuple. Look at SQL function support and the FieldSelect\n> > expression node type for inspiration.\n> >\n> \n> I changed the dblink() function to return a pointer instead of concatenated\n> text, and dblink_tok() to use the pointer. FWIW, a query on a small (85\n> tuples) remote (a second PC on a 100baseT subnet) table takes about 34\n> milliseconds (based on show_query_stats) versus about 4 milliseconds when\n> run locally. It actually takes a bit longer (~65 milliseconds) when run\n> against a second database on the same PC. The original text parsing version\n> was about 25% slower.\n> \n> Although shifting from text parsing to pointer passing is more efficient, I\n> have one more question regarding this -- for now ;) -- is there any way to\n> check the pointer passed to dblink_tok() to be sure it came from dblink()?\n> \n> Thanks,\n> \n> -- Joe\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 14:09:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remote database queries"
},
{
"msg_contents": "\nAdded to /contrib. Thanks.\n\n> > Until we fix that (maybe for 7.2, maybe not) your existing hack is\n> > probably pretty reasonable. You could save some cycles by avoiding\n> > conversion to text, though --- instead return an opaque datum that is\n> > pointer-to-tuple-slot and let the dblink_tok function extract fields\n> > from the tuple. Look at SQL function support and the FieldSelect\n> > expression node type for inspiration.\n> >\n> \n> I changed the dblink() function to return a pointer instead of concatenated\n> text, and dblink_tok() to use the pointer. FWIW, a query on a small (85\n> tuples) remote (a second PC on a 100baseT subnet) table takes about 34\n> milliseconds (based on show_query_stats) versus about 4 milliseconds when\n> run locally. It actually takes a bit longer (~65 milliseconds) when run\n> against a second database on the same PC. The original text parsing version\n> was about 25% slower.\n> \n> Although shifting from text parsing to pointer passing is more efficient, I\n> have one more question regarding this -- for now ;) -- is there any way to\n> check the pointer passed to dblink_tok() to be sure it came from dblink()?\n> \n> Thanks,\n> \n> -- Joe\n> \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 14 Jun 2001 12:48:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remote database queries"
}
] |
[
{
"msg_contents": "\n> So, may by add to pg_opclass two fields?\n> bool is_varlena_key\n> bool is_lossy_compress\n\nThose are both properties of the compress function, the index method or the key type.\nI do not think it has anything to do with operator classes (comparison functions), \nand thus would be wrong in pg_opclass.\n\nAndreas\n",
"msg_date": "Mon, 28 May 2001 08:59:48 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: GiST index on data types that require compression"
}
] |
[
{
"msg_contents": "\n> > You mean it is restored in session that is running the transaction ?\n\nDepends on what you mean with restored. It first reads the heap page,\nsees that it needs an older version and thus reads it from the \"rollback segment\".\n\n> > \n> > I guess thet it could be slower than our current way of doing it.\n> \n> Yes, for older transactions which *really* need in *particular*\n> old data, but not for newer ones. Look - now transactions have to read\n> dead data again and again, even if some of them (newer) need not to see\n> those data at all, and we keep dead data as long as required for other\n> old transactions *just for the case* they will look there.\n> But who knows?! Maybe those old transactions will not read from table\n> with big amount of dead data at all! So - why keep dead data in datafiles\n> for long time? This obviously affects overall system performance.\n\nYes, that is a good description. And old version is only required in the following \ntwo cases:\n\n1. the txn that modified this tuple is still open (reader in default committed read)\n2. reader is in serializable transaction isolation and has earlier xtid\n\nSeems overwrite smgr has mainly advantages in terms of speed for operations\nother than rollback.\n\nAndreas\n",
"msg_date": "Mon, 28 May 2001 10:02:17 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Plans for solving the VACUUM problem"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > > You mean it is restored in session that is running the transaction ?\n> \n> Depends on what you mean with restored. It first reads the heap page,\n> sees that it needs an older version and thus reads it from the \"rollback segment\".\n\nSo are whole pages stored in rollback segments or just the modified data\n?\n\nStoring whole pages could be very wasteful for tables with small records\nthat \nare often modified.\n\n---------------\nHannu\n",
"msg_date": "Mon, 28 May 2001 11:11:46 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: AW: Plans for solving the VACUUM problem"
},
{
"msg_contents": "> Yes, that is a good description. And old version is only required in the following \n> two cases:\n> \n> 1. the txn that modified this tuple is still open (reader in default committed read)\n> 2. reader is in serializable transaction isolation and has earlier xtid\n> \n> Seems overwrite smgr has mainly advantages in terms of speed for operations\n> other than rollback.\n\n... And rollback is required for < 5% transactions ...\n\nVadim\n\n\n",
"msg_date": "Mon, 28 May 2001 10:11:10 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Plans for solving the VACUUM problem"
},
{
"msg_contents": "> > > > You mean it is restored in session that is running the transaction ?\n> > \n> > Depends on what you mean with restored. It first reads the heap page,\n> > sees that it needs an older version and thus reads it from the \"rollback segment\".\n> \n> So are whole pages stored in rollback segments or just the modified data?\n\nThis is implementation dependent. Storing whole pages is much easy to do,\nbut obviously it's better to store just modified data.\n\nVadim\n\n\n",
"msg_date": "Mon, 28 May 2001 10:15:17 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Plans for solving the VACUUM problem"
},
{
"msg_contents": "Vadim Mikheev wrote:\n> \n> > Yes, that is a good description. And old version is only required in the following\n> > two cases:\n> >\n> > 1. the txn that modified this tuple is still open (reader in default committed read)\n> > 2. reader is in serializable transaction isolation and has earlier xtid\n> >\n> > Seems overwrite smgr has mainly advantages in terms of speed for operations\n> > other than rollback.\n> \n> ... And rollback is required for < 5% transactions ...\n\nThis obviously depends on application. \n\nI know people who rollback most of their transactions (actually they use\nit to \nemulate temp tables when reporting). \n\nOTOH it is possible to do without rolling back at all as MySQL folks\nhave \nshown us ;)\n\nAlso, IIRC, pgbench does no rollbacks. I think that we have no\nperformance test that does.\n\n-----------------\nHannu\n",
"msg_date": "Mon, 28 May 2001 19:41:40 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Plans for solving the VACUUM problem"
}
] |
[
{
"msg_contents": "What ever happened to it? The files are in source/v7.1.2. Did someone\njust forget to make the top level link or are we still waiting on the docs\nto be re-rolled?\n\n- Brandon\n\n\nb. palmer, bpalmer@crimelabs.net\npgp: www.crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Mon, 28 May 2001 10:58:16 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": true,
"msg_subject": "7.1.2?"
}
] |
[
{
"msg_contents": "Tried to rebuild my docs using the CVS tip...\n\n$ gmake\ngmake -C sgml clean\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nrm -f HTML.manifest *.html\nrm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\nrm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\nrm -f HTML.index bookindex.sgml setindex.sgml\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ngmake -C sgml admin.html\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nperl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N\nperl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -x -o setindex.sgml -N\njade -D . -D ./ref -D ./../graphics -c /usr/local/share/sgml/docbook/dsssl/modular/catalog -d stylesheet.dsl -i output-html -t sgml book-decl.sgml admin.sgml\nln -sf admin.html index.html\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ncd sgml && tar -cf ../admin.tar --exclude=Makefile --exclude='*.sgml' --exclude=ref *.html\ngzip -f admin.tar\ngmake -C sgml clean\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nrm -f HTML.manifest *.html\nrm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\nrm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\nrm -f HTML.index bookindex.sgml setindex.sgml\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ngmake -C sgml developer.html\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nperl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N\nperl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -x -o setindex.sgml -N\njade -D . 
-D ./ref -D ./../graphics -c /usr/local/share/sgml/docbook/dsssl/modular/catalog -d stylesheet.dsl -i output-html -t sgml book-decl.sgml developer.sgml\nln -sf developer.html index.html\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ncd sgml && tar -cf ../developer.tar --exclude=Makefile --exclude='*.sgml' --exclude=ref *.html\ngzip -f developer.tar\ngmake -C sgml clean\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nrm -f HTML.manifest *.html\nrm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\nrm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\nrm -f HTML.index bookindex.sgml setindex.sgml\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ngmake -C sgml reference.html\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nperl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N\nperl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -x -o setindex.sgml -N\njade -D . 
-D ./ref -D ./../graphics -c /usr/local/share/sgml/docbook/dsssl/modular/catalog -d stylesheet.dsl -i output-html -t sgml book-decl.sgml reference.sgml\nln -sf reference.html index.html\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ncd sgml && tar -cf ../reference.tar --exclude=Makefile --exclude='*.sgml' --exclude=ref *.html\ngzip -f reference.tar\ngmake -C sgml clean\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nrm -f HTML.manifest *.html\nrm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\nrm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\nrm -f HTML.index bookindex.sgml setindex.sgml\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ngmake -C sgml programmer.html\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nperl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N\nperl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -x -o setindex.sgml -N\njade -D . -D ./ref -D ./../graphics -c /usr/local/share/sgml/docbook/dsssl/modular/catalog -d stylesheet.dsl -i output-html -t sgml book-decl.sgml programmer.sgml\njade:plsql.sgml:1011:13:E: end tag for element \"PARA\" which is not open\ngmake[1]: *** [programmer.html] Error 1\ngmake[1]: *** Deleting file `programmer.html'\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ngmake: *** [programmer.tar] Error 2\n$ \n\nCan one of you SGML masters fix?\n\nThanks!\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 28 May 2001 10:12:00 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "doc markup bug...."
},
{
"msg_contents": "\nPeter E has fixed it in CVS. I think I introduced it when adding docs\nfor ELSEIF.\n\nI now have full SGML tools installed, so I will be able to test any of\nmy future changes.\n\n> Tried to rebuild my docs using the CVS tip...\n> \n> $ gmake\n> gmake -C sgml clean\n> gmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> rm -f HTML.manifest *.html\n> rm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\n> rm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\n> rm -f HTML.index bookindex.sgml setindex.sgml\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> gmake -C sgml admin.html\n> gmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> perl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N\n> perl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -x -o setindex.sgml -N\n> jade -D . -D ./ref -D ./../graphics -c /usr/local/share/sgml/docbook/dsssl/modular/catalog -d stylesheet.dsl -i output-html -t sgml book-decl.sgml admin.sgml\n> ln -sf admin.html index.html\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> cd sgml && tar -cf ../admin.tar --exclude=Makefile --exclude='*.sgml' --exclude=ref *.html\n> gzip -f admin.tar\n> gmake -C sgml clean\n> gmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> rm -f HTML.manifest *.html\n> rm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\n> rm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\n> rm -f HTML.index bookindex.sgml setindex.sgml\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> gmake -C sgml developer.html\n> gmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> perl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N\n> perl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -x -o setindex.sgml 
-N\n> jade -D . -D ./ref -D ./../graphics -c /usr/local/share/sgml/docbook/dsssl/modular/catalog -d stylesheet.dsl -i output-html -t sgml book-decl.sgml developer.sgml\n> ln -sf developer.html index.html\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> cd sgml && tar -cf ../developer.tar --exclude=Makefile --exclude='*.sgml' --exclude=ref *.html\n> gzip -f developer.tar\n> gmake -C sgml clean\n> gmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> rm -f HTML.manifest *.html\n> rm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\n> rm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\n> rm -f HTML.index bookindex.sgml setindex.sgml\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> gmake -C sgml reference.html\n> gmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> perl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N\n> perl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -x -o setindex.sgml -N\n> jade -D . 
-D ./ref -D ./../graphics -c /usr/local/share/sgml/docbook/dsssl/modular/catalog -d stylesheet.dsl -i output-html -t sgml book-decl.sgml reference.sgml\n> ln -sf reference.html index.html\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> cd sgml && tar -cf ../reference.tar --exclude=Makefile --exclude='*.sgml' --exclude=ref *.html\n> gzip -f reference.tar\n> gmake -C sgml clean\n> gmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> rm -f HTML.manifest *.html\n> rm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\n> rm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\n> rm -f HTML.index bookindex.sgml setindex.sgml\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> gmake -C sgml programmer.html\n> gmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> perl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N\n> perl /usr/local/share/sgml/docbook/dsssl/modular/bin/collateindex.pl -f -g -t 'Index' -x -o setindex.sgml -N\n> jade -D . 
-D ./ref -D ./../graphics -c /usr/local/share/sgml/docbook/dsssl/modular/catalog -d stylesheet.dsl -i output-html -t sgml book-decl.sgml programmer.sgml\n> jade:plsql.sgml:1011:13:E: end tag for element \"PARA\" which is not open\n> gmake[1]: *** [programmer.html] Error 1\n> gmake[1]: *** Deleting file `programmer.html'\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> gmake: *** [programmer.tar] Error 2\n> $ \n> \n> Can one of you SGML masters fix?\n> \n> Thanks!\n> \n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 (voice) Internet: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 28 May 2001 14:21:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc markup bug...."
}
] |
[
{
"msg_contents": "IBM is trying to find the answer to this but I thought I would throw\nthis out here to see if anyone can help me. I am compiling a user\ndefined type on AIX and it fails when I try to use it. The type is\nchkpass and it is in the contrib directory. It fails with a core dump\nat line 88 in chkpass.c. The line reads as follows.\n\n result = (chkpass *) palloc(sizeof(chkpass));\n\nThe top of the backtrace looks like this.\n\n#0 0x0 in ?? () from (unknown load module)\n#1 0xd1087a60 in chkpass_in (fcinfo=0x0) at chkpass.c:88\n#2 0x10045cf4 in or_clause (clause=0x0) at clauses.c:211\n#3 0x10075d68 in int82ge (fcinfo=0x1015cfc8) at int8.c:343\n#4 0x1005909c in _readArrayRef () at readfuncs.c:924\n#5 0x10059b68 in _readSeqScan () at readfuncs.c:600\n\nIt looks like the dynamically loaded object (chkpass.so) can't determine\nthe address of palloc() from the parent. I assume I need a flag for the\ncompile either on the main build to export the addresses or on the build\nof chkpass to tell it where to look up the addresses. Anyone been\nthrough this that might be able to shed some light?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 28 May 2001 11:56:17 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "User functions and AIX"
},
{
"msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> The top of the backtrace looks like this.\n\n> #0 0x0 in ?? () from (unknown load module)\n> #1 0xd1087a60 in chkpass_in (fcinfo=0x0) at chkpass.c:88\n> #2 0x10045cf4 in or_clause (clause=0x0) at clauses.c:211\n> #3 0x10075d68 in int82ge (fcinfo=0x1015cfc8) at int8.c:343\n> #4 0x1005909c in _readArrayRef () at readfuncs.c:924\n> #5 0x10059b68 in _readSeqScan () at readfuncs.c:600\n\nI don't believe a word of that backtrace, and neither should you.\nThe alleged call arcs at levels below #1 do not exist in the code.\nErgo, I doubt the top two levels can be trusted either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2001 13:21:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: User functions and AIX "
},
{
"msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> > The top of the backtrace looks like this.\n> \n> > #0 0x0 in ?? () from (unknown load module)\n> > #1 0xd1087a60 in chkpass_in (fcinfo=0x0) at chkpass.c:88\n> > #2 0x10045cf4 in or_clause (clause=0x0) at clauses.c:211\n> > #3 0x10075d68 in int82ge (fcinfo=0x1015cfc8) at int8.c:343\n> > #4 0x1005909c in _readArrayRef () at readfuncs.c:924\n> > #5 0x10059b68 in _readSeqScan () at readfuncs.c:600\n> \n> I don't believe a word of that backtrace, and neither should you.\n> The alleged call arcs at levels below #1 do not exist in the code.\n> Ergo, I doubt the top two levels can be trusted either.\n\nCan you clarify? I see or_clause takes a clause arg and I assumed that\nthe fcinfo is hidden in the macro. I don't understand how the arg for\nchkpass_in can be NULL. I'm also not sure why these functions are involved\nin reading the chkpass type.\n\nHmm. I just rebooted and reran the test (SELECT 'hello'::chkpass) and\nit gave me a different stacktrace. It looks like this.\n\n#0 0x0 in ?? 
() from (unknown load module)\n#1 0xd1085a60 in chkpass_in (fcinfo=0x0) at chkpass.c:88\n#2 0x1004b874 in OidFunctionCall3 (functionId=269952520, arg1=269952532, \n arg2=269952540, arg3=269952548) at fmgr.c:1136\n#3 0x1007f350 in stringTypeDatum (tp=0x10172694, string=0x101726a0 \"pendant\", \n atttypmod=269952680) at parse_type.c:181\n#4 0x10060630 in parser_typecast_constant (expr=0x10172794, \n typename=0x101727a0) at parse_expr.c:876\n#5 0x10061188 in transformExpr (pstate=0x10172910, expr=0x10172920, \n precedence=269953332) at parse_expr.c:118\n#6 0x10076f28 in transformTargetEntry (pstate=0x258, node=0x5c, \n expr=0x2ff1df70, colname=0x101729f4 \"inner\", resjunk=16 '\\020')\n at parse_target.c:56\n#7 0x10077198 in transformTargetList (pstate=0x10172ab8, \n targetlist=0x10172ac0) at parse_target.c:158\n#8 0x10093c10 in transformSelectStmt (pstate=0x10172b80, stmt=0x10172b88)\n at analyze.c:1835\n#9 0x1009497c in transformStmt (pstate=0x20000890, parseTree=0x2001f43c)\n at analyze.c:226\n#10 0x10094ca4 in parse_analyze (parseTree=0x100195f8, \n parentParseState=0x200008a4) at analyze.c:86\n\nI still can't follow the logic through the code. And chkpass_in is still\nbeing called with a null pointer according to this.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 28 May 2001 15:06:07 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: User functions and AIX"
},
{
"msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> I'm also not sure why these functions are involved\n> in reading the chkpass type.\n\nPrecisely my point: they're not. That backtrace is false data.\n\n> Hmm. I just rebooted and reran the test (SELECT 'hello'::chkpass) and\n> it gave me a different stacktrace. It looks like this.\n\n> #0 0x0 in ?? () from (unknown load module)\n> #1 0xd1085a60 in chkpass_in (fcinfo=0x0) at chkpass.c:88\n> #2 0x1004b874 in OidFunctionCall3 (functionId=269952520, arg1=269952532, \n> arg2=269952540, arg3=269952548) at fmgr.c:1136\n> #3 0x1007f350 in stringTypeDatum (tp=0x10172694, string=0x101726a0 \"pendant\", \n> atttypmod=269952680) at parse_type.c:181\n> #4 0x10060630 in parser_typecast_constant (expr=0x10172794, \n> typename=0x101727a0) at parse_expr.c:876\n\nThis one I believe to the extent of the series of function calls, but\nit's still giving you wrong info about the passed parameters, which\nis pretty common if you compiled at -O2 or higher. Try recompiling with\n\"-O0 -g\" if you need trustworthy parameter info from the backtrace.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2001 15:15:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: User functions and AIX "
},
{
"msg_contents": "Thus spake Tom Lane\n> darcy@druid.net (D'Arcy J.M. Cain) writes:\n> > I'm also not sure why these functions are involved\n> > in reading the chkpass type.\n> \n> Precisely my point: they're not. That backtrace is false data.\n> \n> > Hmm. I just rebooted and reran the test (SELECT 'hello'::chkpass) and\n> > it gave me a different stacktrace. It looks like this.\n> \n> > #0 0x0 in ?? () from (unknown load module)\n> > #1 0xd1085a60 in chkpass_in (fcinfo=0x0) at chkpass.c:88\n> > #2 0x1004b874 in OidFunctionCall3 (functionId=269952520, arg1=269952532, \n> > arg2=269952540, arg3=269952548) at fmgr.c:1136\n> > #3 0x1007f350 in stringTypeDatum (tp=0x10172694, string=0x101726a0 \"pendant\", \n> > atttypmod=269952680) at parse_type.c:181\n> > #4 0x10060630 in parser_typecast_constant (expr=0x10172794, \n> > typename=0x101727a0) at parse_expr.c:876\n> \n> This one I believe to the extent of the series of function calls, but\n> it's still giving you wrong info about the passed parameters, which\n> is pretty common if you compiled at -O2 or higher. Try recompiling with\n> \"-O0 -g\" if you need trustworthy parameter info from the backtrace.\n\nIs that an AIX thing? I generally get reasonable traces on NetBSD.\n\nAnyway, I took your advice and now I get this.\n\n#0 0x0 in ?? 
() from (unknown load module)\n#1 0xd1085aac in chkpass_in (fcinfo=0x2ff1dcb8) at chkpass.c:88\n#2 0x1004b874 in OidFunctionCall3 (functionId=269952520, arg1=269952532, \n arg2=269952540, arg3=269952548) at fmgr.c:1136\n#3 0x1007f350 in stringTypeDatum (tp=0x10172694, string=0x101726a0 \"pendant\", \n atttypmod=269952680) at parse_type.c:181\n#4 0x10060630 in parser_typecast_constant (expr=0x10172794, \n typename=0x101727a0) at parse_expr.c:876\n#5 0x10061188 in transformExpr (pstate=0x10172910, expr=0x10172920, \n precedence=269953332) at parse_expr.c:118\n#6 0x10076f28 in transformTargetEntry (pstate=0x258, node=0x5c, \n expr=0x2ff1df70, colname=0x101729f4 \"inner\", resjunk=16 '\\020')\n at parse_target.c:56\n#7 0x10077198 in transformTargetList (pstate=0x10172ab8, \n targetlist=0x10172ac0) at parse_target.c:158\n#8 0x10093c10 in transformSelectStmt (pstate=0x10172b80, stmt=0x10172b88)\n at analyze.c:1835\n#9 0x1009497c in transformStmt (pstate=0x20000890, parseTree=0x2001f43c)\n at analyze.c:226\n#10 0x10094ca4 in parse_analyze (parseTree=0x100195f8, \n parentParseState=0x200008a4) at analyze.c:86\n\nLooking better. It still seems to be the same error I saw to start with\nthough. It seems that the loaded dynamic object can't find the address\nfor palloc() and so jumps to 0. I'm sure that it is an AIX thing but\neven IBM can't seem to find the problem.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 28 May 2001 17:04:52 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": true,
"msg_subject": "Re: User functions and AIX"
}
] |
[
{
"msg_contents": "I have been chasing Domingo Alvarez Duarte's report of funny behavior\nwhen assigning an empty string to a \"char\" variable in plpgsql. What\nit comes down to is that text-to-char conversion does not behave very\nwell for zero-length input. charin() returns a null character, leading\nto the following bizarreness:\n\nregression=# select 'z' || (''::\"char\") || 'q';\n ?column?\n----------\n z\n(1 row)\n\nregression=# select length('z' || (''::\"char\") || 'q');\n length\n--------\n 3\n(1 row)\n\nThe concatenation result is 'z\\0q', which doesn't print nicely :-(.\n\ntext_char() produces a completely random result, eg:\n\nregression=# select ''::text::\"char\";\n ?column?\n----------\n ~\n(1 row)\n\nand could even coredump in the worst case, since it tries to fetch the\nfirst character of the text input no matter whether there is one or not.\n\nI propose that both of these operations should return a space character\nfor an empty input string. This is by analogy to space-padding as you'd\nget with char(1). Any objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2001 12:55:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "charin(), text_char() should return something else for empty input"
},
{
"msg_contents": "I wrote:\n> I propose that both of these operations should return a space character\n> for an empty input string. This is by analogy to space-padding as you'd\n> get with char(1). Any objections?\n\nAn alternative approach is to make charin and text_char map empty\nstrings to the null character (\\0), and conversely make charout and\nchar_text map the null character to empty strings. charout already\nacts that way, in effect, since it has to produce a null-terminated\nC string. This way would have the advantage that there would still\nbe a reversible dump and reload representation for a \"char\" field\ncontaining '\\0', whereas space-padding would cause such a field to\nbecome ' ' after reload. But it's a little strange if you think that\n\"char\" ought to behave the same as char(1).\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2001 14:37:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: charin(),\n text_char() should return something else for empty input"
},
{
"msg_contents": "At 02:37 PM 5/28/01 -0400, Tom Lane wrote:\n>I wrote:\n>> I propose that both of these operations should return a space character\n>> for an empty input string. This is by analogy to space-padding as you'd\n>> get with char(1). Any objections?\n>\n>An alternative approach is to make charin and text_char map empty\n>strings to the null character (\\0), and conversely make charout and\n>char_text map the null character to empty strings. charout already\n>acts that way, in effect, since it has to produce a null-terminated\n>C string. This way would have the advantage that there would still\n>be a reversible dump and reload representation for a \"char\" field\n>containing '\\0', whereas space-padding would cause such a field to\n>become ' ' after reload. But it's a little strange if you think that\n>\"char\" ought to behave the same as char(1).\n>\n>Comments?\n\nI personally wouldn't expect \"char\" to behave exactly as \"char(1)\",\nbecause I understand it to be a one-byte variable which holds a single\n(not zero or one) character.\n\nMapping '' to ' ' doesn't make a lot of sense to me. It isn't what\nI'd expect.\n\nI think the behavior you describe in this note is better.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Mon, 28 May 2001 12:25:09 -0700",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: charin(), text_char() should return\n\tsomething else for empty input"
},
{
"msg_contents": "Don Baccus <dhogaza@pacifier.com> writes:\n> Mapping '' to ' ' doesn't make a lot of sense to me. It isn't what\n> I'd expect.\n> I think the behavior you describe in this note is better.\n\nI'm coming to that conclusion as well. If you look closely, both\ncharin() and charout() act that way already; so the second proposal\nboils down to making the text <=> char conversion functions act in\naccordance with the way that char's I/O conversions already act.\nThat seems a less drastic change than altering both I/O and conversion\nbehavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2001 15:59:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: charin(),\n\ttext_char() should return something else for empty input"
},
{
"msg_contents": "On Mon, May 28, 2001 at 02:37:32PM -0400, Tom Lane wrote:\n> I wrote:\n> > I propose that both of these operations should return a space character\n> > for an empty input string. This is by analogy to space-padding as you'd\n> > get with char(1). Any objections?\n> \n> An alternative approach is to make charin and text_char map empty\n> strings to the null character (\\0), and conversely make charout and\n> char_text map the null character to empty strings. charout already\n> acts that way, in effect, since it has to produce a null-terminated\n> C string. This way would have the advantage that there would still\n> be a reversible dump and reload representation for a \"char\" field\n> containing '\\0', whereas space-padding would cause such a field to\n> become ' ' after reload. But it's a little strange if you think that\n> \"char\" ought to behave the same as char(1).\n\nDoes the standard require any particular behavior in with NUL \ncharacters? I'd like to see PG move toward treating them as ordinary \ncontrol characters. I realize that at best it will take a long time \nto get there. C is irretrievably mired in the \"NUL is a terminator\"\nswamp, but SQL isn't C.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Tue, 29 May 2001 13:03:35 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: charin(),\n\ttext_char() should return something else for empty input"
},
{
"msg_contents": "Nathan Myers writes:\n\n> Does the standard require any particular behavior in with NUL\n> characters?\n\nThe standard describes the behaviour of the character types in terms of\ncharacter sets. This decouples glyphs, encoding, and storage. So\ntheoretically you could (AFAICT) define a character set that encodes some\nmeaningful character with code zero, but the implementation is not\nrequired to handle this zero byte internally, it could catch it during\ninput and represent it with an escape code.\n\nThe standard also defines some possible \"built-in\" character sets, such as\nLATIN1 and UTF16. Most of these do not naturally contain a character that\nis encoded with the zero byte. In the case of the ISO8BIT/ASCII_FULL\ncharset, the standard explicitly says that the zero byte is not contained\nin the character set.\n\nIn general, I don't see a point in accepting a zero byte in character\nstrings. If you want to store binary data there are binary data types (or\neffort could be invested in them).\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 29 May 2001 23:20:47 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: charin(), text_char() should return something else\n\tfor empty input"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> In general, I don't see a point in accepting a zero byte in character\n> strings. If you want to store binary data there are binary data types (or\n> effort could be invested in them).\n\nIf we were starting in a green field then I'd think it worthwhile to\nmaintain null-byte-cleanness for the textual datatypes. At this point,\nthough, the amount of pain involved seems to vastly outweigh the value.\nThe major problem is that I/O conventions not based on null-terminated\nstrings would break all existing user-defined datatypes. (Changing our\nown code is one thing, breaking users' code is something else.) There\nare minor-by-comparison problems like not being able to use strcoll()\nfor locale-sensitive comparisons anymore...\n\nI agree with Peter that spending some extra effort on bytea and/or\nsimilar types is probably a more profitable answer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 May 2001 18:55:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: charin(),\n\ttext_char() should return something else for empty input"
}
] |
[
{
"msg_contents": "Full implementation of R-Tree using GiST is available from\nhttp://www.sai.msu.su/~megera/postgres/gist/\n\nCHANGES:\n Mon May 28 19:42:14 MSD 2001\n\n 1. Full implementation of R-tree using GiST - gist_box_ops,gist_poly_ops\n 2. gist_poly_ops is lossy\n 3. NULLs support\n 4. works with multi-key GiST\n\nNOTICE:\n This version will work only with postgresql version 7.1 and above\n because of changes in the interface of function calling.\n\n\nTom, the implementation of gist_poly_ops has a workaround for the discussed problem -\n(we store in the first field the length of the key in bytes). As soon as we find\na solution we'll change this.\n\n\n From my message:\ncompress fully supports fixed-length and varlena types. The problem is\nindex_formtuple - types of key and column could be different\n(example - polygon, where column has varlena type but key is fixed-length)\nAs a workaround one could use the same type for key and column.\n1st integer field in structure BOX3D should be length of this structure\nin bytes.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n",
"msg_date": "Mon, 28 May 2001 20:01:59 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "R-Tree implementation using GiST (compatible with multi-key GiST)"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Full implementation of R-Tree using GiST is available from\n> http://www.sai.msu.su/~megera/postgres/gist/\n\nCommitted as a contrib module.\n\nAt some point we'll probably want to move this into the main tree,\nbut I left it as a separate package for now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 14:36:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: R-Tree implementation using GiST (compatible with multi-key GiST)"
}
] |
[
{
"msg_contents": "Anybody know where to get data and scripts to make a regression\ntest for R-Tree? We just finished a full implementation of R-Tree\nusing GiST with multi-key index support and would like to run\na regression test.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 28 May 2001 20:07:20 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Regression test for R-tree"
}
] |
[
{
"msg_contents": "\nFirst, this is still a v7.1 system ... its totally possible that this is\nlong fixed, and I'm way overdue to get it to v7.1.2, which I'll gladly\naccept as a response ...\n\nThat said ... seems like a very painful way to arrive at 1 row ... :)\n\ntable structure:\n\nglobalmatch=# \\d locations\n Table \"locations\"\n Attribute | Type | Modifier\n-----------+---------+--------------------------------------------------------\n gid | integer | not null default nextval('locationstmp_gid_seq'::text)\n city | text |\n state | text |\n country | text |\n zip | text |\n location | point |\nIndices: locations_zip,\n locationstmp_gid_key\n\nglobalmatch=# \\d locations_zip\nIndex \"locations_zip\"\n Attribute | Type\n-----------+------\n zip | text\nbtree\n\nglobalmatch=# EXPLAIN SELECT count(location) from locations WHERE zip = '80012';\nNOTICE: QUERY PLAN:\n\nAggregate (cost=2950.18..2950.18 rows=1 width=16)\n -> Seq Scan on locations (cost=0.00..2939.64 rows=4217 width=16)\n\nEXPLAIN\n\nglobalmatch=# SELECT count(location) from locations WHERE zip = '80012';\n count\n-------\n 1\n(1 row)\n\nglobalmatch=# SELECT count(location) from locations;\n count\n--------\n 123571\n(1 row)\n\n\n\n",
"msg_date": "Mon, 28 May 2001 23:39:52 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "*really* simple select doesn't use indices ..."
},
{
"msg_contents": "\nOkay, just bit the bullet, upgraded to v7.1.2, and the problem still\npersists:\n\nglobalmatch=# vacuum verbose analyze locations;\nNOTICE: --Relation locations--\nNOTICE: Pages 1395: Changed 0, reaped 0, Empty 0, New 0; Tup 123571: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 76, MaxLen 124; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.11s/0.00u sec.\nNOTICE: Index locationstmp_gid_key: Pages 272; Tuples 123571. CPU 0.01s/0.15u sec.\nNOTICE: Index locations_zip: Pages 320; Tuples 123571. CPU 0.02s/0.14u sec.\nNOTICE: Index locations_country: Pages 342; Tuples 123571. CPU 0.03s/0.13u sec.\nNOTICE: --Relation pg_toast_9373225--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_toast_9373225_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u sec.\nNOTICE: Analyzing...\nVACUUM\nglobalmatch=# explain SELECT location from locations WHERE zip = '80012';\nNOTICE: QUERY PLAN:\n\nSeq Scan on locations (cost=0.00..2939.64 rows=4217 width=16)\n\nEXPLAIN\nglobalmatch=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.1.2 on i386-unknown-freebsd4.3, compiled by GCC 2.95.3\n(1 row)\n\n\nOn Mon, 28 May 2001, Marc G. Fournier wrote:\n\n>\n> First, this is still a v7.1 system ... its totally possible that this is\n> long fixed, and I'm way overdue to get it to v7.1.2, which I'll gladly\n> accept as a response ...\n>\n> That said ... seems like a very painful way to arrive at 1 row ... :)\n>\n> table structure:\n>\n> globalmatch=# \\d locations\n> Table \"locations\"\n> Attribute | Type | Modifier\n> -----------+---------+--------------------------------------------------------\n> gid | integer | not null default nextval('locationstmp_gid_seq'::text)\n> city | text |\n> state | text |\n> country | text |\n> zip | text |\n> location | point |\n> Indices: locations_zip,\n> locationstmp_gid_key\n>\n> globalmatch=# \\d locations_zip\n> Index \"locations_zip\"\n> Attribute | Type\n> -----------+------\n> zip | text\n> btree\n>\n> globalmatch=# EXPLAIN SELECT count(location) from locations WHERE zip = '80012';\n> NOTICE: QUERY PLAN:\n>\n> Aggregate (cost=2950.18..2950.18 rows=1 width=16)\n> -> Seq Scan on locations (cost=0.00..2939.64 rows=4217 width=16)\n>\n> EXPLAIN\n>\n> globalmatch=# SELECT count(location) from locations WHERE zip = '80012';\n> count\n> -------\n> 1\n> (1 row)\n>\n> globalmatch=# SELECT count(location) from locations;\n> count\n> --------\n> 123571\n> (1 row)\n>\n>\n>\n>\n\nMarc G. Fournier scrappy@hub.org\nSystems Administrator @ hub.org\nscrappy@{postgresql|isc}.org ICQ#7615664\n\n",
"msg_date": "Tue, 29 May 2001 08:44:34 -0400 (EDT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "appendum: Re: *really* simple select doesn't use indices ..."
},
{
"msg_contents": "Marc,\n\nThe column 'zip' is of type text. As such, indices will not be used except\nin the case when the where clause is WHERE zip ~ '^<text>' for btree\nindices.\n\nGavin\n\nOn Tue, 29 May 2001, Marc G. Fournier wrote:\n\n> \n> Okay, just bit the bullet, upgraded to v7.1.2, and the problem still\n> persists:\n> \n> globalmatch=# vacuum verbose analyze locations;\n> NOTICE: --Relation locations--\n> NOTICE: Pages 1395: Changed 0, reaped 0, Empty 0, New 0; Tup 123571: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 76, MaxLen 124; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.11s/0.00u sec.\n> NOTICE: Index locationstmp_gid_key: Pages 272; Tuples 123571. CPU 0.01s/0.15u sec.\n> NOTICE: Index locations_zip: Pages 320; Tuples 123571. CPU 0.02s/0.14u sec.\n> NOTICE: Index locations_country: Pages 342; Tuples 123571. CPU 0.03s/0.13u sec.\n> NOTICE: --Relation pg_toast_9373225--\n> NOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\n> NOTICE: Index pg_toast_9373225_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u sec.\n> NOTICE: Analyzing...\n> VACUUM\n> globalmatch=# explain SELECT location from locations WHERE zip = '80012';\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on locations (cost=0.00..2939.64 rows=4217 width=16)\n> \n> EXPLAIN\n> globalmatch=# select version();\n> version\n> ---------------------------------------------------------------------\n> PostgreSQL 7.1.2 on i386-unknown-freebsd4.3, compiled by GCC 2.95.3\n> (1 row)\n> \n> \n> On Mon, 28 May 2001, Marc G. Fournier wrote:\n> \n> >\n> > First, this is still a v7.1 system ... its totally possible that this is\n> > long fixed, and I'm way overdue to get it to v7.1.2, which I'll gladly\n> > accept as a response ...\n> >\n> > That said ... seems like a very painful way to arrive at 1 row ... :)\n> >\n> > table structure:\n> >\n> > globalmatch=# \\d locations\n> > Table \"locations\"\n> > Attribute | Type | Modifier\n> > -----------+---------+--------------------------------------------------------\n> > gid | integer | not null default nextval('locationstmp_gid_seq'::text)\n> > city | text |\n> > state | text |\n> > country | text |\n> > zip | text |\n> > location | point |\n> > Indices: locations_zip,\n> > locationstmp_gid_key\n> >\n> > globalmatch=# \\d locations_zip\n> > Index \"locations_zip\"\n> > Attribute | Type\n> > -----------+------\n> > zip | text\n> > btree\n> >\n> > globalmatch=# EXPLAIN SELECT count(location) from locations WHERE zip = '80012';\n> > NOTICE: QUERY PLAN:\n> >\n> > Aggregate (cost=2950.18..2950.18 rows=1 width=16)\n> > -> Seq Scan on locations (cost=0.00..2939.64 rows=4217 width=16)\n> >\n> > EXPLAIN\n> >\n> > globalmatch=# SELECT count(location) from locations WHERE zip = '80012';\n> > count\n> > -------\n> > 1\n> > (1 row)\n> >\n> > globalmatch=# SELECT count(location) from locations;\n> > count\n> > --------\n> > 123571\n> > (1 row)\n> >\n> >\n> >\n> >\n> \n> Marc G. Fournier scrappy@hub.org\n> Systems Administrator @ hub.org\n> scrappy@{postgresql|isc}.org ICQ#7615664\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n",
"msg_date": "Tue, 29 May 2001 23:06:29 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: appendum: Re: *really* simple select doesn't use\n indices ..."
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> The column 'zip' is of type text. As such, indices will not be used except\n> in the case when the where clause is WHERE zip ~ '^<text>' for btree\n> indices.\n\nUh ... nonsense.\n\n> On Tue, 29 May 2001, Marc G. Fournier wrote:\n>> globalmatch=# vacuum verbose analyze locations;\n>> NOTICE: --Relation locations--\n>> NOTICE: Pages 1395: Changed 0, reaped 0, Empty 0, New 0; Tup 123571: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 76, MaxLen 124; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.11s/0.00u sec.\n\n>> globalmatch=# explain SELECT location from locations WHERE zip = '80012';\n>> NOTICE: QUERY PLAN:\n>> \n>> Seq Scan on locations (cost=0.00..2939.64 rows=4217 width=16)\n\nOkay, so it thinks that \"zip = '80012'\" will match 4217 out of 123571\nrows, which is more than enough to drive it to a sequential scan\n(with an average of more than three matched rows on every page of the\nrelation, there'd be no I/O savings at all from consulting the index).\n\nSince the real number of matches is only 1, this estimate is obviously\nway off. In 7.1 the estimate is being driven by the frequency of the\nmost common value in the column --- what is the most common value?\nIf you're lucky, the most common value is a dummy (empty string, maybe)\nthat you could replace by NULL with a few simple changes in application\nlogic. 7.1 is smart enough to distinguish NULL from real data values\nin its estimates. If you're not lucky, there really are a few values\nthat are far more common than average, in which case you're stuck unless\nyou want to run development sources. Current sources should do a lot\nbetter on that kind of data distribution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 May 2001 09:54:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: appendum: Re: *really* simple select doesn't use indices ... "
},
{
"msg_contents": "On Tue, 29 May 2001, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > The column 'zip' is of type text. As such, indices will not be used except\n> > in the case when the where clause is WHERE zip ~ '^<text>' for btree\n> > indices.\n>\n> Uh ... nonsense.\n\nOh good, I was worried there for a sec ... :)\n\n> > On Tue, 29 May 2001, Marc G. Fournier wrote:\n> >> globalmatch=# vacuum verbose analyze locations;\n> >> NOTICE: --Relation locations--\n> >> NOTICE: Pages 1395: Changed 0, reaped 0, Empty 0, New 0; Tup 123571: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 76, MaxLen 124; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.11s/0.00u sec.\n>\n> >> globalmatch=# explain SELECT location from locations WHERE zip = '80012';\n> >> NOTICE: QUERY PLAN:\n> >>\n> >> Seq Scan on locations (cost=0.00..2939.64 rows=4217 width=16)\n>\n> Okay, so it thinks that \"zip = '80012'\" will match 4217 out of 123571\n> rows, which is more than enough to drive it to a sequential scan\n> (with an average of more than three matched rows on every page of the\n> relation, there'd be no I/O savings at all from consulting the index).\n>\n> Since the real number of matches is only 1, this estimate is obviously\n> way off. In 7.1 the estimate is being driven by the frequency of the\n> most common value in the column --- what is the most common value? If\n> you're lucky, the most common value is a dummy (empty string, maybe)\n> that you could replace by NULL with a few simple changes in\n> application logic. 7.1 is smart enough to distinguish NULL from real\n> data values in its estimates. If you're not lucky, there really are a\n> few values that are far more common than average, in which case you're\n> stuck unless you want to run development sources. Current sources\n> should do a lot better on that kind of data distribution.\n\nHit it right on the mark:\n\n zip | cnt\n-------+-------\n | 81403\n 00210 | 1\n 00211 | 1\n\nWill look at the code and see what I can do about that NULL issue ...\nthanks :)\n\n",
"msg_date": "Tue, 29 May 2001 11:13:07 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: appendum: Re: *really* simple select doesn't use\n indices ..."
},
{
"msg_contents": "\nThis is one of my top two problems with Postgres, the seemingly braindead index\nselection mechanism.\n\nFirst, of course, try \"VACUUM ANALYZE\".\nThen if that fails, try\n\nset ENABLE_SEQSCAN = off;\n\nThen try your query.\n\n\n\"Marc G. Fournier\" wrote:\n\n> First, this is still a v7.1 system ... its totally possible that this is\n> long fixed, and I'm way overdue to get it to v7.1.2, which I'll gladly\n> accept as a response ...\n>\n> That said ... seems like a very painful way to arrive at 1 row ... :)\n>\n> table structure:\n>\n> globalmatch=# \\d locations\n> Table \"locations\"\n> Attribute | Type | Modifier\n> -----------+---------+--------------------------------------------------------\n> gid | integer | not null default nextval('locationstmp_gid_seq'::text)\n> city | text |\n> state | text |\n> country | text |\n> zip | text |\n> location | point |\n> Indices: locations_zip,\n> locationstmp_gid_key\n>\n> globalmatch=# \\d locations_zip\n> Index \"locations_zip\"\n> Attribute | Type\n> -----------+------\n> zip | text\n> btree\n>\n> globalmatch=# EXPLAIN SELECT count(location) from locations WHERE zip = '80012';\n> NOTICE: QUERY PLAN:\n>\n> Aggregate (cost=2950.18..2950.18 rows=1 width=16)\n> -> Seq Scan on locations (cost=0.00..2939.64 rows=4217 width=16)\n>\n> EXPLAIN\n>\n> globalmatch=# SELECT count(location) from locations WHERE zip = '80012';\n> count\n> -------\n> 1\n> (1 row)\n>\n> globalmatch=# SELECT count(location) from locations;\n> count\n> --------\n> 123571\n> (1 row)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n",
"msg_date": "Tue, 29 May 2001 10:45:54 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: *really* simple select doesn't use indices ..."
}
] |
[
{
"msg_contents": "\n> IBM is trying to find the answer to this but I thought I would throw\n> this out here to see if anyone can help me. I am compiling a user\n> defined type on AIX and it fails when I try to use it. The type is\n> chkpass and it is in the contrib directory. It fails with a core dump\n> at line 88 in chkpass.c. The line reads as follows.\n> \n> result = (chkpass *) palloc(sizeof(chkpass));\n> \n> The top of the backtrace looks like this.\n> \n> #0 0x0 in ?? () from (unknown load module)\n> #1 0xd1087a60 in chkpass_in (fcinfo=0x0) at chkpass.c:88\n> #2 0x10045cf4 in or_clause (clause=0x0) at clauses.c:211\n> #3 0x10075d68 in int82ge (fcinfo=0x1015cfc8) at int8.c:343\n> #4 0x1005909c in _readArrayRef () at readfuncs.c:924\n> #5 0x10059b68 in _readSeqScan () at readfuncs.c:600\n> \n> It looks like the dynamically loaded object (chkpass.so) \n> can't determine\n> the address of palloc() from the parent. I assume I need a \n> flag for the\n> compile either on the main build to export the addresses or \n> on the build\n> of chkpass to tell it where to look up the addresses. Anyone been\n> through this that might be able to shed some light?\n\nTell me your link line, OS and compiler version. \nAnd have you forgotten to include -bI:postgres.imp ?\nIn general it is imho a good idea to copy the appropriate\ncompile and link flags from the regression test, that compiles\nshared libs in .../contrib.\n\nAndreas\n",
"msg_date": "Tue, 29 May 2001 09:28:26 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: User functions and AIX"
},
{
"msg_contents": "Thus spake Zeugswetter Andreas SB\n> > IBM is trying to find the answer to this but I thought I would throw\n...\n\n> Tell me your link line, OS and compiler version. \n> And have you forgotten to include -bI:postgres.imp ?\n\nBingo! I can't believe that IBM has been wrestling with this for a week.\nPart of the reason we are thinking of going with IBM is for the support.\n\nHere is my Makefile now. I'm not sure about that -lc there as I get duplicate\nsymbol warnings but it appears to work fine.\n\n#\n# Local PostgreSQL types\n# Written by D'Arcy J.M. Cain (darcy@druid.net)\n#\n# $Id: Makefile,v 1.1 2000/06/23 17:03:40 root Exp $\n\nPGDIR = /usr/local/pgsql\nPGINCDIR = /home/darcy/postgresql-7.1/src/include\nPGLIBDIR = /usr/local/pgsql/lib\nCFLAGS = -g -O0 -pipe -ansi -Wall -Wshadow -Wpointer-arith -Wcast-qual \\\n -I ${PGINCDIR} -L ${PGLIBDIR} \\\n -Wwrite-strings -Wmissing-prototypes\nOBJS = chkpass.o\nSH_OBJS = chkpass.so\n\n.SUFFIXES: .so\n\n.o.so:\n ld -G -o $@ $< -L ${PGLIBDIR} -bI:/usr/local/pgsql/lib/postgres.imp \\\n -bexpall -bnoentry -lc\n\n.c.o:\n gcc ${CFLAGS} -c $<\n\nall: ${SH_OBJS}\n\ninstall: all\n cp ${SH_OBJS} ${PGDIR}/modules\n sed \"s+%%PGDIR%%+${PGDIR}+g\" < chkpass.sql > ${PGDIR}/modules/chkpass.sql\n\nclean:\n rm -f ${OBJS} ${SH_OBJS}\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 29 May 2001 08:25:24 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: AW: User functions and AIX"
}
] |
[
{
"msg_contents": "\n> > > > > You mean it is restored in session that is running the transaction ?\n> > > \n> > > Depends on what you mean with restored. It first reads the heap page,\n> > > sees that it needs an older version and thus reads it from the \"rollback segment\".\n> > \n> > So are whole pages stored in rollback segments or just the modified data?\n> \n> This is implementation dependent. Storing whole pages is much easier to do,\n> but obviously it's better to store just modified data.\n\nI am not sure it is necessarily better. There seems to be a tradeoff here.\npros of whole pages:\n\ta possible merge with physical log (for the first modification of a page after checkpoint \n\t\tthere would be no overhead compared to current since it is already written now)\n\tin a clever implementation a page already in the \"rollback segment\" might satisfy the \n\t\tmodification of another row on that page, and thus would not need any additional io.\n\nAndreas\n",
"msg_date": "Tue, 29 May 2001 09:35:01 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: Plans for solving the VACUUM problem"
}
] |
[
{
"msg_contents": "\n> An alternative approach is to make charin and text_char map empty\n> strings to the null character (\\0), and conversely make charout and\n> char_text map the null character to empty strings. charout already\n> acts that way, in effect, since it has to produce a null-terminated\n> This way would have the advantage that there would still\n> be a reversible dump and reload representation for a \"char\" field\n> containing '\\0'\n\nI like this better. IIRC some implementations allow storing a \\0 in char(n) also.\nThen it is 8bit clean and can be used for a 1 byte number. Such values\ncan usually only be inserted and selected with host variables.\n\nAndreas\n",
"msg_date": "Tue, 29 May 2001 10:02:02 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: charin(), text_char() should return something e\n\tlse for empty input"
}
] |
[
{
"msg_contents": "I tried to use the unixdate contrib, and got the following:\n\nattack=# \\i unixdate.sql\npsql:unixdate.sql:21: ERROR: ProcedureCreate: there is no builtin\nfunction named \"-\"\npsql:unixdate.sql:25: ERROR: Function 'abstime_datetime(int4)' does\nnot exist\n Unable to identify a function that satisfies the given\nargument types\n You may need to add explicit typecasts\npsql:unixdate.sql:29: ERROR: ProcedureCreate: there is no builtin\nfunction named \"-\"\npsql:unixdate.sql:33: ERROR: Function 'reltime_timespan(int4)' does\nnot exist\n Unable to identify a function that satisfies the given\nargument types\n You may need to add explicit typecasts\nattack=# \n\n\nAny ideas? (I need SOMETHING that takes a unix timestamp and turns it\nto timestamp. )\n\n\nThanks!\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 29 May 2001 05:11:15 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "/contrib/unixdate: Broke in cvs tip."
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I tried to use the unixdate contrib, and got the following:\n\nI think unixdate is suffering from bit-rot. Most or all of what it\ndoes is now part of the main tree anyway.\n\n> Any ideas? (I need SOMETHING that takes a unix timestamp and turns it\n> to timestamp. )\n\nThere are a number of ways. I tend to rely on the binary equivalence\nbetween int4 and abstime:\n\nregression=# select now()::abstime::int4;\n ?column?\n-----------\n 991145365\n(1 row)\n\nregression=# select 991145365::int4::abstime::timestamp;\n ?column?\n------------------------\n 2001-05-29 10:09:25-04\n(1 row)\n\nbut the more officially supported way to do the former is\ndate_part('epoch', timestamp), and I think there is also an\nApproved Way to do the latter. (Thomas?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 May 2001 10:13:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: /contrib/unixdate: Broke in cvs tip. "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010529 10:07]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > I tried to use the unixdate contrib, and got the following:\n> \n> I think unixdate is suffering from bit-rot. Most or all of what it\n> does is now part of the main tree anyway.\nAha! I couldn't find what I needed...\n> \n> > Any ideas? (I need SOMETHING that takes a unix timestamp and turns it\n> > to timestamp. )\n> \n> There are a number of ways. I tend to rely on the binary equivalence\n> between int4 and abstime:\n> \n> regression=# select now()::abstime::int4;\n> ?column?\n> -----------\n> 991145365\n> (1 row)\n> \n> regression=# select 991145365::int4::abstime::timestamp;\n> ?column?\n> ------------------------\n> 2001-05-29 10:09:25-04\n> (1 row)\nThis ugly one worked for me, but I'd still like to see\na more general way to move from a unix timestamp to datetime without\nthe 3 casts....\n> \n> but the more officially supported way to do the former is\n> date_part('epoch', timestamp), and I think there is also an\n> Approved Way to do the latter. (Thomas?)\nThanks again, Tom!\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 29 May 2001 19:56:59 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: /contrib/unixdate: Broke in cvs tip."
}
] |
[
{
"msg_contents": "\n> > > IBM is trying to find the answer to this but I thought I would throw ...\n> \n> > Tell me your link line, OS and compiler version. \n> > And have you forgotten to include -bI:postgres.imp ?\n> \n> Bingo! I can't believe that IBM has been wrestling with this for a week.\n> Part of the reason we are thinking of going with IBM is for the support.\n\nShared libs are obviously not their strong side :-)\nBasically we are very happy with their RS6000's and AIX though.\n\n> Here is my Makefile now. I'm not sure about that -lc there \n> as I get duplicate symbol warnings but it appears to work fine.\n\nthey don't matter\n\n> CFLAGS = -g -O0 -pipe -ansi -Wall -Wshadow -Wpointer-arith \n\ngcc and not xlc :-) actually xlc produces faster code, but I don't think that makes a \nnoticeable difference.\n\n> .o.so:\n> ld -G -o $@ $< -L ${PGLIBDIR} -bI:/usr/local/pgsql/lib/postgres.imp \\\n> -bexpall -bnoentry -lc\n\nAlways use the compiler for linking instead of ld:\n gcc -Wl,-H512 -Wl,-bM:SRE -o $@ $< -L ${PGLIBDIR} -bI:/usr/local/pgsql/lib/postgres.imp \\\n\t-bexpall -bnoentry\n\nYou are not allowed to leave anything unresolved, thus do not use -G, or you won't notice\nunresolved externals (-G includes -berok which you don't want at all).\n\nAndreas\n",
"msg_date": "Tue, 29 May 2001 14:57:43 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: User functions and AIX"
},
{
"msg_contents": "Thus spake Zeugswetter Andreas SB\n> > Bingo! I can't believe that IBM has been wrestling with this for a week.\n> > Part of the reason we are thinking of going with IBM is for the support.\n> \n> Shared libs are obviously not their strong side :-)\nTell me about it.\n\n> Basically we are very happy with their RS6000's and AIX though.\n\nWith PostgreSQL? See below.\n\n> > Here is my Makefile now. I'm not sure about that -lc there \n> > as I get duplicate symbol warnings but it appears to work fine.\n> \n> they don't matter\n> \n> > CFLAGS = -g -O0 -pipe -ansi -Wall -Wshadow -Wpointer-arith \n> \n> gcc and not xlc :-) actually xlc produces faster code, but I don't think that makes a \n> noticeable difference.\n\nHmm. Should I get rid of gcc and build PostgreSQL with xlc do you think?\nSome people have told me that gcc is actually faster.\n\n> > .o.so:\n> > ld -G -o $@ $< -L ${PGLIBDIR} -bI:/usr/local/pgsql/lib/postgres.imp \\\n> > -bexpall -bnoentry -lc\n> \n> Always use the compiler for linking instead of ld:\n> gcc -Wl,-H512 -Wl,-bM:SRE -o $@ $< -L ${PGLIBDIR} -bI:/usr/local/pgsql/lib/postgres.imp \\\n> \t-bexpall -bnoentry\n\nI'll do that. Thanks.\n\n> You are not allowed to leave anything unresolved, thus do not use -G, or you won't notice\n> unresolved externals (-G includes -berok which you don't want at all).\n\nI wasn't sure if I needed that. I will remove it.\n\n\nOK, so I built it and loaded my database. I tried to load a very big\ntable (383969 rows) and the copy failed because it was too big. I split\nthe input into smaller chunks but when I ran it I got the following error.\n\nERROR: copy: line 1, Memory exhausted in AllocSetAlloc(858864139)\n\nThere is no way that I could have used that much memory in the first row.\nI dropped the table and recreated it and the load worked. Although it\nworks now I still feel a little uneasy.\n\nThanks for your help.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 29 May 2001 14:57:07 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: User functions and AIX"
}
] |
[
{
"msg_contents": "Hi\n\nHas anyoone had similar problems ?\n\nI have problems satting SHMALL on some linux (RedHat 6.2) computers\n\nwhen I try to set them to 128MB (on a 256MB computer),\nit has no effect.\n\n[root@amphora2 /root]# echo 134217728 >/proc/sys/kernel/shmmax\n[root@amphora2 /root]# echo 134217728 >/proc/sys/kernel/shmall\n[root@amphora2 /root]# cat /proc/sys/kernel/shmall\n4194304\n[root@amphora2 /root]# cat /proc/sys/kernel/shmmax \n134217728\n\n------------------\nHannu\n",
"msg_date": "Tue, 29 May 2001 15:28:54 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "problems setting shared memory on linux"
}
] |
[
{
"msg_contents": "Hi,\n\nto continue discussion about pg_index.haskeytype\n(see http://fts.postgresql.org/db/mw/msg.html?mid=117845\n http://fts.postgresql.org/db/mw/msg.html?mid=119796\n)\n\nI'd like to remind that pg_index.haskeytype *was used* up to 7.1\nversion. It's indicate that type of key is different from that of column !\n\nin 7.0.2 it was possible to use following syntax:\n\ncreate table TT (i int);\ncreate index TTx on TT using gist (i:intrange gist_intrange_ops);\n\ni:intrange indicates that type of key is intrange while column 'i' has\ntype int. So, index_formtuple used length of key from intrange.\npg_index.haskeytype was a flag which indicates to index_formtuple to use\ninformation about key from other type. In this example - use intrange\ninstead of int.\n\nIn 7.1 version pg_index.haskeytype is always true, i.e. type of\nkey is always the same as type of column and syntax 'i:intrange'\nbecame unavailable. Notice, that for fixed length columns it's\nnecessary that lengths of key and column must be the same.\n\nI don't think we need to restore pg_index.haskeytype and syntax\nbecause:\n\n1. This is not suited for multi-key GiST - we need something like this\n for each subkey\n2. Syntax is very ugly and useless - all information (type of key, losiness)\n should be stored somewhere in system tables.\n\nIn such sense,\nwe vote pg_index.haskeytype could be totally removed from current sources.\n\nTom, would you like to implement storage for type of key suitable for\nmulti-key ? It's very important for correct implementation of GiST\nwe're working on. We have implemented multi-key GiST (posted) and\nR-Tree ops for GiST (full implementation, posted with workaround\nfor poly_ops - key is variable length but length is constant\nand equal 36 bytes = sizeof(BOX)+key_length ). Currently we're\nimplementing B-Tree opses but we have problems because while\ntypes of key and column are fixed but length of key is greater than that\nof column type. 
Again, we need index_formtuple which knows type of key !\n\n\n\tRegards,\n\t\tOleg\n\n\n7.0.3 backend/catalog/index.c\n indexForm->indhaskeytype = 0;\n while (attributeList != NIL)\n {\n IndexKey = (IndexElem *) lfirst(attributeList);\n if (IndexKey->typename != NULL)\n {\n indexForm->indhaskeytype = 1;\n break;\n }\n attributeList = lnext(attributeList);\n }\n7.1\n\nindexForm->indhaskeytype = true; /* not actually used anymore */\n\n7.0.3\nbackend/parser/gram.y\n\nindex_elem: attr_name opt_type opt_class\n {\n $$ = makeNode(IndexElem);\n $$->name = $1;\n $$->args = NIL;\n $$->class = $3;\n $$->typename = $2;\n }\n ;\n\n7.1\nindex_elem: attr_name opt_class\n {\n $$ = makeNode(IndexElem);\n $$->name = $1;\n $$->args = NIL;\n $$->class = $2;\n }\n ;\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Tue, 29 May 2001 17:12:20 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "haskeytype and index_formtuple"
},
{
"msg_contents": "Patch fixes memory leak in multi-key GiST is available from\nhttp://www.sai.msu.su/~megera/postgres/gist/\nIt should be applied after original multi-key GiST patch.\nWe have tested it inserting about 300,000 records into database\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 30 May 2001 19:15:23 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Patch for multi-key GiST "
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it withing the next 48 hours.\n\n\n> Patch fixes memory leak in multi-key GiST is available from\n> http://www.sai.msu.su/~megera/postgres/gist/\n> It should be applied after original multi-key GiST patch.\n> We have tested it inserting about 300,000 records into database\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 12:18:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for multi-key GiST"
}
] |
[
{
"msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > I know people who rollback most of their transactions\n> > (actually they use it to emulate temp tables when reporting).\n> \n> Shouldn't they use TEMP tables? -:)\n\nThey probably should.\n\nActually they did it on Oracle, so it shows that it can be done \neven with O-smgr ;)\n\n> > OTOH it is possible to do without rolling back at all as\n> > MySQL folks have shown us ;)\n> \n> Not with SDB tables which support transactions.\n\nMy point was that MySQL was used quite a long time without it \nand still quite many useful applications were produced.\n\nBTW, do you know what strategy is used by BSDDB/SDB for \nrollback/undo ?\n\n---------------\nHannu\n",
"msg_date": "Tue, 29 May 2001 22:07:16 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": true,
"msg_subject": "Re: Plans for solving the VACUUM problem"
},
{
"msg_contents": "> > > Seems overwrite smgr has mainly advantages in terms of\n> > > speed for operations other than rollback.\n> > \n> > ... And rollback is required for < 5% transactions ...\n> \n> This obviously depends on application. \n\nSmall number of aborted transactions was used to show\nuseless of UNDO in terms of space cleanup - that's why\nI use same argument to show usefulness of O-smgr -:)\n\n> I know people who rollback most of their transactions\n> (actually they use it to emulate temp tables when reporting). \n\nShouldn't they use TEMP tables? -:)\n\n> OTOH it is possible to do without rolling back at all as\n> MySQL folks have shown us ;)\n\nNot with SDB tables which support transactions.\n\nVadim\n",
"msg_date": "Tue, 29 May 2001 10:49:12 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": false,
"msg_subject": "RE: Plans for solving the VACUUM problem"
}
] |
[
{
"msg_contents": "> > > So are whole pages stored in rollback segments or just\n> > > the modified data?\n> > \n> > This is implementation dependent. Storing whole pages is\n> > much easy to do, but obviously it's better to store just\n> > modified data.\n> \n> I am not sure it is necessarily better. Seems to be a tradeoff here.\n> pros of whole pages:\n> \ta possible merge with physical log (for first\n> modification of a page after checkpoint\n> \t\tthere would be no overhead compared to current \n> since it is already written now)\n\nUsing WAL as RS data storage is questionable.\n\n> \tin a clever implementation a page already in the\n> \"rollback segment\" might satisfy the \n> \t\tmodification of another row on that page, and \n> thus would not need any additional io.\n\nThis would be possible only if there was no commit (same SCN)\nbetween two modifications.\n\nBut, aren't we too deep on overwriting smgr (O-smgr) implementation?\nIt's doable. It has advantages in terms of IO active transactions\nmust do to follow MVCC. It has drawback in terms of required\ndisk space (and, oh yeh, it's not easy to implement -:)).\nSo, any other opinions about value of O-smgr?\n\nVadim\n",
"msg_date": "Tue, 29 May 2001 10:39:59 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: AW: Plans for solving the VACUUM problem"
}
] |
[
{
"msg_contents": "\nhello all\n\nI don't know what to do...\nthe pg_log file is too big..\nanyone can help me?\n\n\nthanks\n",
"msg_date": "29 May 2001 17:46:31 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "pg_log ??"
},
{
"msg_contents": "gabriel writes:\n\n> I don't know what to do...\n> the pg_log file is too big..\n> anyone can help me?\n\nHow big exactly? What version of PostgreSQL do you have? Has this\ninstallation been under heavy load for a long time?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 29 May 2001 23:03:36 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_log ??"
}
] |
[
{
"msg_contents": "Has it been established to satisfaction that a BSD-licensed gettext\nimplementation is available and working? (Note that we still need\nxgettext and msgmerge from GNU, but this only on the maintainer side.)\n\nAre there any other concerns about the use of the gettext API, the\navailable implementations, or how this will work together with PostgreSQL?\n\nIf Yes and No, respectively, I'd like to put this to use.\n\nBesides the source code changes to mark all translatable strings (which\ncan be reduced if we make use of short cuts like \"all calls to elog() are\ncandidates\"), and eliminate English dependent coding (printf(\"%d row%s\",\nn, n!=1?\"s\":\"\");), most of this will be makefile work in the style of\nMakefile.shlib to take care of building, installing, and maintaining the\nmessage catalogs. Plus a lot of translation work, of course. :-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 29 May 2001 19:48:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Proceeding with gettext"
},
{
"msg_contents": "\nPeter, can you give a little sample of how an elog() call would look in\nthe new system? Thanks.\n\n> Has it been established to satisfaction that a BSD-licensed gettext\n> implementation is available and working? (Note that we still need\n> xgettext and msgmerge from GNU, but this only on the maintainer side.)\n> \n> Are there any other concerns about the use of the gettext API, the\n> available implementations, or how this will work together with PostgreSQL?\n> \n> If Yes and No, respectively, I'd like to put this to use.\n> \n> Besides the source code changes to mark all translatable strings (which\n> can be reduced if we make use of short cuts like \"all calls to elog() are\n> candidates\"), and eliminate English dependent coding (printf(\"%d row%s\",\n> n, n!=1?\"s\":\"\");), most of this will be makefile work in the style of\n> Makefile.shlib to take care of building, installing, and maintaining the\n> message catalogs. Plus a lot of translation work, of course. :-)\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 29 May 2001 20:54:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proceeding with gettext"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Peter, can you give a little sample of how an elog() call would look in\n> the new system? Thanks.\n\nNo change.\n\nelog() would call gettext() internally, so the code change there is\nlocalized to elog.c. For other function calls that contain translatable\nmessage strings it looks like (example from psql):\n\n fprintf(stderr, gettext(\"Invalid command \\\\%s. Try \\\\? for help.\\n\"), my_line);\n\nSo obviously it would pay off if the communication of a program was\nencapsulated in a limited number of functions.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 30 May 2001 17:19:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Proceeding with gettext"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Peter, can you give a little sample of how an elog() call would look in\n> > the new system? Thanks.\n> \n> No change.\n> \n> elog() would call gettext() internally, so the code change there is\n> localized to elog.c. For other function calls that contain translatable\n> message strings it looks like (example from psql):\n> \n> fprintf(stderr, gettext(\"Invalid command \\\\%s. Try \\\\? for help.\\n\"), my_line);\n> \n> So obviously it would pay off if the communication of a program was\n> encapsulated in a limited number of functions.\n\nSounds great!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 11:21:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proceeding with gettext"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Peter, can you give a little sample of how an elog() call would look in\n>> the new system? Thanks.\n\n> No change.\n\nWouldn't there need to be changes in the %-escape usage in elog message\nstrings?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2001 12:32:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proceeding with gettext "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> >> Peter, can you give a little sample of how an elog() call would look in\n> >> the new system? Thanks.\n>\n> > No change.\n>\n> Wouldn't there need to be changes in the %-escape usage in elog message\n> strings?\n\nNo, for the programmer nothing changes.\n\nAre you referring to the possibility that the parameters need to appear in\na different order in the translated message? This can be handled if the\ntranslator uses the %1$s style place holder in the translated string.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 30 May 2001 19:04:00 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Proceeding with gettext "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Wouldn't there need to be changes in the %-escape usage in elog message\n>> strings?\n\n> Are you referring to the possibility that the parameters need to appear in\n> a different order in the translated message? This can be handled if the\n> translator uses the %1$s style place holder in the translated string.\n\nOh, I see: the translated message can use that form, but the original\nneed not. Okay.\n\nAre you planning to tackle any of the other elog-related stuff that's\nbeen discussed in the past? (Automatic file/linenumber information,\nrelabeling the set of severity codes, that sort of thing?) Sooner or\nlater we're going to have to bite the bullet and edit every elog\ninstance in the system. I'm just wondering if that flag day is implicit\nin this change, or if we can put it off awhile longer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2001 13:14:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proceeding with gettext "
},
{
"msg_contents": "Tom Lane writes:\n\n> Are you planning to tackle any of the other elog-related stuff that's\n> been discussed in the past? (Automatic file/linenumber information,\n> relabeling the set of severity codes, that sort of thing?)\n\nPlanning... I don't think I'm interested in the file/line information,\ngiven the cruftyness it would require. I can find that information with\nfind | xargs grep just fine. Error codes are high on my agenda, but the\nspecification still needs to be finalized.\n\n> Sooner or later we're going to have to bite the bullet and edit every\n> elog instance in the system.\n\nI don't think I want to do that.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 2 Jun 2001 16:47:48 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Proceeding with gettext "
}
] |
[
{
"msg_contents": "At 10:49 AM 5/29/01 -0700, Mikheev, Vadim wrote:\n\n>> I know people who rollback most of their transactions\n>> (actually they use it to emulate temp tables when reporting). \n>\n>Shouldn't they use TEMP tables? -:)\n\nWhich is a very good point. Pandering to poor practice at the\nexpense of good performance for better-designed applications\nisn't a good idea.\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Tue, 29 May 2001 10:55:33 -0700",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem"
}
] |
[
{
"msg_contents": "Hi there,\n\nI see that pgsql replication is on TODO list. I wonder whether there is related sites about this issue or some developed resources. \n\nThanks.\n\nRuke Wang\nSoftware Engineer\nServgate Technologies, Inc.\n(408)324-5717\n\n\n\n\n\n\n\nHi there,\n \nI see that pgsql replication is on TODO list. \nI wonder whether there is related sites about this issue or some developed \nresources. \n \nThanks.\n \n\nRuke Wang\nSoftware Engineer\nServgate Technologies, Inc.\n(408)324-5717",
"msg_date": "Tue, 29 May 2001 11:36:13 -0700",
"msg_from": "\"Ruke Wang\" <ruke@servgate.com>",
"msg_from_op": true,
"msg_subject": "database synchronization"
}
] |
[
{
"msg_contents": "> > > OTOH it is possible to do without rolling back at all as\n> > > MySQL folks have shown us ;)\n> > \n> > Not with SDB tables which support transactions.\n> \n> My point was that MySQL was used quite a long time without it \n> and still quite many useful applications were produced.\n\nAnd my point was that needless to talk about rollbacks in\nnon-transaction system and in transaction system one has to\nimplement rollback somehow.\n\n> BTW, do you know what strategy is used by BSDDB/SDB for \n> rollback/undo ?\n\nAFAIR, they use O-smgr => UNDO is required.\n\nVadim\n",
"msg_date": "Tue, 29 May 2001 13:37:03 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Plans for solving the VACUUM problem"
}
] |
[
{
"msg_contents": "Are these columns in pg_class:\n\nrelukeys | relfkeys | relhaspkey\n\nunused or what?\n\nChris\n\n",
"msg_date": "Wed, 30 May 2001 16:59:40 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Unused pg_class columns"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Are these columns in pg_class:\n> relukeys | relfkeys | relhaspkey\n> unused or what?\n\nThey may be unused by the backend, but that doesn't mean that\napplications don't look at them. I find references to relhaspkey\nin pgaccess, for example.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2001 10:17:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns "
},
{
"msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Are these columns in pg_class:\n> > relukeys | relfkeys | relhaspkey\n> > unused or what?\n> \n> They may be unused by the backend, but that doesn't mean that\n> applications don't look at them. I find references to relhaspkey\n> in pgaccess, for example.\n\nSo pgaccess just reads it but doesn't write it, right? Seems we should\nmark this in the code so we can delete them some day, and perhaps remove\nthe refrence to it in pgaccess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 10:33:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns"
},
{
"msg_contents": "\nI have marked these columns as unused in pg_class.h.\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Are these columns in pg_class:\n> > relukeys | relfkeys | relhaspkey\n> > unused or what?\n> \n> They may be unused by the backend, but that doesn't mean that\n> applications don't look at them. I find references to relhaspkey\n> in pgaccess, for example.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 10:39:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns"
},
{
"msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Are these columns in pg_class:\n> > relukeys | relfkeys | relhaspkey\n> > unused or what?\n> \n> They may be unused by the backend, but that doesn't mean that\n> applications don't look at them. I find references to relhaspkey\n> in pgaccess, for example.\n\nI removed the reference in pgaccess. After a few releases of pgaccess\nwe can remove the column.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 10:43:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have marked these columns as unused in pg_class.h.\n\nKeep your hands off 'em!\n\nThere are other purposes for system catalogs besides the internal\nconvenience of the backend, you know. \"Unused at the moment by the\nbackend\" does not mean \"removable\" --- you have no way to know what\nuser code you may break by removing such info.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2001 10:49:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I removed the reference in pgaccess. After a few releases of pgaccess\n> we can remove the column.\n\nPut it back please. Or have you become the unilateral arbiter of what\nfunctions pgaccess has?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2001 10:51:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have marked these columns as unused in pg_class.h.\n> \n> Keep your hands off 'em!\n> \n> There are other purposes for system catalogs besides the internal\n> convenience of the backend, you know. \"Unused at the moment by the\n> backend\" does not mean \"removable\" --- you have no way to know what\n> user code you may break by removing such info.\n\nI didn't remove them. I marked them as unused, like other columns\nalready marked as unused in the file.\n\nFor example:\n\n int2 relrefs; /* # of references to this rel (not used) */\n\nwas already marked as \"(not used)\".\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 10:52:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I removed the reference in pgaccess. After a few releases of pgaccess\n> > we can remove the column.\n> \n> Put it back please. Or have you become the unilateral arbiter of what\n> functions pgaccess has?\n\nPgaccess is referencing system columns that are never set. What\npossible value could they be except to confuse users, and this column\nwas visible in pgaccess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 10:55:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Pgaccess is referencing system columns that are never set.\n\nI wasn't aware that our standard procedure upon finding a bug was to rip\nout the feature, rather than fixing the bug ... this could make life a\n*lot* simpler.\n\nIf relhaspkey isn't set by table creation, then let's set it. Easy\nenough, and it seems like a clearly useful column. As for the other\ncolumns at issue, those are not ancient history: they were added in\nrelease 6.4, by Vadim according to the CVS logs. Perhaps you should ask\nhim what they are intended for, rather than assuming that they are fair\ngame for removal.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2001 11:37:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns "
},
{
"msg_contents": "Tom Lane writes:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I removed the reference in pgaccess. After a few releases of pgaccess\n> > we can remove the column.\n>\n> Put it back please. Or have you become the unilateral arbiter of what\n> functions pgaccess has?\n\nIf pgaccess only reads the column and the backend doesn't use it, it seems\nreasonable to remove the reference. It looks like it was a mistake to\nbegin with.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 30 May 2001 17:39:42 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Pgaccess is referencing system columns that are never set.\n> \n> I wasn't aware that our standard procedure upon finding a bug was to rip\n> out the feature, rather than fixing the bug ... this could make life a\n> *lot* simpler.\n\nAttached is a patch of all my pgaccess changes this morning. If someone\nwants to code the hasprimary test, feel free. You can look at psql to\nsee new code that does tests for primary keys. It was recently\ncommitted to CVS.\n\nIf people would prefer me to comment out the primary key display rather\nthan remove it, I can do that.\n\n> If relhaspkey isn't set by table creation, then let's set it. Easy\n> enough, and it seems like a clearly useful column. As for the other\n> columns at issue, those are not ancient history: they were added in\n> release 6.4, by Vadim according to the CVS logs. Perhaps you should ask\n> him what they are intended for, rather than assuming that they are fair\n> game for removal.\n\nI only marked as \"not used\", not as \"remove me\". Seems clear enough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n? pgaccess\nIndex: lib/tables.tcl\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pgaccess/lib/tables.tcl,v\nretrieving revision 1.7\nretrieving revision 1.9\ndiff -c -r1.7 -r1.9\n*** lib/tables.tcl\t2001/02/26 05:15:48\t1.7\n--- lib/tables.tcl\t2001/05/30 15:37:38\t1.9\n***************\n*** 44,50 ****\n \tset PgAcVar(tblinfo,isunique) {}\n \tset PgAcVar(tblinfo,isclustered) {}\n \tset PgAcVar(tblinfo,indexfields) {}\n! 
\twpg_select $CurrentDB \"select attnum,attname,typname,attlen,attnotnull,atttypmod,usename,usesysid,pg_class.oid,relpages,reltuples,relhaspkey,relhasrules,relacl from pg_class,pg_user,pg_attribute,pg_type where (pg_class.relname='$PgAcVar(tblinfo,tablename)') and (pg_class.oid=pg_attribute.attrelid) and (pg_class.relowner=pg_user.usesysid) and (pg_attribute.atttypid=pg_type.oid) order by attnum\" rec {\n \t\tset fsize $rec(attlen)\n \t\tset fsize1 $rec(atttypmod)\n \t\tset ftype $rec(typname)\n--- 44,50 ----\n \tset PgAcVar(tblinfo,isunique) {}\n \tset PgAcVar(tblinfo,isclustered) {}\n \tset PgAcVar(tblinfo,indexfields) {}\n! \twpg_select $CurrentDB \"select attnum,attname,typname,attlen,attnotnull,atttypmod,usename,usesysid,pg_class.oid,relpages,reltuples,relhasrules,relacl from pg_class,pg_user,pg_attribute,pg_type where (pg_class.relname='$PgAcVar(tblinfo,tablename)') and (pg_class.oid=pg_attribute.attrelid) and (pg_class.relowner=pg_user.usesysid) and (pg_attribute.atttypid=pg_type.oid) order by attnum\" rec {\n \t\tset fsize $rec(attlen)\n \t\tset fsize1 $rec(atttypmod)\n \t\tset ftype $rec(typname)\n***************\n*** 68,78 ****\n \t\tset PgAcVar(tblinfo,numtuples) $rec(reltuples)\n \t\tset PgAcVar(tblinfo,numpages) $rec(relpages)\n \t\tset PgAcVar(tblinfo,permissions) $rec(relacl)\n- \t\tif {$rec(relhaspkey)==\"t\"} {\n- \t\t\tset PgAcVar(tblinfo,hasprimarykey) [intlmsg Yes]\n- \t\t} else {\n- \t\t\tset PgAcVar(tblinfo,hasprimarykey) [intlmsg No]\n- \t\t}\n \t\tif {$rec(relhasrules)==\"t\"} {\n \t\t\tset PgAcVar(tblinfo,hasrules) [intlmsg Yes]\n \t\t} else {\n--- 68,73 ----\n***************\n*** 80,86 ****\n \t\t}\n \t}\n \tset PgAcVar(tblinfo,indexlist) {}\n! 
\twpg_select $CurrentDB \"select oid,indexrelid from pg_index where (pg_class.relname='$PgAcVar(tblinfo,tablename)') and (pg_class.oid=pg_index.indrelid)\" rec {\n \t\tlappend PgAcVar(tblinfo,indexlist) $rec(oid)\n \t\twpg_select $CurrentDB \"select relname from pg_class where oid=$rec(indexrelid)\" rec1 {\n \t\t\t.pgaw:TableInfo.f2.fl.ilb insert end $rec1(relname)\n--- 75,81 ----\n \t\t}\n \t}\n \tset PgAcVar(tblinfo,indexlist) {}\n! \twpg_select $CurrentDB \"select pg_index.oid,indexrelid from pg_index, pg_class where (pg_class.relname='$PgAcVar(tblinfo,tablename)') and (pg_class.oid=pg_index.indrelid)\" rec {\n \t\tlappend PgAcVar(tblinfo,indexlist) $rec(oid)\n \t\twpg_select $CurrentDB \"select relname from pg_class where oid=$rec(indexrelid)\" rec1 {\n \t\t\t.pgaw:TableInfo.f2.fl.ilb insert end $rec1(relname)\n***************\n*** 1723,1735 ****\n \t\t-anchor w -borderwidth 1 \\\n \t\t-relief sunken -text {} -textvariable PgAcVar(tblinfo,ownerid) \\\n \t\t-width 200 \n- \tlabel $base.f0.fi.l9 \\\n- \t\t-borderwidth 0 \\\n- \t\t-relief raised -text [intlmsg {Has primary key ?}]\n- \tlabel $base.f0.fi.l10 \\\n- \t\t-anchor w -borderwidth 1 \\\n- \t\t-relief sunken -text {} \\\n- \t\t-textvariable PgAcVar(tblinfo,hasprimarykey) -width 200 \n \tlabel $base.f0.fi.l11 \\\n \t\t-borderwidth 0 \\\n \t\t-relief raised -text [intlmsg {Has rules ?}]\n--- 1718,1723 ----\n***************\n*** 1893,1903 ****\n \tgrid $base.f0.fi.l8 \\\n \t\t-in .pgaw:TableInfo.f0.fi -column 1 -row 3 -columnspan 1 -rowspan 1 -padx 2 \\\n \t\t-pady 2 \n- \tgrid $base.f0.fi.l9 \\\n- \t\t-in .pgaw:TableInfo.f0.fi -column 0 -row 4 -columnspan 1 -rowspan 1 -sticky w \n- \tgrid $base.f0.fi.l10 \\\n- \t\t-in .pgaw:TableInfo.f0.fi -column 1 -row 4 -columnspan 1 -rowspan 1 -padx 2 \\\n- \t\t-pady 2 \n \tgrid $base.f0.fi.l11 \\\n \t\t-in .pgaw:TableInfo.f0.fi -column 0 -row 5 -columnspan 1 -rowspan 1 -sticky w \n \tgrid $base.f0.fi.l12 \\\n--- 1881,1886 ----",
"msg_date": "Wed, 30 May 2001 11:42:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unused pg_class columns"
}
] |
[
{
"msg_contents": "\n> > > > So are whole pages stored in rollback segments or just\n> > > > the modified data?\n> > > \n> > > This is implementation dependent. Storing whole pages is\n> > > much easy to do, but obviously it's better to store just\n> > > modified data.\n> > \n> > I am not sure it is necessarily better. Seems to be a tradeoff here.\n> > pros of whole pages:\n> > \ta possible merge with physical log (for first\n> > modification of a page after checkpoint\n> > \t\tthere would be no overhead compared to current \n> > since it is already written now)\n> \n> Using WAL as RS data storage is questionable.\n\nNo, I meant the other way around. Move the physical log pages away from WAL \nfiles to the \"rollback segment\" (imho \"snapshot area\" would be a better name)\n\n> > \tin a clever implementation a page already in the\n> > \"rollback segment\" might satisfy the \n> > \t\tmodification of another row on that page, and \n> > thus would not need any additional io.\n> \n> This would be possible only if there was no commit (same SCN)\n> between two modifications.\n\nI don't think someone else's commit matters unless it touches the same page.\nIn that case a reader would possibly need to chain back to an older version \ninside the snapshot area, and then it gets complicated even in the whole page \ncase. A good concept could probably involve both whole page and change\nonly, and let the optimizer decide what to do.\n\n> But, aren't we too deep on overwriting smgr (O-smgr) implementation?\n\nYes, but some understanding of the possibilities needs to be sorted out \nto allow good decisions, no ?\n\nAndreas\n",
"msg_date": "Wed, 30 May 2001 12:18:07 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: Plans for solving the VACUUM problem"
},
{
"msg_contents": "\nI have reluctantly added this UNDO/VACUUM thread to TODO.detail. People\ncan review the discussion via a link on the TODO page or in CVS. \nWhenever we resolve this issue, I will gladly remove these emails.\n\n\n> \n> > > > > So are whole pages stored in rollback segments or just\n> > > > > the modified data?\n> > > > \n> > > > This is implementation dependent. Storing whole pages is\n> > > > much easy to do, but obviously it's better to store just\n> > > > modified data.\n> > > \n> > > I am not sure it is necessarily better. Seems to be a tradeoff here.\n> > > pros of whole pages:\n> > > \ta possible merge with physical log (for first\n> > > modification of a page after checkpoint\n> > > \t\tthere would be no overhead compared to current \n> > > since it is already written now)\n> > \n> > Using WAL as RS data storage is questionable.\n> \n> No, I meant the other way around. Move the physical log pages away from WAL \n> files to the \"rollback segment\" (imho \"snapshot area\" would be a better name)\n> \n> > > \tin a clever implementation a page already in the\n> > > \"rollback segment\" might satisfy the \n> > > \t\tmodification of another row on that page, and \n> > > thus would not need any additional io.\n> > \n> > This would be possible only if there was no commit (same SCN)\n> > between two modifications.\n> \n> I don't think someone else's commit matters unless it touches the same page.\n> In that case a reader would possibly need to chain back to an older version \n> inside the snapshot area, and then it gets complicated even in the whole page \n> case. 
A good concept could probably involve both whole page and change\n> only, and let the optimizer decide what to do.\n> \n> > But, aren't we too deep on overwriting smgr (O-smgr) implementation?\n> \n> Yes, but some understanding of the possibilities needs to be sorted out \n> to allow good decisions, no ?\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 16:16:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Plans for solving the VACUUM problem"
}
] |
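The tradeoff discussed in this thread, whole pages versus only the changed rows in the "rollback segment" (snapshot area), can be illustrated with a toy model. This is purely a hypothetical sketch, not PostgreSQL source: a page's before-image is written only on its first modification under a given commit sequence (the "SCN" above), so a second modification of the same page needs no additional I/O unless a commit intervened.

```python
# Toy model of a page-level "snapshot area" (before-image store).
# A page's before-image is written on its first modification under a
# given commit sequence; later modifications of the same page under the
# same sequence find the image already present and cost no extra I/O.

class SnapshotArea:
    def __init__(self):
        self.images = {}   # page_id -> list of (commit_seq, page_bytes)
        self.writes = 0    # how many before-images were physically written

    def modify(self, page_id, old_bytes, commit_seq):
        versions = self.images.setdefault(page_id, [])
        # Write a new before-image only if none exists yet for this
        # commit sequence (same "SCN" as in the discussion above).
        if not versions or versions[-1][0] != commit_seq:
            versions.append((commit_seq, old_bytes))
            self.writes += 1

area = SnapshotArea()
area.modify(7, b"v0", commit_seq=1)  # first touch: image written
area.modify(7, b"v0", commit_seq=1)  # same page, same SCN: no new write
area.modify(7, b"v1", commit_seq=2)  # after a commit: new image needed
```

In the "change only" variant the stored unit would be a row image instead of a whole page; the bookkeeping, and the chain of older versions a reader may need to follow, stay the same.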
[
{
"msg_contents": "Could I ask a huge favour of the experienced PostgreSQL hackers to make a\nsimple page on the postgreSQL.org website listing TODO items that newbie\nhackers can get stuck into? I was thinking of doing elog() myself, but then\nagain, I'm not experienced enough in PostgreSQL to do something that the\ndevelopers would actually like.\n\nI know this is a big ask, but once it's done it's done, and we can all get on\nwith the job. Not trying to criticize anyone, this is not a flame! Just\nasking for a place for newbies to start hacking.\n\n:-)\n\nAppreciated very much.\n\n--\nJames\n\n\n",
"msg_date": "Wed, 30 May 2001 14:03:58 +0100",
"msg_from": "\"James Buchanan\" <jamesb@northnet.com.au>",
"msg_from_op": true,
"msg_subject": "place for newbie postgresql hackers to work"
},
{
"msg_contents": "> I know this is a big ask, but once it's done it's done, and we can \n> all get on\n> with the job. Not trying to criticize anyone, this is not a flame! Just\n> asking for a place for newbies to start hacking.\n\nWhy not start with Thomas Swan's earlier posting?:\n\n> I just got bit by the identifier name is too long and will be truncated \n> limitation in Postgresql.\n> \n> AFIAA there is a limit of 64 characters for identifiers (names of \n> tables, sequences, indexes, etc...)\n> \n> I had just started to get in the habit of using serial data types until \n> I made two tables with long names and the automatic sequence names that \n> were generated conflicted, *ouch* ...\n> \n> Is there the possibility of a name conflict resolution during the table \n> creation phase similar to \"the name I want to assign is already taken, \n> so I'll pick a different name...\" on the serial data type?\n\nChris\n",
"msg_date": "Wed, 6 Jun 2001 12:42:41 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: place for newbie postgresql hackers to work"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > I know this is a big ask, but once it's done it's done, and we can\n> > all get on\n> > with the job. Not trying to criticize anyone, this is not a flame! Just\n> > asking for a place for newbies to start hacking.\n>\n> Why not start with Thomas Swan's earlier posting?:\n\n Because first, exactly that would require a good concept and\n (at least I expect it to) a fair amount of system catalog\n changes - not the best *newbie* starter.\n\n I like the idea though. Let the experienced developers\n identify *legwork* TODO items.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 6 Jun 2001 11:13:02 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: place for newbie postgresql hackers to work"
},
{
"msg_contents": "James Buchanan writes:\n\n> Could I ask a huge favour of the experienced PostgreSQL hackers to make a\n> simple page on the postgreSQL.org website listing TODO items that newbie\n> hackers can get stuck into?\n\nIf there were such places then everyone would be working there and soon\nthere wouldn't be anything left to do there. ;-)\n\nTraditionally, people work on things that they personally like to see\nimproved. Reading the mailing lists will point out areas that are\ncurrently little maintained, and also things that people often complain\nabout but which are not easy to fix.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 6 Jun 2001 17:25:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: place for newbie postgresql hackers to work"
},
{
"msg_contents": "> > Why not start with Thomas Swan's earlier posting?:\n>\n> Because first, exactly that would require a good concept and\n> (at least I expect it to) a fair amount of system catalog\n> changes - not the best *newbe* starter.\n>\n> I like the idea though. Let the experienced developers\n> identify *legwork* TODO items.\n\n From what I can see - there are none. It took me months of reading the\nPostgreSQL source code before I could muck with it. (And I still just did\nit for fun!) Even the 'basic' stuff like psql and pg_dump modifications\nrequire a knowledge of how foreign keys are stored...\n\nChris\n\n",
"msg_date": "Thu, 7 Jun 2001 10:14:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: place for newbie postgresql hackers to work"
},
{
"msg_contents": "> > I like the idea though. Let the experienced developers\n> > identify *legwork* TODO items.\n> \n> From what I can see - there are none. It took me months of reading the\n> PostgreSQL source code before I could muck with it. (And I still just did\n> it for fun!) Even the 'basic' stuff like psql and pg_dump modifications\n> require a knowledge of how foreign keys are stored...\n\nGood point. Many of the trivial changes spin into difficult territory. \nWe can suggest items if people ask us.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 7 Jun 2001 11:14:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: place for newbie postgresql hackers to work"
},
{
"msg_contents": "PostgreSQL Hackers,\n\nI would like to ask, where can newbie hackers (apprentices) be of use in the\npgsql project?\n\nThanks Bruce for that work you did showing the TCP/IP diagramme, etc... I\ndid learn something! Now, the task of self learning from there. For the\nserver, how to learn the format of PGSQL files? How is data stored? How is\nit retrieved so darn quickly? How are index files constructed? So much\nglorious knowledge to be had. Source code: start with [...................]\nC code file and branch out from there. Please fill in the blanks for me? I\nneed the start to discover for myself. Learning how to fish is better than\nbeing handed a fish.\n\nWith Thanks,\nJames\n\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nCc: \"Jan Wieck\" <JanWieck@yahoo.com>; \"James Buchanan\"\n<jamesb@northnet.com.au>; \"PostgreSQL-development\"\n<pgsql-hackers@postgresql.org>\nSent: Thursday, June 07, 2001 4:14 PM\nSubject: Re: [HACKERS] place for newbie postgresql hackers to work\n\n\n> > > I like the idea though. Let the experienced developers\n> > > identify *legwork* TODO items.\n> >\n> > From what I can see - there are none. It took me months of reading the\n> > PostgreSQL source code before I could muck with it. (And I still just\ndid\n> > it for fun!) Even the 'basic' stuff like psql and pg_dump modifications\n> > require a knowledge of how foreign keys are stored...\n>\n> Good point. Many of the trivial changes spin into difficult territory.\n> We can suggest items if people ask us.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Fri, 8 Jun 2001 02:28:27 +0100",
"msg_from": "\"James Buchanan\" <jamesb@northnet.com.au>",
"msg_from_op": true,
"msg_subject": "Re: place for newbie postgresql hackers to work"
}
] |
[
{
"msg_contents": "\nTom, with all the work you've been doing inside planner and optimizer, has\nthere been anything done for 7.1.2 to make how a query is written cause\nthe backend to be more intelligent?\n\nI'm playing with a query that I just don't like, since its taking ~3min to\nrun ...\n\nIt started as:\n\nEXPLAIN SELECT distinct s.gid, s.created, count(i.title) AS images\n FROM status s LEFT JOIN images i ON (s.gid = i.gid AND i.active), personal_data pd, relationship_wanted rw\n WHERE s.active AND s.status != 0\n AND (s.gid = pd.gid AND pd.gender = 0)\n AND (s.gid = rw.gid AND rw.gender = 1 )\n AND ( ( age('now', pd.dob) > '26 years' ) AND ( age('now', pd.dob) < '46 years' ) )\n AND country IN ( 'US' )\n GROUP BY s.gid,s.created\n ORDER BY images desc;\nNOTICE: QUERY PLAN:\n\nUnique (cost=2365.87..2365.88 rows=1 width=37)\n -> Sort (cost=2365.87..2365.87 rows=1 width=37)\n -> Aggregate (cost=2365.86..2365.86 rows=1 width=37)\n -> Group (cost=2365.86..2365.86 rows=1 width=37)\n -> Sort (cost=2365.86..2365.86 rows=1 width=37)\n -> Nested Loop (cost=167.62..2365.85 rows=1 width=37)\n -> Nested Loop (cost=0.00..600.30 rows=1 width=8)\n -> Index Scan using personal_data_gender on personal_data pd (cost=0.00..590.79 rows=4 width=4)\n -> Index Scan using relationship_wanted_gid on relationship_wanted rw (cost=0.00..2.12 rows=1 width=4)\n -> Materialize (cost=1508.62..1508.62 rows=17128 width=29)\n -> Hash Join (cost=167.62..1508.62 rows=17128 width=29)\n -> Seq Scan on status s (cost=0.00..566.24 rows=17128 width=12)\n -> Hash (cost=149.70..149.70 rows=7170 width=17)\n -> Seq Scan on images i (cost=0.00..149.70 rows=7170 width=17)\n\nEXPLAIN\n\nAnd, after playing a bit, I've got it to:\n\n2EXPLAIN SELECT distinct s.gid, s.created, count(i.title) AS images\n FROM status s LEFT JOIN images i ON (s.gid = i.gid AND i.active), relationship_wanted rw\n WHERE s.active AND s.status != 0\n AND EXISTS ( SELECT gid\n FROM relationship_wanted\n WHERE gender = 1 )\n AND EXISTS ( 
SELECT gid\n FROM personal_data\n WHERE gender = 0\n AND ( ( age('now', dob) > '26 years' ) AND ( age('now', dob) < '46 years' ) )\n AND country IN ( 'US' ) )\n GROUP BY s.gid,s.created\n ORDER BY images desc;\nNOTICE: QUERY PLAN:\n\nUnique (cost=313742358.09..314445331.35 rows=9372977 width=29)\n InitPlan\n -> Seq Scan on relationship_wanted (cost=0.00..1006.03 rows=1446 width=4)\n -> Index Scan using personal_data_gender on personal_data (cost=0.00..590.79 rows=4 width=4)\n -> Sort (cost=313742358.09..313742358.09 rows=93729769 width=29)\n -> Aggregate (cost=285211774.88..292241507.54 rows=93729769 width=29)\n -> Group (cost=285211774.88..289898263.32 rows=937297688 width=29)\n -> Sort (cost=285211774.88..285211774.88 rows=937297688 width=29)\n -> Result (cost=167.62..24262791.77 rows=937297688 width=29)\n -> Nested Loop (cost=167.62..24262791.77 rows=937297688 width=29)\n -> Hash Join (cost=167.62..1508.62 rows=17128 width=29)\n -> Seq Scan on status s (cost=0.00..566.24 rows=17128 width=12)\n -> Hash (cost=149.70..149.70 rows=7170 width=17)\n -> Seq Scan on images i (cost=0.00..149.70 rows=7170 width=17)\n -> Seq Scan on relationship_wanted rw (cost=0.00..869.22 rows=54722 width=0)\n\nEXPLAIN\n\nNot much of an improvement ...\n\nThe 'personal_data' EXISTS clause:\n\nSELECT gid\n FROM personal_data\n WHERE gender = 0\n AND ( ( age('now', dob) > '26 years' ) AND ( age('now', dob) < '46 years' ) )\n AND country IN ( 'US' ) ;\n\nNOTICE: QUERY PLAN:\n\nIndex Scan using personal_data_gender on personal_data (cost=0.00..590.79 rows=4 width=4)\n\nEXPLAIN\n\nreturns 1893 rows, while status contains 26260 rows ... 
status and\npersonal_data have a 1-to-1 relationship, so out of 26260 rows in status,\n*max* I'm ever going to deal with are the 1893 that are found in\npersonal_data ...\n\nso, what I'd like to do is have the subselect on personal_data used first,\nso as to reduce the set of data that the rest of the query will work only\non those 1893 gid's, instead of all 26260 of them ...\n\nMake sense?\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Wed, 30 May 2001 14:55:48 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "intelligence in writing a query ..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> 2EXPLAIN SELECT distinct s.gid, s.created, count(i.title) AS images\n> FROM status s LEFT JOIN images i ON (s.gid = i.gid AND i.active), relationship_wanted rw\n> WHERE s.active AND s.status != 0\n> AND EXISTS ( SELECT gid\n> FROM relationship_wanted\n> WHERE gender = 1 )\n> AND EXISTS ( SELECT gid\n> FROM personal_data\n> WHERE gender = 0\n> AND ( ( age('now', dob) > '26 years' ) AND ( age('now', dob) < '46 years' ) )\n> AND country IN ( 'US' ) )\n> GROUP BY s.gid,s.created\n> ORDER BY images desc;\n\nI don't understand what you're trying to do here. The inner SELECTs\naren't dependent on anything in the outer query, so what are they for?\n\n> ... status and\n> personal_data have a 1-to-1 relationship,\n\nThen why have two tables? Merge them into one table and save yourself a\njoin.\n\nAlso, since status.gid is (I assume) unique, what's the use of the\nDISTINCT clause at the top level? Seems like that's costing you\na useless sort & unique pass ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2001 17:35:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: intelligence in writing a query ... "
},
{
"msg_contents": "At 05:35 PM 5/30/01 -0400, Tom Lane wrote:\n\n>Also, since status.gid is (I assume) unique, what's the use of the\n>DISTINCT clause at the top level? Seems like that's costing you\n>a useless sort & unique pass ...\n\n>> 2EXPLAIN SELECT distinct s.gid, s.created, count(i.title) AS images\n...\n>> GROUP BY s.gid,s.created\n\nHe's already paying for a sort due to the GROUP BY but of course that\nmakes the DISTINCT meaningless since s.gid and s.created are already\ngrouped...\n\n\n\n- Don Baccus, Portland OR <dhogaza@pacifier.com>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n",
"msg_date": "Wed, 30 May 2001 15:53:36 -0700",
"msg_from": "Don Baccus <dhogaza@pacifier.com>",
"msg_from_op": false,
"msg_subject": "Re: intelligence in writing a query ... "
}
] |
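Tom's objection can be seen in miniature: an EXISTS whose subquery never references the outer row is a constant test, so it keeps or discards every outer row at once, unlike the original correlated condition (s.gid = pd.gid), which restricts the result row by row. A small sketch with made-up data, not the schema above:

```python
# status and personal_data rows as tuples -- invented example data.
status = [(1, "a"), (2, "b"), (3, "c")]        # (gid, created)
personal_data = [(1, 0), (3, 0)]               # (gid, gender); no gid 2

# Correlated form: EXISTS (... WHERE pd.gid = s.gid AND pd.gender = 0)
correlated = [s for s in status
              if any(pd[0] == s[0] and pd[1] == 0 for pd in personal_data)]

# Uncorrelated form: EXISTS (... WHERE gender = 0) mentions nothing from
# the outer query, so it is the same constant test for every outer row.
subquery_nonempty = any(pd[1] == 0 for pd in personal_data)
uncorrelated = [s for s in status if subquery_nonempty]
```

The correlated form returns only gids 1 and 3; the uncorrelated rewrite returns all three status rows, which is why the rewritten query above still joins every row instead of shrinking the set first.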
[
{
"msg_contents": "Hi,\n\nI need to implement a cache for query plans as part of my BSc thesis. Does\nanybody know what happened to Karel Zak's patch?\n\nI'm also looking for some comments & tips about how to implement a cache for\nquery plans and how to deal with the implementation of shared memory in\nPSQL.\n\nGreetings,\nRoberto\n\nPS: Sorry for my english :(\n\n-----------\nFirst they ignore you.\nThen they laugh at you.\nThen they fight you.\nThen you win.\n\nMahatma Gandhi\n\n\n",
"msg_date": "Wed, 30 May 2001 15:00:53 -0300",
"msg_from": "\"Roberto Abalde\" <roberto.abalde@galego21.org>",
"msg_from_op": true,
"msg_subject": "Cache for query plans"
},
{
"msg_contents": "Roberto Abalde wrote:\n> Hi,\n>\n> I need to implement a cache for query plans as part of my BSc thesis. Does\n> anybody know what happened to Karel Zak's patch?\n\n Dunno.\n\n> I'm also looking for some comments & tips about how to implement a cache for\n> query plans and how to deal with the implementation of shared memory in\n> PSQL.\n\n Query trees and -plans are only handled on the backend side.\n And don't forget that dealing with such a thing on the client\n side, which almost runs under the client's memory control, opens\n a can of worms with respect to security and other issues.\n Plus it's not a psql only thing. If you want applications to\n benefit from it, it has to be implemented on the libpq level.\n\n That said, general purpose query caching using shared memory\n must at least run the parser on each query string. It could\n then build some hash key based on the querytree node\n structure and hold query trees and -plans in shared memory.\n If it finds a query with the same key in its cache, it'll\n take a closer look if the cached and actual query only differ\n in const nodes and instead of planning/optimizing again it'll\n just use the cached one. A query shouldn't make it into the\n cache just at first occurrence, but its hash key does and is\n subsequently counted up. Combined with some ageing and the\n rest of required herbs it'll serve a good meal for some types\n of applications. 
Thus, it has to be a per database\n configurable feature.\n\n Explicit PREPARE and EXECUTE statements (or whatever keywords\n actually used) are IMHO better candidates for a per session\n (backend) non-shared implementation, because how could\n another client be sure that a given query identifier actually\n does or doesn't exist and what querystring is/should be\n associated with it?\n\n> Greetings,\n> Roberto\n>\n> PS: Sorry for my english :(\n\n What's wrong with it?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 30 May 2001 15:11:48 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Cache for query plans"
},
{
"msg_contents": "On Wed, May 30, 2001 at 03:00:53PM -0300, Roberto Abalde wrote:\n> Hi,\n> \n> I need to implement a cache for query plans as part of my BSc thesis. Does\n> anybody know what happened to Karel Zak's patch?\n> \n\n\n Hi,\n\n\n my patch is on my ftp site and nobody is working on it, but I think it's a\ngood starting point for further work. I'm not sure about merging this\nexperimental (but usable) patch into the official sources. For example, Jan\nhas a more complex idea about the query plan cache ... but first we must\nsolve some sub-problems, like memory management in shared memory that is\ntransparent to standard routines (such as copying a query plan) ... and Tom\nisn't sure about a query cache in shared memory ... etc. Too many questions,\ntoo few answers :-)\n\n\n\t\t\t\tKarel\n> \n> PS: Sorry for my english :(\n\n\n Do you ever read any of my mail? :-)\n\n\n\t\t\tKarel\n\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 31 May 2001 22:23:26 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Cache for query plans"
}
] |
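Jan's outline, build a hash key from the query with constants stripped, count how often the key occurs, and cache the plan only after repeated use, can be sketched in miniature. The toy below keys on a crudely normalized query string rather than on a real parse tree, and every name in it is invented:

```python
import re

class PlanCache:
    """Toy plan cache: a query's plan is stored only after its
    normalized form has been seen `threshold` times."""

    def __init__(self, threshold=2):
        self.counts = {}      # key -> times the key was seen
        self.plans = {}       # key -> cached "plan"
        self.threshold = threshold

    @staticmethod
    def key(query):
        # Stand-in for hashing the parse tree: strip literal constants.
        q = re.sub(r"'[^']*'", "?", query)   # string literals
        q = re.sub(r"\b\d+\b", "?", q)       # numeric literals
        return " ".join(q.lower().split())

    def lookup(self, query, planner):
        k = self.key(query)
        if k in self.plans:
            return self.plans[k], True       # cache hit
        self.counts[k] = self.counts.get(k, 0) + 1
        plan = planner(query)
        if self.counts[k] >= self.threshold:
            self.plans[k] = plan             # cache after repeated use
        return plan, False

cache = PlanCache()
planner = lambda q: ("PLAN", PlanCache.key(q))
cache.lookup("SELECT * FROM t WHERE id = 1", planner)           # miss
_, hit = cache.lookup("SELECT * FROM t WHERE id = 2", planner)  # miss; now cached
_, hit2 = cache.lookup("SELECT * FROM t WHERE id = 3", planner) # hit
```

A real implementation would also need the ageing Jan mentions, plus the shared-memory management issues Karel raises above (copying plans into and out of shared storage).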
[
{
"msg_contents": "I'm trying to centralize data in a unique db, like it:\n\n\n\n\t---\t\t---\t\t---\n |DB |\t |DB |\t |DB |\n\t---\t\t---\t\t---\n \\ | /\n \\ | /\n\t \\\t ----------- /\n\t \t|\t |/\n | \"Big\" DB |\n | with |\n | centralized|\n | data |\n ------------\n\nInformation:\n\n\t1) The \"small\" DBs sync data with the \"big\" DB.\n\t2) We have 4 linux box above.\n\t3) The big DB has a copy of all data. Just the \"small\"\n\t DBs inserts data in the \"big\" one.\n\n\nThe question: Does Postgres do it?\n\n[]'s\n\n\nPaulo Angelo\n\n",
"msg_date": "Wed, 30 May 2001 18:07:26 -0300 (EST)",
"msg_from": "Paulo Angelo <pa@bsb.conectiva.com.br>",
"msg_from_op": true,
"msg_subject": "Sync Data"
},
{
"msg_contents": "Hi Paulo,\n\nIt is unclear exactly what you want PostgreSQL to do but it seems to me\nthat if you want to Sync data into a central database at some point or\nother during the day, you should design your application to take care of\nit, not rely on PostgreSQL to do anything special.\n\nThat is, simply move all the data from the smaller databases to the\ncentral database with a scheduled processing system. \n\nGavin\n\nOn Wed, 30 May 2001, Paulo Angelo wrote:\n\n> I'm trying to centralize data in a unique db, like it:\n> \n> \n> \n> \t---\t\t---\t\t---\n> |DB |\t |DB |\t |DB |\n> \t---\t\t---\t\t---\n> \\ | /\n> \\ | /\n> \t \\\t ----------- /\n> \t \t|\t |/\n> | \"Big\" DB |\n> | with |\n> | centralized|\n> | data |\n> ------------\n> \n> Information:\n> \n> \t1) The \"small\" DBs sync data with the \"big\" DB.\n> \t2) We have 4 linux box above.\n> \t3) The big DB has a copy of all data. Just the \"small\"\n> \t DBs inserts data in the \"big\" one.\n> \n> \n> The question: Does Postgres do it?\n> \n> []'s\n> \n> \n> Paulo Angelo\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n",
"msg_date": "Fri, 1 Jun 2001 17:54:57 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Sync Data"
}
] |
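Gavin's suggestion, a scheduled job that moves rows from each small database into the central one, amounts to keeping a per-source watermark so each run transfers only rows added since the last run. A minimal sketch with in-memory lists standing in for tables; every name here is invented:

```python
# Each "small" database exposes rows as (serial_id, payload); the sync
# job remembers the highest id already copied from each source.

def sync(central, watermarks, sources):
    for name, rows in sources.items():
        last = watermarks.get(name, 0)
        new = [r for r in rows if r[0] > last]
        central.extend((name, rid, payload) for rid, payload in new)
        if new:
            watermarks[name] = max(r[0] for r in new)

central, marks = [], {}
dbs = {"db1": [(1, "x"), (2, "y")], "db2": [(1, "p")]}
sync(central, marks, dbs)      # first run copies everything
dbs["db1"].append((3, "z"))
sync(central, marks, dbs)      # second run copies only the new row
```

Against PostgreSQL this would be a cron job issuing INSERT ... SELECT (or COPY) into the central database; the key point, as Gavin says, is that the application drives the transfer, not the server.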
[
{
"msg_contents": "I was remembering tonight some of the strange fixes we made in the early\ndays of PostgreSQL. I particularly remember the LIKE optimization I did\nin gram.y to allow queries that are anchored to the beginning of a\nstring to use an index.\n\nIt was a crazy patch, and everyone who saw it grumbled. The problem was\nthat no one could think of a better solution. Finally the proper fix\nwas made to the optimizer and the code removed. I was glad to see it\ngo, if only so I didn't have to hear complaints about it. :-)\n\nThe good thing about the patch is that it gave us a feature for two\nyears while we gained experience to make the right fix, and it was easy\nto remove once the time came.\n\nI am now wondering if we are agonizing too much about changes to\nPostgreSQL. We are much more successful, and much more reliable than we\nused to be, but I wonder whether we are limiting improvements because\nthey are not the _right_ fix. \n\nI am not advocating that we start throwing all sorts of stuff into the\nbackend. This certainly would leave us with a big mess. I am just\nnoticing that we are hitting road blocks where we can't find the perfect\nsolution, so we do nothing, even when users are complaining they need a\ncertain feature. I think we can all remember recent discussions or TODO\nitems where this happened. \n\nSeems like it would be a good idea to sometimes add a feature, even an\nimperfect one, until we can make a better fix, because sometimes, the\nperfect fix is years away. I never expected my gram.y hack to last as\nlong as it did.\n\nLet me make a suggestion. Next time we have a partial fix for\nsomething, but don't want to add it, let's add the item on the TODO list\nunder the heading \"Imperfect Fixes,\" where we list items we have fixed\nbut need more work. 
This way, we will be able to give users a feature,\nbut we will not forget to revisit the item and make a _perfect_ fix\nlater.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 30 May 2001 23:42:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Imperfect solutions"
},
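The gram.y LIKE hack Bruce describes, and the optimizer fix that replaced it, rest on the same idea: a pattern anchored at the start has a fixed prefix from which index range bounds can be derived. A hypothetical sketch of just that idea; the simple last-character increment assumes a C-locale-style byte ordering and a prefix whose last character is not at the top of the range:

```python
def like_prefix_bounds(pattern):
    """For a LIKE pattern anchored at the start ('abc%'), return
    (lo, hi) such that every match s satisfies lo <= s < hi, which an
    index range scan can use. Returns None for unanchored patterns."""
    prefix = ""
    for ch in pattern:
        if ch in "%_":       # first wildcard ends the fixed prefix
            break
        prefix += ch
    if not prefix:
        return None          # '%foo' cannot use the index this way
    # Upper bound: increment the last character of the prefix.
    hi = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, hi

bounds = like_prefix_bounds("abc%")
```

So `name LIKE 'abc%'` becomes `name >= 'abc' AND name < 'abd'` plus the original LIKE as a recheck, and those range clauses are what let the planner use an index.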
{
"msg_contents": "> Let me make a suggestion. Next time we have a partial fix for\n> something, but don't want to add it, let's add the item on the TODO list\n> under the heading \"Imperfect Fixes,\" where we list items we have fixed\n> but need more work. This way, we will be able to give users a feature,\n> but we will not forget to revisit the item and make a _perfect_ fix\n> later.\n\nThe first thing you should add to that list is 'Inheritance of constraints'.\nAt the moment myself and Stephan are beavering away making it so that\nconstraints are recursively added and removed - however if ever we make a\npg_constraints catalog, and a one-to-many constraint->table mapping catalog,\nall our code will need to be (minimally) changed.\n\nAlso, what about foreign keys? At the moment it is incredibly complicated\nto determine all the foreign keys on a table, what column(s) they're defined\nover, what column(s) they reference and what their behaviour is. And just\ntry writing code (like I am) that tries to drop them by name, let alone list\nthem!!!\n\nLastly - pg_dump can happily dump foreign keys as raw triggers, but the\nperfect solution (methinks) would be to dump them as alter table add\nconstraints. Makes it easier to move between different database products.\n\nMy 2c.\n\nChris\n",
"msg_date": "Thu, 31 May 2001 12:12:38 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Imperfect solutions"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Also, what about foreign keys? At the moment it is incredibly complicated\n> to determine all the foreign keys on a table, what column(s) they're defined\n> over, what column(s) they reference and what their behaviour is. And just\n> try writing code (like I am) that tries to drop them by name, let alone list\n> them!!!\n\nIndeed. You're looking at the aftermath of an \"imperfect fix\" to add\nforeign keys. With all due respect to Jan and Stephan, who did a great\njob adding the feature at all, there are still a lot of things that need\nto be fixed in that area. The trouble with imperfect fixes is that they\ntend to get institutionalized if they're left in the code for any length\nof time --- people write more code that depends on the hack, or works\naround some of its shortcomings, or whatever, and so it gets harder and\nharder to rip out the hack and replace it with something better.\nEspecially if the original author moves on to other challenges instead\nof continuing to work on improving his first try. Other people are\nlikely to have less understanding of the code's shortcomings.\n\nI don't object to imperfect fixes when they buy us a useful amount of\nfunctionality in a critical area (as indeed the current foreign-key code\ndoes). But I have more of a problem with doing things that way for\nmarginal feature additions. I think that in the long run the downside\noutweighs the upside in cases like that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 01:09:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Imperfect solutions "
},
{
"msg_contents": "> I don't object to imperfect fixes when they buy us a useful amount of\n> functionality in a critical area (as indeed the current foreign-key code\n> does). But I have more of a problem with doing things that way for\n> marginal feature additions. I think that in the long run the downside\n> outweighs the upside in cases like that.\n\nWhat got me thinking about this is that I don't think my gram.y fix\nwould be accepted given the current review process, and that is bad\nbecause we would have to live with no LIKE optimization for 1-2 years\nuntil we learned how to do it right.\n\nI think there are a few rules we can use to decide how to deal with\nimperfect solutions:\n\n\tAre the fixes easy to add _and_ easy to rip out later?\n\tDo the fixes affect all queries, or only queries that use the feature?\n\tDo the fixes adversely affect any older queries?\n\tDo the fixes make the system more unstable?\n\nForeign key is a good example of a fix that is hard to rip out. My\ngram.y fix is an example of a fix that affects all queries. Fixes that\ncause older queries or dumps to fail affect all users. I don't think we\nhave accepted fixes that adversely affect older queries or make the\nsystem unstable because they are just too much trouble.\n\nLet's look at the %TYPE fix as an example. It is easy to add and easy\nto rip out. It doesn't affect all queries, just queries that use the\nfeature. It doesn't affect older queries. I think the only argument\nagainst it is that it makes the system appear more unstable because\npeople may think that %TYPE is tracking table changes.\n\nI am slightly concerned we are waiting for perfect solutions and\noverlooking useful solutions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 31 May 2001 09:52:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Imperfect solutions"
},
{
"msg_contents": "> > Let me make a suggestion. Next time we have a partial fix for\n> > something, but don't want to add it, let's add the item on the TODO list\n> > under the heading \"Imperfect Fixes,\" where we list items we have fixed\n> > but need more work. This way, we will be able to give users a feature,\n> > but we will not forget to revisit the item and make a _perfect_ fix\n> > later.\n> \n> The first thing you should add to that list is 'Inheritance of constraints'.\n> At the moment myself and Stephan are beavering away making it so that\n> constraints are recusively added and removed - however if ever we make a\n> pg_constraints catalog, and a one-to-many constraint->table mapping catalog,\n> all our code will need to be (minimally) changed.\n\n\n> \n> Also, what about foreign keys? At the moment it is incredibly complicated\n> to determine all the foreign keys on a table, what column(s) they're defined\n> over, what column(s) they reference and what their behaviour is. And just\n> try writing code (like I am) that tries to drop them by name, let alone list\n> them!!!\n> \n> Lastly - pg_dump can happily dump foreign keys as raw triggers, but the\n> perfect solution (methinks) would be to dump them as alter table add\n> constraints. Makes it easier to move between different database products.\n\nI already have on the TODO list:\n\n\t* Make constraints clearer in dump file\n\nIn fact I have a whole referential integrity section of the TODO list. \nPlease let me know what needs to be added. Let me add:\n\n\t* Make foreign keys easier to identify\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 31 May 2001 10:03:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Imperfect solutions"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What got me thinking about this is that I don't think my gram.y fix\n> would be accepted given the current review process,\n\nNot to put too fine a point on it: the project has advanced a long way\nsince you did that code. Our standards *should* be higher than they\nwere then.\n\n> and that is bad\n> because we would have to live with no LIKE optimization for 1-2 years\n> until we learned how to do it right.\n\nWe still haven't learned how to do it right, actually. I think the\nhistory of the LIKE indexing problem is a perfect example of why fixes\nthat work for some people but not others don't survive long. We put out\nseveral attempts at making it work reliably in non-ASCII locales, but\nnone of them have withstood the test of actual usage.\n\n> I think there are a few rules we can use to decide how to deal with\n> imperfect solutions:\n\nYou forgot\n\n* will the fix institutionalize user-visible behavior that will in the\n long run be considered the wrong thing?\n\n* will the fix contort new code that is written in the same vicinity,\n thereby making it harder and harder to replace as time goes on?\n\nThe first of these is the core of my concern about %TYPE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 10:07:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Imperfect solutions "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What got me thinking about this is that I don't think my gram.y fix\n> > would be accepted given the current review process,\n> \n> Not to put too fine a point on it: the project has advanced a long way\n> since you did that code. Our standards *should* be higher than they\n> were then.\n\nYes, agreed. But at the time that was the best we could do. My\nquestion is whether we should be less willing to accept partial fixes\nnow than in the past. Probably yes, but have we gone too far?\n\nLook at some of the imperfect solutions we have rejected recently, all\nfrom the TODO list:\n\n* Improve control over user privileges, including table creation and\n lock use [privileges] (Karel, others)\n* Remove unused files during database vacuum or postmaster startup\n* Add table name mapping for numeric file names\n* Add ALTER TABLE DROP COLUMN feature [drop]\n* Cache most recent query plan(s) (Karel) [prepare]\n\nNow that I look at it, the list is pretty short, so we may be fine.\n\n> > and that is bad\n> > because we would have to live with no LIKE optimization for 1-2 years\n> > until we learned how to do it right.\n> \n> We still haven't learned how to do it right, actually. I think the\n> history of the LIKE indexing problem is a perfect example of why fixes\n> that work for some people but not others don't survive long. We put out\n> several attempts at making it work reliably in non-ASCII locales, but\n> none of them have withstood the test of actual usage.\n\nAgreed. But what options do we have? If we do nothing, there is no\noptimization at all.\n\n> > I think there are a few rules we can use to decide how to deal with\n> > imperfect solutions:\n> \n> You forgot\n> \n> * will the fix institutionalize user-visible behavior that will in the\n> long run be considered the wrong thing?\n\nYes, good point. 
User-visible changes are a big deal and have to be\nstudied carefully.\n\n> * will the fix contort new code that is written in the same vicinity,\n> thereby making it harder and harder to replace as time goes on?\n\nAgain, a good point, related to rip-out-ability.\n\n> The first of these is the core of my concern about %TYPE.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 31 May 2001 10:35:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Imperfect solutions"
},
{
"msg_contents": "On Thu, 31 May 2001, Tom Lane wrote:\n\n> Indeed. You're looking at the aftermath of an \"imperfect fix\" to add\n> foreign keys. With all due respect to Jan and Stephan, who did a great\n> job adding the feature at all, there are still a lot of things that need\n> to be fixed in that area. The trouble with imperfect fixes is that they\n\nUgh yes. Actually all of the constraints seem to have this problem to\nsome degree. Unique doesn't quite work right for updates where rows \nmay \"temporarily\" be of the same value, check constraints using user\nfunctions can be violated if those functions do sql statements and column\nrenames cause dump/restore to fail. Fk has at least the following \n(in no order and probably incomplete due to just waking up):\n\nTemp tables can shadow pk/fk tables\n - If we have schemas and temp tables are in their own, we can probably\n fix this just with fully qualifying.\n - Otherwise, we'd probably want to refer to the table by oid, but that\n would require having some way to do that in SPI or to replace the\n SPI calls. (Getting the name from the oid isn't sufficient,\n obviously)\n\nInheritance\n - Plenty of discussion about this already\n - An additional wrinkle comes in if we allow/are going to allow users\n to rename base table columns in inherited tables.\n\nAlter Table Rename\n - Either we need to store oids or follow name changes. I'd actually\n prefer the latter if possible, but that requires a dependency system.\n (Especially if we were to go with only storing the text of check\n constraints.)\n\nGeneral\n - For update locks are too strong? Do we need a self conflicting lock\n on the pk table rows? Is there some generally better way to handle\n this? How does this tie into the problem Jan noted before?\n - We probably need a way to check the entire table at once rather than\n per row checks. 
This would make alter table more reasonable for\n dump/restore (right now on large tables it would try each row's\n check separately - ugh)\n - Deferred constraints are broken in a few cases. Update/insert trigger\n on fk needs to make sure the row is still there at check time, no \n action trigger needs to make sure there hasn't been another row with\n the key values inserted. Other actions are questionable, has anyone\n actually figured out what the correct behavior is? I think that\n running actual actions immediately may be the correct thing, but in\n any case, they'd probably need checks like the no action trigger\n (what happens if the delete/insert is done within one statement\n due to triggers or whatever)\n - Match partial - Complicated. To do this completely means almost\n a separate implementation since stuff like the above checks wouldn't\n work in this case and means that we have to recognize things where\n the user has updated two pk rows referenced by a single fk row to\n distinct key values, since that's an error condition.\n\nStorage/Reporting\n - We really need something that stores the fk information better than\n what we have (we may want to see if we can generalize more constraints\n into the system as well, but we'd have to see)\n - We'll want to make dump/restores show the constraint in a better\n fashion. This may need the above, and we'd still need to have\n backward compatibility (one of the reasons switching to storing\n oids would be interesting)\n \n\n",
"msg_date": "Thu, 31 May 2001 09:03:54 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Imperfect solutions "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Thursday 31 May 2001 10:07, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We still haven't learned how to do it right, actually. I think the\n> history of the LIKE indexing problem is a perfect example of why fixes\n> that work for some people but not others don't survive long. We put out\n> several attempts at making it work reliably in non-ASCII locales, but\n> none of them have withstood the test of actual usage.\n\nWhile this subject is fresh, let me ask the obvious questions:\n1.)\tWhat locales do we know are problematic?\n2.)\tWhat will happen to user queries and data in those locales?\n3.)\tWhat has been fixed for this (last I remember there was an index \ncorruption issue, and multiple collation problems)? The 7.1 HISTORY has the \nblanket statement 'Many multi-byte/Unicode/locale fixes (Tatsuo and others)' \ninstead of a list of the actual bugs fixed.\n\nLooking through the archives Ifind some details, such as the function \nlocale_is_like_safe() , and I see other details -- but a concise picture of \nwhat one can expect operating in a non-locale_is_like_safe() (which \ncurrently includes ONLY the C and POSIX locales) locale would be, IMHO, \nuseful information that people wouldn't have to dredge around for -- and \nshould probably go into the current locale docs under the Problems heading.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7FnLa5kGGI8vV9eERAhaaAKDQjz0l+3JWnEv4Gc6HDvKFWjIXnQCdE3V7\nXdWmIpkzQ8syjU7KrkzEwcM=\n=mZ7Q\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 31 May 2001 12:35:35 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Non-ASCII locales (was:Re: Imperfect solutions)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Looking through the archives Ifind some details, such as the function \n> locale_is_like_safe() , and I see other details -- but a concise picture of \n> what one can expect operating in a non-locale_is_like_safe() (which \n> currently includes ONLY the C and POSIX locales) locale would be,\n\nAs of 7.1, LIKE will always work correctly in non-C locales, because it\nwill never try to use an index. Concise enough?\n\nWhat we need, and don't have, is reliable information about which\nlocales the pre-7.1 indexing hack was actually safe in. A complicating\nfactor is that non-C locale definitions are probably platform-specific.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 12:50:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Non-ASCII locales (was:Re: Imperfect solutions) "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Thursday 31 May 2001 12:50, Tom Lane wrote:\n> As of 7.1, LIKE will always work correctly in non-C locales, because it\n> will never try to use an index. Concise enough?\n\nYes, thank you.\n- --\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.4 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE7Fnzq5kGGI8vV9eERAgPJAJ9eHNedUAS4VTHkjwbg3oxt9c8cCACeMmEH\nHQPugYw+AZbD1v6cd2dycN4=\n=zCvc\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 31 May 2001 13:18:32 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Non-ASCII locales (was:Re: Imperfect solutions)"
},
{
"msg_contents": "On Thu, May 31, 2001 at 10:07:36AM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What got me thinking about this is that I don't think my gram.y fix\n> > would be accepted given the current review process,\n> \n> Not to put too fine a point on it: the project has advanced a long way\n> since you did that code. Our standards *should* be higher than they\n> were then.\n> \n> > and that is bad\n> > because we would have to live with no LIKE optimization for 1-2 years\n> > until we learned how to do it right.\n> \n> We still haven't learned how to do it right, actually. I think the\n> history of the LIKE indexing problem is a perfect example of why fixes\n> that work for some people but not others don't survive long. We put out\n> several attempts at making it work reliably in non-ASCII locales, but\n> none of them have withstood the test of actual usage.\n> \n> > I think there are a few rules we can use to decide how to deal with\n> > imperfect solutions:\n> \n> You forgot\n> \n> * will the fix institutionalize user-visible behavior that will in the\n> long run be considered the wrong thing?\n> \n> * will the fix contort new code that is written in the same vicinity,\n> thereby making it harder and harder to replace as time goes on?\n> \n> The first of these is the core of my concern about %TYPE.\n\nThis list points up a problem that needs a better solution than a \nlist: you have to put in questionable features now to get the usage \nexperience you need to do it right later. The set of prospective\nfeatures that meet that description does not resemble the set that\nwould pass all the criteria in the list.\n\nThis is really a familiar problem, with a familiar solution. \nWhen a feature is added that is \"wrong\", make sure it's \"marked\" \nsomehow -- at worst, in the documentation, but ideally with a \nNOTICE or something when it's used -- as experimental. 
If anybody \ncomplains later that when you ripped it out and redid it correctly, \nyou broke his code, you can just laugh, and add, if you're feeling \ncharitable, \"experimental features are not to be depended on\".\n\n--\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 31 May 2001 11:38:16 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Imperfect solutions"
},
{
"msg_contents": "> > I think there are a few rules we can use to decide how to deal with\n> > imperfect solutions:\n> \n> You forgot\n> \n> * will the fix institutionalize user-visible behavior that will in the\n> long run be considered the wrong thing?\n> \n> * will the fix contort new code that is written in the same vicinity,\n> thereby making it harder and harder to replace as time goes on?\n> \n> The first of these is the core of my concern about %TYPE.\n\nI was thinking about this. Seems if we want to emulate Oracle, we have\nto make %TYPE visible the way it is implemented in the patch. We can\nmake it track table changes or not, but it doesn't seem we have much\nlatitude in how we make it visible to users.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Jun 2001 00:45:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Imperfect solutions"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > I think there are a few rules we can use to decide how to deal with\n> > > imperfect solutions:\n> >\n> > You forgot\n> >\n> > * will the fix institutionalize user-visible behavior that will in the\n> > long run be considered the wrong thing?\n> >\n> > * will the fix contort new code that is written in the same vicinity,\n> > thereby making it harder and harder to replace as time goes on?\n> >\n> > The first of these is the core of my concern about %TYPE.\n> \n> I was thinking about this. Seems if we want to emulate Oracle, we have\n> to make %TYPE visible the way it is implemented in the patch. We can\n> make it track table changes or not, but it doesn't seem we have much\n> latitude in how we make it visible to users.\n\nI think Tom's argument was that just making it visisble will tie us up\nto \nalso keep the semantics, which will be subtly different in PostgreSQL\nand \nOracle and which can't be exactly emulated without emulating\n_everything_\nin Oracle and thereby throwing away unique strengths of PostgreSQL.\n\nFortunately I've not heard very much support for making empty string and \nNULL to be the same ;)\n\n-----------------\nHannu\n",
"msg_date": "Fri, 01 Jun 2001 08:28:36 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Imperfect solutions"
},
{
"msg_contents": "Hi Bruce,\n\nI was just looking at the TODO list and noticed my name in it - cool! (You\nspelled it wrong - but hey :) )\n\nJust thought you might like to add\n\n* ALTER TABLE ADD PRIMARY KEY\n* ALTER TABLE ADD UNIQUE\n\nI thought they were there before, but they're not there any more. I am\ncurrently about 90% finished on a patch that will add the functionality\nlisted above.\n\nChris\n\n",
"msg_date": "Tue, 5 Jun 2001 16:16:06 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Imperfect solutions"
},
{
"msg_contents": "On Tue, Jun 05, 2001 at 04:16:06PM +0800, Christopher Kings-Lynne wrote:\n> Hi Bruce,\n> \n> I was just looking at the TODO list and noticed my name in it - cool! (You\n> spelled it wrong - but hey :) )\n> \n> Just thought you might like to add\n> \n> * ALTER TABLE ADD PRIMARY KEY\n> * ALTER TABLE ADD UNIQUE\n\n And what\n\n\t ALTER TABLE DROP PRIMARY KEY\n\t ALTER TABLE DROP UNIQUE \n\n BTW, it's a little cosmetic feature if we have CREATE/DROP INDEX :-)\n\n\t\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 5 Jun 2001 10:51:57 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Imperfect solutions"
},
{
"msg_contents": "> > Just thought you might like to add\n> >\n> > * ALTER TABLE ADD PRIMARY KEY\n> > * ALTER TABLE ADD UNIQUE\n>\n> And what\n>\n> \t ALTER TABLE DROP PRIMARY KEY\n> \t ALTER TABLE DROP UNIQUE\n>\n> BTW, it's a little cosmetic feature if we have CREATE/DROP INDEX :-)\n\nThose two points are already mentioned - I have another 90% patch ready to\ngo that will add that functionality as well...\n\nChris\n\n",
"msg_date": "Tue, 5 Jun 2001 16:55:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Imperfect solutions"
},
{
"msg_contents": "On Tue, 5 Jun 2001, Christopher Kings-Lynne wrote:\n\n> > > Just thought you might like to add\n> > >\n> > > * ALTER TABLE ADD PRIMARY KEY\n> > > * ALTER TABLE ADD UNIQUE\n> >\n> > And what\n> >\n> > \t ALTER TABLE DROP PRIMARY KEY\n> > \t ALTER TABLE DROP UNIQUE\n> >\n> > BTW, it's a little cosmetic feature if we have CREATE/DROP INDEX :-)\n> \n> Those two points are already mentioned - I have another 90% patch ready to\n> go that will add that functionality as well...\n\nAs a question, are you doing anything to handle dropping referenced unique\nconstraints or are we just waiting on that until a referencing system\nis built?\n\n\n",
"msg_date": "Tue, 5 Jun 2001 08:38:08 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "RE: Imperfect solutions"
},
{
"msg_contents": "> > Those two points are already mentioned - I have another 90%\n> patch ready to\n> > go that will add that functionality as well...\n>\n> As a question, are you doing anything to handle dropping referenced unique\n> constraints or are we just waiting on that until a referencing system\n> is built?\n\nBy that do you mean: what happens when you drop a primary key that is\nreferenced by a foreign key?\n\nMy answer: Forgot about that ;) I'll see what I can do but anytime\ninvestigation of foreign keys is required it's a real pain. Foreign keys\nare kinda next on my list for work, so I might look at it then if it's too\ndifficult right now. (I've got a query that can find all foreign keys on a\nrelation, and what they relate to, that I'm going to add to psql).\n\nMy other questions then are:\n\nDoes anything else (other than fk's) ever reference a primary key?\nWhat can reference a unique key?\n\nChris\n\n",
"msg_date": "Wed, 6 Jun 2001 09:44:24 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Imperfect solutions"
},
{
"msg_contents": "\nOn Wed, 6 Jun 2001, Christopher Kings-Lynne wrote:\n\n> > > Those two points are already mentioned - I have another 90%\n> > patch ready to\n> > > go that will add that functionality as well...\n> >\n> > As a question, are you doing anything to handle dropping referenced unique\n> > constraints or are we just waiting on that until a referencing system\n> > is built?\n> \n> By that do you mean: what happens when you drop a primary key that is\n> referenced by a foreign key?\n> \n> My answer: Forgot about that ;) I'll see what I can do but anytime\n> investigation of foreign keys is required it's a real pain. Foreign keys\n> are kinda next on my list for work, so I might look at it then if it's too\n> difficult right now. (I've got a query that can find all foreign keys on a\n> relation, and what they relate to, that I'm going to add to psql).\n\nI wouldn't worry all that much about it since you could still break it\nwith drop index, but I wanted to know if you'd done anything with it\nand if so how general it was.\n\nHow'd you do the splitting of the arguments to get the columns referenced?\nThat was the biggest problem I was having, trying to get the bytea split\nup. (Well, without writing a function to do it for me)\n\n> My other questions then are:\n> \n> Does anything else (other than fk's) ever reference a primary key?\n> What can reference a unique key?\n\nForeign keys are the only one I know of, but they can reference either.\n\n",
"msg_date": "Tue, 5 Jun 2001 18:58:46 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "RE: Imperfect solutions"
},
{
"msg_contents": "> (I've got a query that can find all\n> foreign keys on a\n> > relation, and what they relate to, that I'm going to add to psql).\n>\n> How'd you do the splitting of the arguments to get the columns referenced?\n> That was the biggest problem I was having, trying to get the bytea split\n> up. (Well, without writing a function to do it for me)\n\nMy original functionality for showing foreign keys was implemented in PHP,\nso all I had to do was go:\n\n$tgargs = explode('\\000', $row['tgargs']);\n\nIt's going to be harder to do that in C I guess...\n\nChris\n\n",
"msg_date": "Wed, 6 Jun 2001 10:09:55 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Imperfect solutions"
},
{
"msg_contents": "> Hi Bruce,\n> \n> I was just looking at the TODO list and noticed my name in it - cool! (You\n> spelled it wrong - but hey :) )\n> \n> Just thought you might like to add\n> \n> * ALTER TABLE ADD PRIMARY KEY\n> * ALTER TABLE ADD UNIQUE\n> \n> I thought they were there before, but they're not there any more. I am\n> currently about 90% finished on a patch that will add the functionality\n> listed above.\n\nAdded, name fixed. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 6 Jun 2001 00:54:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Imperfect solutions"
}
] |
[
{
"msg_contents": "�� \n I can realize this function in the SYBase,but How can i do it in the PostgreSQL?\n \n/****SQL***/\nif not exists(select id from test) insert into test(id) values (280);\n/*********/ \n\n_____________________________________________\n�����Ʒ�����У��� http://shopping.263.net/category21.htm\n��ƷС�ҵ�ӭ������ http://shopping.263.net/category23.htm\n",
"msg_date": "Thu, 31 May 2001 13:42:41 +0800 (CST)",
"msg_from": "\"Eric\" <e-lz@263.net>",
"msg_from_op": true,
"msg_subject": "SQL( \"if ...exists...),how to do it in the PostgreSQL?"
},
{
"msg_contents": "Eric writes:\n\n> I can realize this function in the SYBase,but How can i do it in the PostgreSQL?\n>\n> /****SQL***/\n> if not exists(select id from test) insert into test(id) values (280);\n> /*********/\n\nWrite a function in PL/pgSQL.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 6 Jun 2001 16:48:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL( \"if ...exists...),how to do it in the PostgreSQL?"
},
{
"msg_contents": ">> if not exists(select id from test) insert into test(id) values (280);\n\n> Write a function in PL/pgSQL.\n\nThat particular case could be handled like so:\n\ninsert into test(id) select 280 where not exists(select id from test);\n\nThe select produces either zero or one row depending on whether its\nWHERE is true. Voila, problem solved. It's even nearly standard ;-)\nalthough in something like Oracle you'd have to add \"from dual\", I\nthink.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Jun 2001 11:58:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL( \"if ...exists...),how to do it in the PostgreSQL? "
}
] |
[
{
"msg_contents": "Starting pg_dump, this error occured (there is no output dump,\nunfortunately). Getting closer, I got this:\n\ntir=# SELECT pg_get_viewdef(c.relname) AS definition FROM pg_class c\noffset 441 limit 1;\nERROR: cache lookup for proc 4303134 failed\ntir=# SELECT c.relname AS definition FROM pg_class c offset 441 limit 1;\n definition\n------------\n sooe\n(1 row)\n\ntir=# SELECT pg_get_viewdef('sooe');\n pg_get_viewdef\n----------------\n Not a view\n(1 row)\n\nYesterday I created some triggers and functions. I got this since that\ntime. The thing here seems to be a strange internal error, is it? Sorry if\nthis problem is an already reported one.\n\nTIA, Zoltan\n\n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n",
"msg_date": "Thu, 31 May 2001 14:47:26 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "ERROR: cache lookup for proc 43030134 failed"
},
{
"msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> Starting pg_dump, this error occured (there is no output dump,\n> unfortunately). Getting closer, I got this:\n\n> tir=# SELECT pg_get_viewdef(c.relname) AS definition FROM pg_class c\n> offset 441 limit 1;\n> ERROR: cache lookup for proc 4303134 failed\n\nI think you've got a view or rule that refers to a function you dropped.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 09:43:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: cache lookup for proc 43030134 failed "
},
{
"msg_contents": "On Thu, 31 May 2001, Tom Lane wrote:\n\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > Starting pg_dump, this error occured (there is no output dump,\n> > unfortunately). Getting closer, I got this:\n> \n> > tir=# SELECT pg_get_viewdef(c.relname) AS definition FROM pg_class c\n> > offset 441 limit 1;\n> > ERROR: cache lookup for proc 4303134 failed\n> \n> I think you've got a view or rule that refers to a function you dropped.\n\nHow can I find out that which view or rule is referring?\n\nTIA, Zoltan\n\n",
"msg_date": "Thu, 31 May 2001 16:39:58 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: cache lookup for proc 43030134 failed "
},
{
"msg_contents": "On Thu, 31 May 2001, Tom Lane wrote:\n\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > Starting pg_dump, this error occured (there is no output dump,\n> > unfortunately). Getting closer, I got this:\n> \n> > tir=# SELECT pg_get_viewdef(c.relname) AS definition FROM pg_class c\n> > offset 441 limit 1;\n> > ERROR: cache lookup for proc 4303134 failed\n> \n> I think you've got a view or rule that refers to a function you dropped.\n\nIt seems that there is a problem with the views. The SELECT you can see\nabove is a part of the definition of pg_views. But consider this:\n\ntir=# SELECT c.relname AS viewname, pg_get_userbyid(c.relowner) AS\nviewowner, pg_get_viewdef(c.relname) AS definition FROM pg_class c WHERE\n(c.relkind = 'v'::\"char\") limit 21 offset 1;\nERROR: cache lookup for proc 4303134 failed\n\nIt means that the 21st line of the result is problematic, because writing\n20 instead of 21 I got no problem. Consider this:\n\ntir=# SELECT c.relname AS viewname, pg_get_userbyid(c.relowner) AS\nviewowner FROM pg_class c WHERE (c.relkind = 'v'::\"char\") offset 21 limit\n1;\n viewname | viewowner\n-------------+-----------\n felhasznalo | postgres\n(1 row)\n\nThis is the problematic view. I selected only its name, not the\ndefinition. But selecting this:\n\ntir=# SELECT c.relname AS viewname, pg_get_userbyid(c.relowner) AS\nviewowner, pg_get_viewdef(c.relname) AS definition FROM pg_class c WHERE\n(c.relkind = 'v'::\"char\") and c.relname = 'felhasznalo';\n viewname | viewowner |\n definition\n-------------+-----------+------------------------------------------------------------------------------------------------------------------------------------\n felhasznalo | postgres | SELECT szemely.az, szemely.nev,\nszemely.teljes_nev FROM szemely WHERE ((1 <= szemely.felhasznalo) AND\n(szemely.felhasznalo <= 2));\n(1 row)\n\nI get no problem, it gives the definition. Why?\n\nTIA, Zoltan\n\n",
"msg_date": "Thu, 31 May 2001 16:55:06 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: cache lookup for proc 43030134 failed "
},
{
"msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> It means that the 21st line of the result is problematic, because writing\n> 20 instead of 21 I got no problem.\n\nI think not. The current implementation of LIMIT fetches one more row\nthan is really needed, IIRC.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 12:29:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: cache lookup for proc 43030134 failed "
},
{
"msg_contents": "On Thu, 31 May 2001, Tom Lane wrote:\n\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > It means that the 21st line of the result is problematic, because writing\n> > 20 instead of 21 I got no problem.\n> \n> I think not. The current implementation of LIMIT fetches one more row\n> than is really needed, IIRC.\n\nTom, the real problem is that I get _different_ output for \n\ntir=# SELECT c.relname AS viewname, pg_get_userbyid(c.relowner) AS\nviewowner, pg_get_viewdef(c.relname) AS definition FROM pg_class c WHERE\n(c.relkind = 'v'::\"char\") limit 21 offset 1;\nERROR: cache lookup for proc 4303134 failed\n\nand\n\ntir=# SELECT c.relname AS viewname, pg_get_userbyid(c.relowner) AS\nviewowner, pg_get_viewdef(c.relname) AS definition FROM pg_class c WHERE\n(c.relkind = 'v'::\"char\") and c.relname = 'felhasznalo';\n viewname | viewowner |\n definition\n-------------+-----------+------------------------------------------------------------------------------------------------------------------------------------\n felhasznalo | postgres | SELECT szemely.az, szemely.nev,\nszemely.teljes_nev FROM szemely WHERE ((1 <= szemely.felhasznalo) AND\n(szemely.felhasznalo <= 2));\n(1 row)\n\nThe second one also _should_ result in an ERROR. (As you can see, this view\ndoesn't contain any function. I put an index on the table `szemely' but I\ndropped it. There may be some relation between this error and the dropped\nindex...?)\n\nAs a consequence, I cannot pg_dump my database (which is in\nproduction... :-( Please help! Unfortunately I cannot duplicate this\nproblem from scratch, but I may try to do it.\n\nTIA, Zoltan\n\n",
"msg_date": "Fri, 1 Jun 2001 08:56:01 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: cache lookup for proc 43030134 failed "
},
{
"msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n>> I think not. The current implementation of LIMIT fetches one more row\n>> than is really needed, IIRC.\n\n> Tom, the real problem is that I get _different_ output for \n\nThe point is that the problem is probably in the 23rd row of pg_class,\nnot the 22nd.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Jun 2001 09:49:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: cache lookup for proc 43030134 failed "
},
{
"msg_contents": "> > Tom, the real problem is that I get _different_ output for \n> \n> The point is that the problem is probably in the 23rd row of pg_class,\n> not the 22nd.\n\nOK, I see! It works now... :-) Thank you, Tom.\n\nRegards, Zoltan\n\n",
"msg_date": "Sat, 2 Jun 2001 13:59:00 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: cache lookup for proc 43030134 failed "
}
] |
[
{
"msg_contents": "This is my first post/reply on this list, but I have been listening for a \nwhile now (I mostly read the replication ones ;-). I am interested in \nwhat developers/users are looking for in a replication/sync solution in \npostgresql, and contributing to that effort.\n>I'm trying to centralize data in a unique db, like it:\\\n<snip, cool ascii art removed >\n>Information:\n>\t1) The \"small\" DBs sync data with the \"big\" DB.\nWhat is the connection between small and big Dbs? (LAN/WAN) \nIs there a consistent connection between systems?\n>\t3) The big DB has a copy of all data. \n\tJust the \"small\" DBs inserts data in the \"big\" one.\nAre you looking for synchronous (before the insert commits) or \nasynchronous (after the insert commits)?\nIs there any chance for insert/update conflicts (insert/update same \nrecord in same table) on the \"big\" DB?\n\n>The question: Does Postgres do it?\nShort answer, I don't think the current postgresql version will \naccomplish your needs, but there are some postgresql replication projects \nthat might get you close depending on the answers to my questions above. \nYou can find a list of the projects here. \nhttp://www.greatbridge.org/genpage?replication_top\n\nDarren Johnson\nSource Software Institute\n\n\n",
"msg_date": "Thu, 31 May 2001 14:34:27 GMT",
"msg_from": "djohnson@sourcesoft.org",
"msg_from_op": true,
"msg_subject": "Re:Sync Data "
},
{
"msg_contents": "\nHello,\n\nOn Thu, 31 May 2001 djohnson@sourcesoft.org wrote:\n\nQuestions:\n\n> \t1) What is the connection between small and big Dbs? (LAN/WAN)\n\n\n>\t2) Is there a consistent connection between systems?\n\n>\t3) Are you looking for synchronous (before the insert commits)\n>\t or asynchronous (after the insert commits)?\n\n>\t4) Is there any chance for insert/update conflicts\n>\t (insert/update same record in same table)\n>\t on the \"big\" DB?\n\n\n-> problem : I have four \"small\" DBs that would sync data thru\n\t the Internet during the night (128 kb/s), there are no\n\t conflicts and there is nobody inserting data in the\n\t \"small\" DBs while they are syncing.\n\n\tI think that answers all the questions.\n\n\nThanks.\n\n\n>\n> Darren Johnson\n> Source Software Institute\n\n\n\n",
"msg_date": "Thu, 31 May 2001 12:03:27 -0300 (EST)",
"msg_from": "Paulo Angelo <pa@bsb.conectiva.com.br>",
"msg_from_op": false,
"msg_subject": "Re:Sync Data "
}
] |
[
{
"msg_contents": "Hi,\n\nBeing in Australia, it's always been a minor pain building the support\nfor Australian timezone rules by defining USE_AUSTRALIAN_RULES to the\ncompiler. Not to mention the not inconsiderable pain involved in pawing\nthrough the code and documentation trying to work out why the timezones\nwere wrong in the first place.\n\nThis patch makes it a configure option - much easier to use, and much\nmore obvious for the other Aussies who keep wondering why their\ntimezones are all messed up...\n\nObviously 'autoconf' needs to be run after applying the patch.\n\nCheers,\n\nChris,\nOnTheNet\n\n\n--- postgresql-7.1.2/configure.in.orig\tFri May 11 11:34:39 2001\n+++ postgresql-7.1.2/configure.in\tThu May 31 23:54:27 2001\n@@ -150,6 +150,16 @@\n \n \n #\n+# Australian timezone (--enable-australian-tz)\n+#\n+AC_MSG_CHECKING([whether to build with Australian timezone rules])\n+PGAC_ARG_BOOL(enable, australian-tz, no, [ --enable-australian-tz enable Australian timezone rules ],\n+ [AC_DEFINE([USE_AUSTRALIAN_RULES], 1,\n+ [Set to 1 if you want Australian timezone rules (--enable-australian-tz)])])\n+AC_MSG_RESULT([$enable_australian_tz])\n+\n+\n+#\n # Locale (--enable-locale)\n #\n AC_MSG_CHECKING([whether to build with locale support])\n",
"msg_date": "Fri, 1 Jun 2001 00:54:41 +1000",
"msg_from": "Chris Dunlop <chris@onthe.net.au>",
"msg_from_op": true,
"msg_subject": "Australian timezone configure option"
},
{
"msg_contents": "Chris Dunlop <chris@onthe.net.au> writes:\n> This patch makes it a configure option - much easier to use,\n\nSeems like a good idea, but that patch couldn't possibly work as-is.\nWhere's the config.h.in entry? Have you tested it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Jun 2001 10:25:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option "
},
{
"msg_contents": "On Mon, Jun 04, 2001 at 10:25:02AM -0400, Tom Lane wrote:\n> Chris Dunlop <chris@onthe.net.au> writes:\n> > This patch makes it a configure option - much easier to use,\n> \n> Seems like a good idea, but that patch couldn't possibly work as-is.\n> Where's the config.h.in entry? Have you tested it?\n> \n> \t\t\tregards, tom lane\n\n\nOops... overzealeous trimming of the actual patch file I generated\nwhich included all the changes to 'configure' generated by autoconf.\n\nPatch including config.h.in changes below.\n\nTest against the unpatched database, local timezone is \"Australian EST\"\ni.e. GMT+10:\n\n $ psql -c \"select 'Jun 6 02:34:32 EST 2001'::datetime\" template1\n\t ?column? \n ------------------------\n 2001-06-06 17:34:32+10\n (1 row)\n\nNotice the returned time is different to the input time. Against\nthe patched database:\n\n psql -c \"select 'Jun 6 02:34:32 EST 2001'::datetime\" template1\n\t ?column? \n ------------------------\n 2001-06-06 02:34:32+10\n (1 row)\n\nCheers,\n\nChris,\nOnTheNet\n\n\ndiff -ru postgresql-7.1.2.orig/configure.in postgresql-7.1.2/configure.in\n--- postgresql-7.1.2.orig/configure.in\tFri May 11 11:34:39 2001\n+++ postgresql-7.1.2/configure.in\tThu May 31 23:54:27 2001\n@@ -150,6 +150,16 @@\n \n \n #\n+# Australian timezone (--enable-australian-tz)\n+#\n+AC_MSG_CHECKING([whether to build with Australian timezone rules])\n+PGAC_ARG_BOOL(enable, australian-tz, no, [ --enable-australian-tz enable Australian timezone rules ],\n+ [AC_DEFINE([USE_AUSTRALIAN_RULES], 1,\n+ [Set to 1 if you want Australian timezone rules (--enable-australian-tz)])])\n+AC_MSG_RESULT([$enable_australian_tz])\n+\n+\n+#\n # Locale (--enable-locale)\n #\n AC_MSG_CHECKING([whether to build with locale support])\ndiff -ru postgresql-7.1.2.orig/src/include/config.h.in postgresql-7.1.2/src/include/config.h.in\n--- postgresql-7.1.2.orig/src/include/config.h.in\tSun Apr 15 08:55:02 2001\n+++ postgresql-7.1.2/src/include/config.h.in\tThu May 31 
23:58:16 2001\n@@ -33,6 +33,9 @@\n /* A canonical string containing the version number, platform, and C compiler */\n #undef PG_VERSION_STR\n \n+/* Set to 1 if you want Australian timezone rules (--enable-australian-tz) */\n+#undef USE_AUSTRALIAN_RULES\n+\n /* Set to 1 if you want LOCALE support (--enable-locale) */\n #undef USE_LOCALE\n \n",
"msg_date": "Wed, 6 Jun 2001 09:55:30 +1000",
"msg_from": "Chris Dunlop <chris@onthe.net.au>",
"msg_from_op": true,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "Chris Dunlop <chris@onthe.net.au> writes:\n> Oops... overzealeous trimming of the actual patch file I generated\n> which included all the changes to 'configure' generated by autoconf.\n> Patch including config.h.in changes below.\n\nThat looks better.\n\nCould we also trouble you for documentation patches? IIRC, there's a\nlist of all interesting configure options somewhere in the\nadministrator's guide.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 20:03:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option "
},
{
"msg_contents": "Tom Lane wrote:\n> Could we also trouble you for documentation patches? IIRC, there's a\n> list of all interesting configure options somewhere in the\n> administrator's guide.\n\n\nDocumentation ? We don't need no steenking documentation!...\n\n\ndiff -ur postgresql-7.1.2.orig/doc/src/sgml/installation.sgml postgresql-7.1.2/doc/src/sgml/installation.sgml\n--- postgresql-7.1.2.orig/doc/src/sgml/installation.sgml\tTue May 15 01:11:31 2001\n+++ postgresql-7.1.2/doc/src/sgml/installation.sgml\tWed Jun 6 10:35:30 2001\n@@ -462,6 +462,20 @@\n </varlistentry>\n \n <varlistentry>\n+ <term>--enable-australian-tz</term>\n+ <listitem>\n+ <para>\n+ Enables Australian timezone support. This changes the interpretation\n+ of timezones in input date/time strings from US-centric to\n+ Australian-centric. Specifically, 'EST' is changed from GMT-5 (US\n+ Eastern Standard Time) to GMT+10 (Australian Eastern Standard Time)\n+ and 'CST' is changed from GMT-5:30 (US Central Standard Time) to\n+ GMT+10:30 (Australian Central Standard Time).\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry>\n <term>--enable-locale</term>\n <listitem>\n <para>\n",
"msg_date": "Wed, 6 Jun 2001 10:43:16 +1000",
"msg_from": "Chris Dunlop <chris@onthe.net.au>",
"msg_from_op": true,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "\nI have decided to make this configurable via postgresql.conf so you\ndon't need separate binaries / configure switch to run in Australia. I\nwill send a patch over for testing.\n\n\n> Tom Lane wrote:\n> > Could we also trouble you for documentation patches? IIRC, there's a\n> > list of all interesting configure options somewhere in the\n> > administrator's guide.\n> \n> \n> Documentation ? We don't need no steenking documentation!...\n> \n> \n> diff -ur postgresql-7.1.2.orig/doc/src/sgml/installation.sgml postgresql-7.1.2/doc/src/sgml/installation.sgml\n> --- postgresql-7.1.2.orig/doc/src/sgml/installation.sgml\tTue May 15 01:11:31 2001\n> +++ postgresql-7.1.2/doc/src/sgml/installation.sgml\tWed Jun 6 10:35:30 2001\n> @@ -462,6 +462,20 @@\n> </varlistentry>\n> \n> <varlistentry>\n> + <term>--enable-australian-tz</term>\n> + <listitem>\n> + <para>\n> + Enables Australian timezone support. This changes the interpretation\n> + of timezones in input date/time strings from US-centric to\n> + Australian-centric. Specifically, 'EST' is changed from GMT-5 (US\n> + Eastern Standard Time) to GMT+10 (Australian Eastern Standard Time)\n> + and 'CST' is changed from GMT-5:30 (US Central Standard Time) to\n> + GMT+10:30 (Australian Central Standard Time).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> <term>--enable-locale</term>\n> <listitem>\n> <para>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Jun 2001 18:52:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "Hi all,\n\nCan we *not* make USE_AUSTRALIAN_RULES into a \"configure\" option?\n\nSeems like this would conflict with Tom's suggestion of making it a GUC \noption instead.\n\nI vote for having it as a GUC option instead.\n\nRegards and best wishes,\n\nJustin Clift\n\nOn Wednesday 06 June 2001 10:43, Chris Dunlop wrote:\n> Tom Lane wrote:\n> > Could we also trouble you for documentation patches? IIRC, there's a\n> > list of all interesting configure options somewhere in the\n> > administrator's guide.\n>\n> Documentation ? We don't need no steenking documentation!...\n>\n>\n> diff -ur postgresql-7.1.2.orig/doc/src/sgml/installation.sgml\n> postgresql-7.1.2/doc/src/sgml/installation.sgml ---\n> postgresql-7.1.2.orig/doc/src/sgml/installation.sgml\tTue May 15 01:11:31\n> 2001 +++ postgresql-7.1.2/doc/src/sgml/installation.sgml\tWed Jun 6\n> 10:35:30 2001 @@ -462,6 +462,20 @@\n> </varlistentry>\n>\n> <varlistentry>\n> + <term>--enable-australian-tz</term>\n> + <listitem>\n> + <para>\n> + Enables Australian timezone support. This changes the\n> interpretation + of timezones in input date/time strings from\n> US-centric to + Australian-centric. Specifically, 'EST' is changed\n> from GMT-5 (US + Eastern Standard Time) to GMT+10 (Australian\n> Eastern Standard Time) + and 'CST' is changed from GMT-5:30 (US\n> Central Standard Time) to + GMT+10:30 (Australian Central Standard\n> Time).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> <term>--enable-locale</term>\n> <listitem>\n> <para>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Tue, 12 Jun 2001 12:25:11 +1000",
"msg_from": "Justin Clift <aa2@bigpond.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "On Mon, Jun 11, 2001 at 06:52:52PM -0400, Bruce Momjian wrote:\n> \n> I have decided to make this configurable via postgresql.conf so you\n> don't need separate binaries / configure switch to run in Australia. I\n> will send a patch over for testing.\n\n\nGreat, that's better than a configure option...\n\n\n> > diff -ur postgresql-7.1.2.orig/doc/src/sgml/installation.sgml postgresql-7.1.2/doc/src/sgml/installation.sgml\n> > --- postgresql-7.1.2.orig/doc/src/sgml/installation.sgml\tTue May 15 01:11:31 2001\n> > +++ postgresql-7.1.2/doc/src/sgml/installation.sgml\tWed Jun 6 10:35:30 2001\n> > @@ -462,6 +462,20 @@\n> > </varlistentry>\n> > \n> > <varlistentry>\n> > + <term>--enable-australian-tz</term>\n> > + <listitem>\n> > + <para>\n> > + Enables Australian timezone support. This changes the interpretation\n> > + of timezones in input date/time strings from US-centric to\n> > + Australian-centric. Specifically, 'EST' is changed from GMT-5 (US\n> > + Eastern Standard Time) to GMT+10 (Australian Eastern Standard Time)\n> > + and 'CST' is changed from GMT-5:30 (US Central Standard Time) to\n> > + GMT+10:30 (Australian Central Standard Time).\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + <varlistentry>\n> > <term>--enable-locale</term>\n> > <listitem>\n> > <para>\n",
"msg_date": "Tue, 12 Jun 2001 12:30:56 +1000",
"msg_from": "Chris Dunlop <chris@onthe.net.au>",
"msg_from_op": true,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "On Tue, Jun 12, 2001 at 12:25:11PM +1000, Justin Clift wrote:\n> Hi all,\n> \n> Can we *not* make USE_AUSTRALIAN_RULES into a \"configure\" option?\n> \n> Seems like this would conflict with Tom's suggestion of making it a GUC \n> option instead.\n> \n> I vote for having it as a GUC option instead.\n\n\nYup, I think Bruce is actually making it a GUC option. \n\n\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n",
"msg_date": "Tue, 12 Jun 2001 12:32:07 +1000",
"msg_from": "Chris Dunlop <chris@onthe.net.au>",
"msg_from_op": true,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> On Mon, Jun 11, 2001 at 06:52:52PM -0400, Bruce Momjian wrote:\n> > \n> > I have decided to make this configurable via postgresql.conf so you\n> > don't need separate binaries / configure switch to run in Australia. I\n> > will send a patch over for testing.\n> \n> \n> Great, that's better than a configure option...\n> \n\nI thought so.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Jun 2001 22:39:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> Hi,\n> \n> Being in Australia, it's always been a minor pain building the support\n> for Australian timezone rules by defining USE_AUSTRALIAN_RULES to the\n> compiler. Not to mention the not inconsiderable pain involved in pawing\n> through the code and documentation trying to work out why the timezones\n> were wrong in the first place.\n\nOK, this patch makes Australian_timezones a GUC option. It can be set\nanytime in psql. The code uses a static variable to check if the GUC\nsetting has changed and adjust the C struct accordingly. I have also\nadded code to allow the regression tests to pass even if postgresql.conf\nhas australian_timezones defined.\n\n\ttest=> select datetime('2001-01-01 00:00:00 EST');\n\t timestamp \n\t------------------------\n\t 2001-01-01 00:00:00-05\n\t(1 row)\n\t\n\ttest=> set australian_timezones = true;\n\tSET VARIABLE\n\ttest=> select datetime('2001-01-01 00:00:00 EST');\n\t timestamp \n\t------------------------\n\t 2000-12-31 09:00:00-05\n\t(1 row)\n\ttest=> reset all;\n\tRESET VARIABLE\n\ttest=> select datetime('2001-01-01 00:00:00 EST');\n\t timestamp \n\t------------------------\n\t 2001-01-01 00:00:00-05\n\t(1 row)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/utils/adt/datetime.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/datetime.c,v\nretrieving revision 1.64\ndiff -c -r1.64 datetime.c\n*** src/backend/utils/adt/datetime.c\t2001/05/03 22:53:07\t1.64\n--- src/backend/utils/adt/datetime.c\t2001/06/12 03:52:10\n***************\n*** 22,27 ****\n--- 22,28 ----\n #include <limits.h>\n \n #include \"miscadmin.h\"\n+ #include \"utils/guc.h\"\n #include \"utils/datetime.h\"\n \n static int DecodeNumber(int flen, char *field,\n***************\n*** 35,40 ****\n--- 36,42 ----\n static int\tDecodeTimezone(char *str, int *tzp);\n static datetkn *datebsearch(char *key, datetkn *base, unsigned int nel);\n static int\tDecodeDate(char *str, int fmask, int *tmask, struct tm * tm);\n+ static void\tCheckAustralianTimezones(int field);\n \n #define USE_DATE_CACHE 1\n #define ROUND_ALL 0\n***************\n*** 85,91 ****\n * entries by 10 and truncate the text field at MAXTOKLEN characters.\n * the text field is not guaranteed to be NULL-terminated.\n */\n! static datetkn datetktbl[] = {\n /*\t\ttext\t\t\ttoken\tlexval */\n \t{EARLY, RESERV, DTK_EARLY}, /* \"-infinity\" reserved for \"early time\" */\n \t{\"acsst\", DTZ, 63},\t\t\t/* Cent. Australia */\n--- 87,93 ----\n * entries by 10 and truncate the text field at MAXTOKLEN characters.\n * the text field is not guaranteed to be NULL-terminated.\n */\n! datetkn datetktbl[] = {\n /*\t\ttext\t\t\ttoken\tlexval */\n \t{EARLY, RESERV, DTK_EARLY}, /* \"-infinity\" reserved for \"early time\" */\n \t{\"acsst\", DTZ, 63},\t\t\t/* Cent. Australia */\n***************\n*** 117,127 ****\n \t{\"cdt\", DTZ, NEG(30)},\t\t/* Central Daylight Time */\n \t{\"cet\", TZ, 6},\t\t\t\t/* Central European Time */\n \t{\"cetdst\", DTZ, 12},\t\t/* Central European Dayl.Time */\n! #if USE_AUSTRALIAN_RULES\n! \t{\"cst\", TZ, 63},\t\t\t/* Australia Eastern Std Time */\n! 
#else\n! \t{\"cst\", TZ, NEG(36)},\t\t/* Central Standard Time */\n! #endif\n \t{DCURRENT, RESERV, DTK_CURRENT},\t/* \"current\" is always now */\n \t{\"dec\", MONTH, 12},\n \t{\"december\", MONTH, 12},\n--- 119,125 ----\n \t{\"cdt\", DTZ, NEG(30)},\t\t/* Central Daylight Time */\n \t{\"cet\", TZ, 6},\t\t\t\t/* Central European Time */\n \t{\"cetdst\", DTZ, 12},\t\t/* Central European Dayl.Time */\n! \t{\"cst\", TZ, NEG(36)},\t\t/* Central Standard Time, may be Australian */\n \t{DCURRENT, RESERV, DTK_CURRENT},\t/* \"current\" is always now */\n \t{\"dec\", MONTH, 12},\n \t{\"december\", MONTH, 12},\n***************\n*** 134,144 ****\n \t{\"eet\", TZ, 12},\t\t\t/* East. Europe, USSR Zone 1 */\n \t{\"eetdst\", DTZ, 18},\t\t/* Eastern Europe */\n \t{EPOCH, RESERV, DTK_EPOCH}, /* \"epoch\" reserved for system epoch time */\n! #if USE_AUSTRALIAN_RULES\n! \t{\"est\", TZ, 60},\t\t\t/* Australia Eastern Std Time */\n! #else\n! \t{\"est\", TZ, NEG(30)},\t\t/* Eastern Standard Time */\n! #endif\n \t{\"feb\", MONTH, 2},\n \t{\"february\", MONTH, 2},\n \t{\"fri\", DOW, 5},\n--- 132,138 ----\n \t{\"eet\", TZ, 12},\t\t\t/* East. Europe, USSR Zone 1 */\n \t{\"eetdst\", DTZ, 18},\t\t/* Eastern Europe */\n \t{EPOCH, RESERV, DTK_EPOCH}, /* \"epoch\" reserved for system epoch time */\n! \t{\"est\", TZ, NEG(30)},\t\t/* Eastern Standard Time, may be Australian */\n \t{\"feb\", MONTH, 2},\n \t{\"february\", MONTH, 2},\n \t{\"fri\", DOW, 5},\n***************\n*** 199,209 ****\n \t{\"pst\", TZ, NEG(48)},\t\t/* Pacific Standard Time */\n \t{\"sadt\", DTZ, 63},\t\t\t/* S. Australian Dayl. Time */\n \t{\"sast\", TZ, 57},\t\t\t/* South Australian Std Time */\n! #if USE_AUSTRALIAN_RULES\n! \t{\"sat\", TZ, 57},\n! #else\n! \t{\"sat\", DOW, 6},\n! #endif\n \t{\"saturday\", DOW, 6},\n \t{\"sep\", MONTH, 9},\n \t{\"sept\", MONTH, 9},\n--- 193,199 ----\n \t{\"pst\", TZ, NEG(48)},\t\t/* Pacific Standard Time */\n \t{\"sadt\", DTZ, 63},\t\t\t/* S. Australian Dayl. 
Time */\n \t{\"sast\", TZ, 57},\t\t\t/* South Australian Std Time */\n! \t{\"sat\", DOW, 6},\t\t\t/* may be changed to Australian */\n \t{\"saturday\", DOW, 6},\n \t{\"sep\", MONTH, 9},\n \t{\"sept\", MONTH, 9},\n***************\n*** 1618,1623 ****\n--- 1608,1615 ----\n \tint\t\t\ttype;\n \tdatetkn *tp;\n \n+ \tCheckAustralianTimezones(field);\n+ \n #if USE_DATE_CACHE\n \tif ((datecache[field] != NULL)\n \t\t&& (strncmp(lowtoken, datecache[field]->token, TOKMAXLEN) == 0))\n***************\n*** 2455,2457 ****\n--- 2447,2495 ----\n \n \treturn 0;\n }\t/* EncodeTimeSpan() */\n+ \n+ \n+ static void\tCheckAustralianTimezones(int field)\n+ {\n+ \tdatetkn *tp;\n+ \tint prev_Australian_timezones = false;\t/* structure preloaded as false */\n+ \n+ \tif (Australian_timezones != prev_Australian_timezones)\n+ \t{\n+ #if USE_DATE_CACHE\n+ \t\tdatecache[field] = NULL;\n+ #endif\n+ \t\t/* CST */\n+ \t\ttp = datebsearch(\"cst\", datetktbl, szdatetktbl);\n+ \t\tAssert(tp);\n+ \t\ttp->type = TZ;\n+ \t\tif (!Australian_timezones)\n+ \t\t\ttp->value = NEG(36);\t/* Central Standard Time */\n+ \t\telse\n+ \t\t\ttp->value = 63;\t\t\t/* Australia Eastern Std Time */\n+ \n+ \t\t/* EST */\n+ \t\ttp = datebsearch(\"est\", datetktbl, szdatetktbl);\n+ \t\tAssert(tp);\n+ \t\ttp->type = TZ;\n+ \t\tif (!Australian_timezones)\n+ \t\t\ttp->value = NEG(30);\t/* Eastern Standard Time */\n+ \t\telse\n+ \t\t\ttp->value = 60;\t\t\t/* Australia Eastern Std Time */\n+ \n+ \t\t/* SAT */\n+ \t\ttp = datebsearch(\"sat\", datetktbl, szdatetktbl);\n+ \t\tAssert(tp);\n+ \t\tif (!Australian_timezones)\n+ \t\t{\n+ \t\t\ttp->type = DOW;\n+ \t\t\ttp->value = 6;\n+ \t\t}\n+ \t\telse\n+ \t\t{\n+ \t\t\ttp->type = TZ;\n+ \t\t\ttp->value = 57;\n+ \t\t}\n+ \t\tprev_Australian_timezones = Australian_timezones;\n+ \t}\n+ }\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/guc.c,v\nretrieving revision 
1.37\ndiff -c -r1.37 guc.c\n*** src/backend/utils/misc/guc.c\t2001/06/07 04:50:57\t1.37\n--- src/backend/utils/misc/guc.c\t2001/06/12 03:52:11\n***************\n*** 71,76 ****\n--- 71,78 ----\n \n bool\t\tSQL_inheritance = true;\n \n+ bool\t\tAustralian_timezones = false;\n+ \n #ifndef PG_KRB_SRVTAB\n #define PG_KRB_SRVTAB \"\"\n #endif\n***************\n*** 222,227 ****\n--- 224,230 ----\n \t{\"show_source_port\", PGC_SIGHUP, &ShowPortNumber, false},\n \n \t{\"sql_inheritance\", PGC_USERSET, &SQL_inheritance, true},\n+ \t{\"australian_timezones\", PGC_USERSET, &Australian_timezones, false},\n \n \t{\"fixbtree\", PGC_POSTMASTER, &FixBTree, true},\n \n***************\n*** 880,885 ****\n--- 883,889 ----\n \t\tcase PGC_BOOL:\n \t\t\tval = *((struct config_bool *) record)->variable ? \"on\" : \"off\";\n \t\t\tbreak;\n+ \n \t\tcase PGC_INT:\n \t\t\tsnprintf(buffer, 256, \"%d\", *((struct config_int *) record)->variable);\n \t\t\tval = buffer;\n***************\n*** 955,961 ****\n \t\t\telog(FATAL, \"out of memory\");\n \t}\n \telse\n! /* no equal sign in string */\n \t{\n \t\t*name = strdup(string);\n \t\tif (!*name)\n--- 959,965 ----\n \t\t\telog(FATAL, \"out of memory\");\n \t}\n \telse\n! 
\t/* no equal sign in string */\n \t{\n \t\t*name = strdup(string);\n \t\tif (!*name)\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.11\ndiff -c -r1.11 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t2001/05/07 23:32:55\t1.11\n--- src/backend/utils/misc/postgresql.conf.sample\t2001/06/12 03:52:11\n***************\n*** 172,174 ****\n--- 172,180 ----\n #trace_lock_oidmin = 16384\n #trace_lock_table = 0\n #endif\n+ \n+ \n+ #\n+ #\tLock Tracing\n+ #\n+ #australian_timezones = false\nIndex: src/include/utils/datetime.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/datetime.h,v\nretrieving revision 1.18\ndiff -c -r1.18 datetime.h\n*** src/include/utils/datetime.h\t2001/05/03 22:53:07\t1.18\n--- src/include/utils/datetime.h\t2001/06/12 03:52:12\n***************\n*** 182,187 ****\n--- 182,188 ----\n \tchar\t\tvalue;\t\t\t/* this may be unsigned, alas */\n } datetkn;\n \n+ extern datetkn datetktbl[];\n \n /* TMODULO()\n * Macro to replace modf(), which is broken on some platforms.\nIndex: src/include/utils/guc.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/guc.h,v\nretrieving revision 1.7\ndiff -c -r1.7 guc.h\n*** src/include/utils/guc.h\t2001/06/07 04:50:57\t1.7\n--- src/include/utils/guc.h\t2001/06/12 03:52:12\n***************\n*** 68,72 ****\n--- 68,73 ----\n extern bool Show_btree_build_stats;\n \n extern bool SQL_inheritance;\n+ extern bool Australian_timezones;\n \n #endif\t /* GUC_H */\nIndex: src/test/regress/expected/horology-no-DST-before-1970.out\n===================================================================\nRCS file: 
/home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology-no-DST-before-1970.out,v\nretrieving revision 1.12\ndiff -c -r1.12 horology-no-DST-before-1970.out\n*** src/test/regress/expected/horology-no-DST-before-1970.out\t2001/04/06 05:50:25\t1.12\n--- src/test/regress/expected/horology-no-DST-before-1970.out\t2001/06/12 03:52:16\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\nIndex: src/test/regress/expected/horology-solaris-1947.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology-solaris-1947.out,v\nretrieving revision 1.10\ndiff -c -r1.10 horology-solaris-1947.out\n*** src/test/regress/expected/horology-solaris-1947.out\t2001/04/06 05:50:25\t1.10\n--- src/test/regress/expected/horology-solaris-1947.out\t2001/06/12 03:52:17\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\nIndex: src/test/regress/expected/horology.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology.out,v\nretrieving revision 1.23\ndiff -c -r1.23 horology.out\n*** src/test/regress/expected/horology.out\t2001/04/06 05:50:25\t1.23\n--- src/test/regress/expected/horology.out\t2001/06/12 03:52:19\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\nIndex: 
src/test/regress/expected/timestamp.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/timestamp.out,v\nretrieving revision 1.12\ndiff -c -r1.12 timestamp.out\n*** src/test/regress/expected/timestamp.out\t2001/05/03 19:00:37\t1.12\n--- src/test/regress/expected/timestamp.out\t2001/06/12 03:52:19\n***************\n*** 4,9 ****\n--- 4,11 ----\n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n True \n ------\nIndex: src/test/regress/sql/horology.sql\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/sql/horology.sql,v\nretrieving revision 1.14\ndiff -c -r1.14 horology.sql\n*** src/test/regress/sql/horology.sql\t2001/04/06 05:50:29\t1.14\n--- src/test/regress/sql/horology.sql\t2001/06/12 03:52:19\n***************\n*** 1,10 ****\n --\n -- HOROLOGY\n --\n- \n --\n -- date, time arithmetic\n --\n \n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n \n--- 1,11 ----\n --\n -- HOROLOGY\n --\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n \n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n \nIndex: src/test/regress/sql/timestamp.sql\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/sql/timestamp.sql,v\nretrieving revision 1.7\ndiff -c -r1.7 timestamp.sql\n*** src/test/regress/sql/timestamp.sql\t2000/11/25 05:00:33\t1.7\n--- src/test/regress/sql/timestamp.sql\t2001/06/12 03:52:20\n***************\n*** 1,10 ****\n --\n -- DATETIME\n --\n- \n -- Shorthand values\n -- Not 
directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n \n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";\n--- 1,11 ----\n --\n -- DATETIME\n --\n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n \n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";",
"msg_date": "Mon, 11 Jun 2001 23:53:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> > Hi,\n> > \n> > Being in Australia, it's always been a minor pain building the support\n> > for Australian timezone rules by defining USE_AUSTRALIAN_RULES to the\n> > compiler. Not to mention the not inconsiderable pain involved in pawing\n> > through the code and documentation trying to work out why the timezones\n> > were wrong in the first place.\n> \n> OK, this patch makes Australian_timezones a GUC option. It can be set\n> anytime in psql. The code uses a static variable to check if the GUC\n> setting has changed and adjust the C struct accordingly. I have also\n> added code to allow the regression tests to pass even if postgresql.conf\n> has australian_timezones defined.\n\nHere is a diff for the documentation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: runtime.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.67\ndiff -c -r1.67 runtime.sgml\n*** runtime.sgml\t2001/05/17 17:44:17\t1.67\n--- runtime.sgml\t2001/06/12 04:45:40\n***************\n*** 1201,1206 ****\n--- 1201,1217 ----\n </listitem>\n </varlistentry>\n \n+ <term>AUSTRALIAN_TIMEZONES (<type>bool</type>)</term>\n+ <listitem>\n+ <para>\n+ If set to true, <literal>CST</literal>, <literal>EST</literal>,\n+ and <literal>SAT</literal> are interpreted as Australian\n+ timezones rather than as North American Central/Eastern\n+ Timezones and Saturday. The default is false.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <indexterm>\n <primary>SSL</primary>",
"msg_date": "Tue, 12 Jun 2001 00:46:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "On Mon, Jun 11, 2001 at 11:53:59PM -0400, Bruce Momjian wrote:\n> > Hi,\n> > \n> > Being in Australia, it's always been a minor pain building the support\n> > for Australian timezone rules by defining USE_AUSTRALIAN_RULES to the\n> > compiler. Not to mention the not inconsiderable pain involved in pawing\n> > through the code and documentation trying to work out why the timezones\n> > were wrong in the first place.\n> \n> OK, this patch makes Australian_timezones a GUC option. It can be set\n> anytime in psql. The code uses a static variable to check if the GUC\n> setting has changed and adjust the C struct accordingly. I have also\n> added code to allow the regression tests to pass even if postgresql.conf\n> has australian_timezones defined.\n\n\nYour patch had one reject against 7.1.2 (a single blank line in guc.c),\nbut it works for me once that was fixed.\n\nBelow is the patch against 7.1.2 I generated from your cvs patch.\n\nI guess some documentation would be nice...\n\nThanks for your effort!\n\n\nCheers,\n\nChris,\nOnTheNet\n\n\n\ndiff -ru postgresql-7.1.2.orig/src/backend/utils/adt/datetime.c postgresql-7.1.2/src/backend/utils/adt/datetime.c\n--- postgresql-7.1.2.orig/src/backend/utils/adt/datetime.c\tFri May 4 08:53:07 2001\n+++ postgresql-7.1.2/src/backend/utils/adt/datetime.c\tTue Jun 12 14:14:38 2001\n@@ -22,6 +22,7 @@\n #include <limits.h>\n \n #include \"miscadmin.h\"\n+#include \"utils/guc.h\"\n #include \"utils/datetime.h\"\n \n static int DecodeNumber(int flen, char *field,\n@@ -35,6 +36,7 @@\n static int\tDecodeTimezone(char *str, int *tzp);\n static datetkn *datebsearch(char *key, datetkn *base, unsigned int nel);\n static int\tDecodeDate(char *str, int fmask, int *tmask, struct tm * tm);\n+static void\tCheckAustralianTimezones(int field);\n \n #define USE_DATE_CACHE 1\n #define ROUND_ALL 0\n@@ -85,7 +87,7 @@\n * entries by 10 and truncate the text field at MAXTOKLEN characters.\n * the text field is not guaranteed to be 
NULL-terminated.\n */\n-static datetkn datetktbl[] = {\n+datetkn datetktbl[] = {\n /*\t\ttext\t\t\ttoken\tlexval */\n \t{EARLY, RESERV, DTK_EARLY}, /* \"-infinity\" reserved for \"early time\" */\n \t{\"acsst\", DTZ, 63},\t\t\t/* Cent. Australia */\n@@ -117,11 +119,7 @@\n \t{\"cdt\", DTZ, NEG(30)},\t\t/* Central Daylight Time */\n \t{\"cet\", TZ, 6},\t\t\t\t/* Central European Time */\n \t{\"cetdst\", DTZ, 12},\t\t/* Central European Dayl.Time */\n-#if USE_AUSTRALIAN_RULES\n-\t{\"cst\", TZ, 63},\t\t\t/* Australia Eastern Std Time */\n-#else\n-\t{\"cst\", TZ, NEG(36)},\t\t/* Central Standard Time */\n-#endif\n+\t{\"cst\", TZ, NEG(36)},\t\t/* Central Standard Time, may be Australian */\n \t{DCURRENT, RESERV, DTK_CURRENT},\t/* \"current\" is always now */\n \t{\"dec\", MONTH, 12},\n \t{\"december\", MONTH, 12},\n@@ -134,11 +132,7 @@\n \t{\"eet\", TZ, 12},\t\t\t/* East. Europe, USSR Zone 1 */\n \t{\"eetdst\", DTZ, 18},\t\t/* Eastern Europe */\n \t{EPOCH, RESERV, DTK_EPOCH}, /* \"epoch\" reserved for system epoch time */\n-#if USE_AUSTRALIAN_RULES\n-\t{\"est\", TZ, 60},\t\t\t/* Australia Eastern Std Time */\n-#else\n-\t{\"est\", TZ, NEG(30)},\t\t/* Eastern Standard Time */\n-#endif\n+\t{\"est\", TZ, NEG(30)},\t\t/* Eastern Standard Time, may be Australian */\n \t{\"feb\", MONTH, 2},\n \t{\"february\", MONTH, 2},\n \t{\"fri\", DOW, 5},\n@@ -199,11 +193,7 @@\n \t{\"pst\", TZ, NEG(48)},\t\t/* Pacific Standard Time */\n \t{\"sadt\", DTZ, 63},\t\t\t/* S. Australian Dayl. 
Time */\n \t{\"sast\", TZ, 57},\t\t\t/* South Australian Std Time */\n-#if USE_AUSTRALIAN_RULES\n-\t{\"sat\", TZ, 57},\n-#else\n-\t{\"sat\", DOW, 6},\n-#endif\n+\t{\"sat\", DOW, 6},\t\t\t/* may be changed to Australian */\n \t{\"saturday\", DOW, 6},\n \t{\"sep\", MONTH, 9},\n \t{\"sept\", MONTH, 9},\n@@ -1618,6 +1608,8 @@\n \tint\t\t\ttype;\n \tdatetkn *tp;\n \n+\tCheckAustralianTimezones(field);\n+\n #if USE_DATE_CACHE\n \tif ((datecache[field] != NULL)\n \t\t&& (strncmp(lowtoken, datecache[field]->token, TOKMAXLEN) == 0))\n@@ -2455,3 +2447,49 @@\n \n \treturn 0;\n }\t/* EncodeTimeSpan() */\n+\n+\n+static void\tCheckAustralianTimezones(int field)\n+{\n+\tdatetkn *tp;\n+\tint prev_Australian_timezones = false;\t/* structure preloaded as false */\n+\n+\tif (Australian_timezones != prev_Australian_timezones)\n+\t{\n+#if USE_DATE_CACHE\n+\t\tdatecache[field] = NULL;\n+#endif\n+\t\t/* CST */\n+\t\ttp = datebsearch(\"cst\", datetktbl, szdatetktbl);\n+\t\tAssert(tp);\n+\t\ttp->type = TZ;\n+\t\tif (!Australian_timezones)\n+\t\t\ttp->value = NEG(36);\t/* Central Standard Time */\n+\t\telse\n+\t\t\ttp->value = 63;\t\t\t/* Australia Eastern Std Time */\n+\n+\t\t/* EST */\n+\t\ttp = datebsearch(\"est\", datetktbl, szdatetktbl);\n+\t\tAssert(tp);\n+\t\ttp->type = TZ;\n+\t\tif (!Australian_timezones)\n+\t\t\ttp->value = NEG(30);\t/* Eastern Standard Time */\n+\t\telse\n+\t\t\ttp->value = 60;\t\t\t/* Australia Eastern Std Time */\n+\n+\t\t/* SAT */\n+\t\ttp = datebsearch(\"sat\", datetktbl, szdatetktbl);\n+\t\tAssert(tp);\n+\t\tif (!Australian_timezones)\n+\t\t{\n+\t\t\ttp->type = DOW;\n+\t\t\ttp->value = 6;\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t\ttp->type = TZ;\n+\t\t\ttp->value = 57;\n+\t\t}\n+\t\tprev_Australian_timezones = Australian_timezones;\n+\t}\n+}\ndiff -ru postgresql-7.1.2.orig/src/backend/utils/misc/guc.c postgresql-7.1.2/src/backend/utils/misc/guc.c\n--- postgresql-7.1.2.orig/src/backend/utils/misc/guc.c\tFri Mar 23 04:41:47 2001\n+++ 
postgresql-7.1.2/src/backend/utils/misc/guc.c\tTue Jun 12 14:14:38 2001\n@@ -70,6 +70,8 @@\n \n bool\t\tSQL_inheritance = true;\n \n+bool\t\tAustralian_timezones = false;\n+\n #ifndef PG_KRB_SRVTAB\n #define PG_KRB_SRVTAB \"\"\n #endif\n@@ -220,6 +222,7 @@\n \t{\"show_source_port\", PGC_SIGHUP, &ShowPortNumber, false},\n \n \t{\"sql_inheritance\", PGC_USERSET, &SQL_inheritance, true},\n+\t{\"australian_timezones\", PGC_USERSET, &Australian_timezones, false},\n \n \t{\"fixbtree\", PGC_POSTMASTER, &FixBTree, true},\n \n@@ -856,7 +859,7 @@\n \t\t\telog(FATAL, \"out of memory\");\n \t}\n \telse\n-/* no equal sign in string */\n+\t/* no equal sign in string */\n \t{\n \t\t*name = strdup(string);\n \t\tif (!*name)\ndiff -ru postgresql-7.1.2.orig/src/backend/utils/misc/postgresql.conf.sample postgresql-7.1.2/src/backend/utils/misc/postgresql.conf.sample\n--- postgresql-7.1.2.orig/src/backend/utils/misc/postgresql.conf.sample\tFri Mar 16 16:44:33 2001\n+++ postgresql-7.1.2/src/backend/utils/misc/postgresql.conf.sample\tTue Jun 12 14:14:38 2001\n@@ -172,3 +172,9 @@\n #trace_lock_oidmin = 16384\n #trace_lock_table = 0\n #endif\n+\n+\n+#\n+#\tLock Tracing\n+#\n+#australian_timezones = false\ndiff -ru postgresql-7.1.2.orig/src/include/utils/datetime.h postgresql-7.1.2/src/include/utils/datetime.h\n--- postgresql-7.1.2.orig/src/include/utils/datetime.h\tFri May 4 08:53:07 2001\n+++ postgresql-7.1.2/src/include/utils/datetime.h\tTue Jun 12 14:14:38 2001\n@@ -182,6 +182,7 @@\n \tchar\t\tvalue;\t\t\t/* this may be unsigned, alas */\n } datetkn;\n \n+extern datetkn datetktbl[];\n \n /* TMODULO()\n * Macro to replace modf(), which is broken on some platforms.\ndiff -ru postgresql-7.1.2.orig/src/include/utils/guc.h postgresql-7.1.2/src/include/utils/guc.h\n--- postgresql-7.1.2.orig/src/include/utils/guc.h\tThu Mar 22 15:01:12 2001\n+++ postgresql-7.1.2/src/include/utils/guc.h\tTue Jun 12 14:14:38 2001\n@@ -67,5 +67,6 @@\n extern bool Show_btree_build_stats;\n \n extern bool 
SQL_inheritance;\n+extern bool Australian_timezones;\n \n #endif\t /* GUC_H */\ndiff -ru postgresql-7.1.2.orig/src/test/regress/expected/horology-no-DST-before-1970.out postgresql-7.1.2/src/test/regress/expected/horology-no-DST-before-1970.out\n--- postgresql-7.1.2.orig/src/test/regress/expected/horology-no-DST-before-1970.out\tFri Apr 6 15:50:25 2001\n+++ postgresql-7.1.2/src/test/regress/expected/horology-no-DST-before-1970.out\tTue Jun 12 14:14:38 2001\n@@ -4,6 +4,8 @@\n --\n -- date, time arithmetic\n --\n+-- needed so tests pass\n+SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\ndiff -ru postgresql-7.1.2.orig/src/test/regress/expected/horology-solaris-1947.out postgresql-7.1.2/src/test/regress/expected/horology-solaris-1947.out\n--- postgresql-7.1.2.orig/src/test/regress/expected/horology-solaris-1947.out\tFri Apr 6 15:50:25 2001\n+++ postgresql-7.1.2/src/test/regress/expected/horology-solaris-1947.out\tTue Jun 12 14:14:38 2001\n@@ -4,6 +4,8 @@\n --\n -- date, time arithmetic\n --\n+-- needed so tests pass\n+SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\ndiff -ru postgresql-7.1.2.orig/src/test/regress/expected/horology.out postgresql-7.1.2/src/test/regress/expected/horology.out\n--- postgresql-7.1.2.orig/src/test/regress/expected/horology.out\tFri Apr 6 15:50:25 2001\n+++ postgresql-7.1.2/src/test/regress/expected/horology.out\tTue Jun 12 14:14:38 2001\n@@ -4,6 +4,8 @@\n --\n -- date, time arithmetic\n --\n+-- needed so tests pass\n+SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\ndiff -ru postgresql-7.1.2.orig/src/test/regress/expected/timestamp.out postgresql-7.1.2/src/test/regress/expected/timestamp.out\n--- 
postgresql-7.1.2.orig/src/test/regress/expected/timestamp.out\tFri May 4 05:00:37 2001\n+++ postgresql-7.1.2/src/test/regress/expected/timestamp.out\tTue Jun 12 14:14:38 2001\n@@ -4,6 +4,8 @@\n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n+-- needed so tests pass\n+SET australian_timezones = 'off';\n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n True \n ------\ndiff -ru postgresql-7.1.2.orig/src/test/regress/sql/horology.sql postgresql-7.1.2/src/test/regress/sql/horology.sql\n--- postgresql-7.1.2.orig/src/test/regress/sql/horology.sql\tFri Apr 6 15:50:29 2001\n+++ postgresql-7.1.2/src/test/regress/sql/horology.sql\tTue Jun 12 14:14:38 2001\n@@ -1,10 +1,11 @@\n --\n -- HOROLOGY\n --\n-\n --\n -- date, time arithmetic\n --\n+-- needed so tests pass\n+SET australian_timezones = 'off';\n \n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n \ndiff -ru postgresql-7.1.2.orig/src/test/regress/sql/timestamp.sql postgresql-7.1.2/src/test/regress/sql/timestamp.sql\n--- postgresql-7.1.2.orig/src/test/regress/sql/timestamp.sql\tSat Nov 25 16:00:33 2000\n+++ postgresql-7.1.2/src/test/regress/sql/timestamp.sql\tTue Jun 12 14:14:38 2001\n@@ -1,10 +1,11 @@\n --\n -- DATETIME\n --\n-\n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n+-- needed so tests pass\n+SET australian_timezones = 'off';\n \n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";\n",
"msg_date": "Tue, 12 Jun 2001 14:46:54 +1000",
"msg_from": "Chris Dunlop <chris@onthe.net.au>",
"msg_from_op": true,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> > OK, this patch makes Australian_timezones a GUC option. It can be set\n> > anytime in psql. The code uses a static variable to check if the GUC\n> > setting has changed and adjust the C struct accordingly. I have also\n> > added code to allow the regression tests to pass even if postgresql.conf\n> > has australian_timezones defined.\n> \n> \n> Your patch had one reject against 7.1.2 (a single blank line in guc.c),\n> but it works for me once that was fixed.\n> \n> Below is the patch against 7.1.2 I generated from your cvs patch.\n\nWe will not be adding this to 7.1.X though you are free to use it in\nthat version. It will appear in 7.2.\n\n> I guess some documentation would be nice...\n\nDone.\n\nThanks for testing it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 00:51:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, this patch makes Australian_timezones a GUC option. It can be set\n> anytime in psql. The code uses a static variable to check if the GUC\n> setting has changed and adjust the C struct accordingly.\n\nThis is a horrid approach. What if you get an error partway through\nscribbling on the static table? Then you've got inconsistent data.\n\nNor do I much care for having to execute a check subroutine before any\nuse of the lookup table (quite aside from speed, are you sure it's being\ncalled before *every* use of the table? how will you make sure that\npeople remember to call it when they add new routines that use the\ntable?). If you're going to scribble on the table, ISTM you should\ndrive that off an assignment-hook callback from the GUC stuff.\n\nBesides which, you forgot to mark the control variable static...\nso it doesn't actually reflect the state of the table.\n\nIt would be a lot cleaner to extend the lookup table structure so that\nyou don't need to change the table contents to track the Aussie-rules\nsetting.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Jun 2001 02:55:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option "
},
{
"msg_contents": "(moved to -hackers list, since this is a *feature change* which should\nnot just be discussed in -patches)\n\n>>I have decided to make this configurable via postgresql.conf so you\n>>don't need separate binaries / configure switch to run in Australia.\n>>I will send a patch over for testing.\n> We will not be adding this to 7.1.X though you are free to use it in\n> that version. It will appear in 7.2.\n\nUm, er...\n\nI'm not particularly happy about the solution, and would like to not see\nit in the main source code without further discussion. Sorry I was out\nof town for the extensive two hour discussion on the topic ;)\n\nOne could categorize the \"Australian problem\" as an example of a\nlocalization, and brute-force calls to work around it are a step in the\nwrong direction imho. Particularly since Australia contributes fully 25%\nof the time zone information in our lookup table, including multiple\nsynonyms for several time zones. Just as we might want to send a message\nto M$ that their SQL hacks are not acceptable, we should send a message\nto Australia that playing fast and loose with time zone names should not\nbe tolerated. Hmm, and while we are at it we should do something about\nworld hunger and arms race issues. Patches coming soon ;) ;)\n\nOK, those last few sentences were not serious. But, I would like a\nsolution that starts to address long-term issues in time zone support.\n\nBefore hacking the rather carefully evolved static tables let's consider\nhow to support time localization generally (e.g. language-specific names\nfor months). In the meantime, a compile-time solution for more easily\nsetting the \"CST\" interpretation would seem to be an incremental\nimprovement for \"buildability\" (and this has already been submitted).\n\nHow about implementing an optional db table-based approach for this\nlookup and the other localized info? 
If it were a compile-time option we\ncould evaluate the performance impact and allow folks to trade\nperformance vs features. And perhaps (later, much later. Weeks later...\n;) that choice could be a GUC parameter. Comments?\n\n - Thomas\n",
"msg_date": "Tue, 12 Jun 2001 15:04:52 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, this patch makes Australian_timezones a GUC option. It can be set\n> > anytime in psql. The code uses a static variable to check if the GUC\n> > setting has changed and adjust the C struct accordingly.\n> \n> This is a horrid approach. What if you get an error partway through\n> scribbling on the static table? Then you've got inconsistent data.\n\nI have set the variable to -1 on entry so it will reload on failure,\nthough it is only doing C lookups on a static table so it is hard to see\nhow it would fail.\n\n> Nor do I much care for having to execute a check subroutine before any\n> use of the lookup table (quite aside from speed, are you sure it's being\n> called before *every* use of the table? how will you make sure that\n> people remember to call it when they add new routines that use the\n> table?). If you're going to scribble on the table, ISTM you should\n> drive that off an assignment-hook callback from the GUC stuff.\n\nBut we don't have such hooks so I did it with as little code as\npossible. The table itself is 'static' so it is only called in this\nfunction. (The old patch had the static removed because I thought I was\ngoing to have to do this stuff in the guc files but now it is all in the\nsame file.)\n\n> Besides which, you forgot to mark the control variable static...\n> so it doesn't actually reflect the state of the table.\n\nFixed. I had that in an ealier version but forgot to add it when I\nmoved it out to a separate function.\n\n> It would be a lot cleaner to extend the lookup table structure so that\n> you don't need to change the table contents to track the Aussie-rules\n> setting.\n\nYes, when we do that we can remove the call and call it instead from the\nGUC setting.\n\nHere is a new version of the patch. 
I found I didn't need to clear the\ndate cache.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.67\ndiff -c -r1.67 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t2001/05/17 17:44:17\t1.67\n--- doc/src/sgml/runtime.sgml\t2001/06/12 15:02:15\n***************\n*** 1201,1206 ****\n--- 1201,1217 ----\n </listitem>\n </varlistentry>\n \n+ <term>AUSTRALIAN_TIMEZONES (<type>bool</type>)</term>\n+ <listitem>\n+ <para>\n+ If set to true, <literal>CST</literal>, <literal>EST</literal>,\n+ and <literal>SAT</literal> are interpreted as Australian\n+ timezones rather than as North American Central/Eastern\n+ Timezones and Saturday. The default is false.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <indexterm>\n <primary>SSL</primary>\nIndex: src/backend/utils/adt/datetime.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/datetime.c,v\nretrieving revision 1.64\ndiff -c -r1.64 datetime.c\n*** src/backend/utils/adt/datetime.c\t2001/05/03 22:53:07\t1.64\n--- src/backend/utils/adt/datetime.c\t2001/06/12 15:02:17\n***************\n*** 22,27 ****\n--- 22,28 ----\n #include <limits.h>\n \n #include \"miscadmin.h\"\n+ #include \"utils/guc.h\"\n #include \"utils/datetime.h\"\n \n static int DecodeNumber(int flen, char *field,\n***************\n*** 35,40 ****\n--- 36,42 ----\n static int\tDecodeTimezone(char *str, int *tzp);\n static datetkn *datebsearch(char *key, datetkn *base, unsigned int nel);\n static int\tDecodeDate(char *str, int fmask, int *tmask, struct tm * tm);\n+ static void\tCheckAustralianTimezones(int 
field);\n \n #define USE_DATE_CACHE 1\n #define ROUND_ALL 0\n***************\n*** 117,127 ****\n \t{\"cdt\", DTZ, NEG(30)},\t\t/* Central Daylight Time */\n \t{\"cet\", TZ, 6},\t\t\t\t/* Central European Time */\n \t{\"cetdst\", DTZ, 12},\t\t/* Central European Dayl.Time */\n! #if USE_AUSTRALIAN_RULES\n! \t{\"cst\", TZ, 63},\t\t\t/* Australia Eastern Std Time */\n! #else\n! \t{\"cst\", TZ, NEG(36)},\t\t/* Central Standard Time */\n! #endif\n \t{DCURRENT, RESERV, DTK_CURRENT},\t/* \"current\" is always now */\n \t{\"dec\", MONTH, 12},\n \t{\"december\", MONTH, 12},\n--- 119,125 ----\n \t{\"cdt\", DTZ, NEG(30)},\t\t/* Central Daylight Time */\n \t{\"cet\", TZ, 6},\t\t\t\t/* Central European Time */\n \t{\"cetdst\", DTZ, 12},\t\t/* Central European Dayl.Time */\n! \t{\"cst\", TZ, NEG(36)},\t\t/* Central Standard Time, may be Australian */\n \t{DCURRENT, RESERV, DTK_CURRENT},\t/* \"current\" is always now */\n \t{\"dec\", MONTH, 12},\n \t{\"december\", MONTH, 12},\n***************\n*** 134,144 ****\n \t{\"eet\", TZ, 12},\t\t\t/* East. Europe, USSR Zone 1 */\n \t{\"eetdst\", DTZ, 18},\t\t/* Eastern Europe */\n \t{EPOCH, RESERV, DTK_EPOCH}, /* \"epoch\" reserved for system epoch time */\n! #if USE_AUSTRALIAN_RULES\n! \t{\"est\", TZ, 60},\t\t\t/* Australia Eastern Std Time */\n! #else\n! \t{\"est\", TZ, NEG(30)},\t\t/* Eastern Standard Time */\n! #endif\n \t{\"feb\", MONTH, 2},\n \t{\"february\", MONTH, 2},\n \t{\"fri\", DOW, 5},\n--- 132,138 ----\n \t{\"eet\", TZ, 12},\t\t\t/* East. Europe, USSR Zone 1 */\n \t{\"eetdst\", DTZ, 18},\t\t/* Eastern Europe */\n \t{EPOCH, RESERV, DTK_EPOCH}, /* \"epoch\" reserved for system epoch time */\n! \t{\"est\", TZ, NEG(30)},\t\t/* Eastern Standard Time, may be Australian */\n \t{\"feb\", MONTH, 2},\n \t{\"february\", MONTH, 2},\n \t{\"fri\", DOW, 5},\n***************\n*** 199,209 ****\n \t{\"pst\", TZ, NEG(48)},\t\t/* Pacific Standard Time */\n \t{\"sadt\", DTZ, 63},\t\t\t/* S. Australian Dayl. 
Time */\n \t{\"sast\", TZ, 57},\t\t\t/* South Australian Std Time */\n! #if USE_AUSTRALIAN_RULES\n! \t{\"sat\", TZ, 57},\n! #else\n! \t{\"sat\", DOW, 6},\n! #endif\n \t{\"saturday\", DOW, 6},\n \t{\"sep\", MONTH, 9},\n \t{\"sept\", MONTH, 9},\n--- 193,199 ----\n \t{\"pst\", TZ, NEG(48)},\t\t/* Pacific Standard Time */\n \t{\"sadt\", DTZ, 63},\t\t\t/* S. Australian Dayl. Time */\n \t{\"sast\", TZ, 57},\t\t\t/* South Australian Std Time */\n! \t{\"sat\", DOW, 6},\t\t\t/* may be changed to Australian */\n \t{\"saturday\", DOW, 6},\n \t{\"sep\", MONTH, 9},\n \t{\"sept\", MONTH, 9},\n***************\n*** 1618,1623 ****\n--- 1608,1615 ----\n \tint\t\t\ttype;\n \tdatetkn *tp;\n \n+ \tCheckAustralianTimezones(field);\n+ \n #if USE_DATE_CACHE\n \tif ((datecache[field] != NULL)\n \t\t&& (strncmp(lowtoken, datecache[field]->token, TOKMAXLEN) == 0))\n***************\n*** 2455,2457 ****\n--- 2447,2494 ----\n \n \treturn 0;\n }\t/* EncodeTimeSpan() */\n+ \n+ \n+ static void\tCheckAustralianTimezones(int field)\n+ {\n+ \tdatetkn *tp;\n+ \t/* structure preloaded as false */\n+ \tstatic int prev_Australian_timezones = -1;\n+ \n+ \tif (Australian_timezones != prev_Australian_timezones)\n+ \t{\n+ \t\tprev_Australian_timezones = -1;\t/* in case it fails, force reload */\n+ \t\t/* CST */\n+ \t\ttp = datebsearch(\"cst\", datetktbl, szdatetktbl);\n+ \t\tAssert(tp);\n+ \t\ttp->type = TZ;\n+ \t\tif (!Australian_timezones)\n+ \t\t\ttp->value = NEG(36);\t/* Central Standard Time */\n+ \t\telse\n+ \t\t\ttp->value = 63;\t\t\t/* Australia Eastern Std Time */\n+ \n+ \t\t/* EST */\n+ \t\ttp = datebsearch(\"est\", datetktbl, szdatetktbl);\n+ \t\tAssert(tp);\n+ \t\ttp->type = TZ;\n+ \t\tif (!Australian_timezones)\n+ \t\t\ttp->value = NEG(30);\t/* Eastern Standard Time */\n+ \t\telse\n+ \t\t\ttp->value = 60;\t\t\t/* Australia Eastern Std Time */\n+ \n+ \t\t/* SAT */\n+ \t\ttp = datebsearch(\"sat\", datetktbl, szdatetktbl);\n+ \t\tAssert(tp);\n+ \t\tif (!Australian_timezones)\n+ \t\t{\n+ 
\t\t\ttp->type = DOW;\n+ \t\t\ttp->value = 6;\n+ \t\t}\n+ \t\telse\n+ \t\t{\n+ \t\t\ttp->type = TZ;\n+ \t\t\ttp->value = 57;\n+ \t\t}\n+ \t\tprev_Australian_timezones = Australian_timezones;\n+ \t}\n+ }\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/guc.c,v\nretrieving revision 1.37\ndiff -c -r1.37 guc.c\n*** src/backend/utils/misc/guc.c\t2001/06/07 04:50:57\t1.37\n--- src/backend/utils/misc/guc.c\t2001/06/12 15:02:17\n***************\n*** 71,76 ****\n--- 71,78 ----\n \n bool\t\tSQL_inheritance = true;\n \n+ bool\t\tAustralian_timezones = false;\n+ \n #ifndef PG_KRB_SRVTAB\n #define PG_KRB_SRVTAB \"\"\n #endif\n***************\n*** 222,227 ****\n--- 224,230 ----\n \t{\"show_source_port\", PGC_SIGHUP, &ShowPortNumber, false},\n \n \t{\"sql_inheritance\", PGC_USERSET, &SQL_inheritance, true},\n+ \t{\"australian_timezones\", PGC_USERSET, &Australian_timezones, false},\n \n \t{\"fixbtree\", PGC_POSTMASTER, &FixBTree, true},\n \n***************\n*** 880,885 ****\n--- 883,889 ----\n \t\tcase PGC_BOOL:\n \t\t\tval = *((struct config_bool *) record)->variable ? \"on\" : \"off\";\n \t\t\tbreak;\n+ \n \t\tcase PGC_INT:\n \t\t\tsnprintf(buffer, 256, \"%d\", *((struct config_int *) record)->variable);\n \t\t\tval = buffer;\n***************\n*** 955,961 ****\n \t\t\telog(FATAL, \"out of memory\");\n \t}\n \telse\n! /* no equal sign in string */\n \t{\n \t\t*name = strdup(string);\n \t\tif (!*name)\n--- 959,965 ----\n \t\t\telog(FATAL, \"out of memory\");\n \t}\n \telse\n! 
\t/* no equal sign in string */\n \t{\n \t\t*name = strdup(string);\n \t\tif (!*name)\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.11\ndiff -c -r1.11 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t2001/05/07 23:32:55\t1.11\n--- src/backend/utils/misc/postgresql.conf.sample\t2001/06/12 15:02:17\n***************\n*** 85,108 ****\n \n \n #\n- #\tInheritance\n- #\n- #sql_inheritance = true\n- \n- \n- #\n- #\tDeadlock\n- #\n- #deadlock_timeout = 1000\n- \n- \n- #\n- #\tExpression Depth Limitation\n- #\n- #max_expr_depth = 10000 # min 10\n- \n- \n- #\n #\tWrite-ahead log (WAL)\n #\n #wal_buffers = 8 # min 4\n--- 85,90 ----\n***************\n*** 172,174 ****\n--- 154,166 ----\n #trace_lock_oidmin = 16384\n #trace_lock_table = 0\n #endif\n+ \n+ \n+ #\n+ #\tMisc\n+ #\n+ #sql_inheritance = true\n+ #australian_timezones = false\n+ #deadlock_timeout = 1000\n+ #max_expr_depth = 10000 # min 10\n+ \nIndex: src/include/utils/datetime.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/datetime.h,v\nretrieving revision 1.18\ndiff -c -r1.18 datetime.h\n*** src/include/utils/datetime.h\t2001/05/03 22:53:07\t1.18\n--- src/include/utils/datetime.h\t2001/06/12 15:02:18\n***************\n*** 182,187 ****\n--- 182,188 ----\n \tchar\t\tvalue;\t\t\t/* this may be unsigned, alas */\n } datetkn;\n \n+ extern datetkn datetktbl[];\n \n /* TMODULO()\n * Macro to replace modf(), which is broken on some platforms.\nIndex: src/include/utils/guc.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/guc.h,v\nretrieving revision 1.7\ndiff -c -r1.7 guc.h\n*** src/include/utils/guc.h\t2001/06/07 04:50:57\t1.7\n--- 
src/include/utils/guc.h\t2001/06/12 15:02:18\n***************\n*** 68,72 ****\n--- 68,73 ----\n extern bool Show_btree_build_stats;\n \n extern bool SQL_inheritance;\n+ extern bool Australian_timezones;\n \n #endif\t /* GUC_H */\nIndex: src/test/regress/expected/horology-no-DST-before-1970.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology-no-DST-before-1970.out,v\nretrieving revision 1.12\ndiff -c -r1.12 horology-no-DST-before-1970.out\n*** src/test/regress/expected/horology-no-DST-before-1970.out\t2001/04/06 05:50:25\t1.12\n--- src/test/regress/expected/horology-no-DST-before-1970.out\t2001/06/12 15:02:24\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\nIndex: src/test/regress/expected/horology-solaris-1947.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology-solaris-1947.out,v\nretrieving revision 1.10\ndiff -c -r1.10 horology-solaris-1947.out\n*** src/test/regress/expected/horology-solaris-1947.out\t2001/04/06 05:50:25\t1.10\n--- src/test/regress/expected/horology-solaris-1947.out\t2001/06/12 15:02:24\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\nIndex: src/test/regress/expected/horology.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology.out,v\nretrieving revision 1.23\ndiff -c -r1.23 horology.out\n*** 
src/test/regress/expected/horology.out\t2001/04/06 05:50:25\t1.23\n--- src/test/regress/expected/horology.out\t2001/06/12 15:02:25\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\nIndex: src/test/regress/expected/timestamp.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/timestamp.out,v\nretrieving revision 1.12\ndiff -c -r1.12 timestamp.out\n*** src/test/regress/expected/timestamp.out\t2001/05/03 19:00:37\t1.12\n--- src/test/regress/expected/timestamp.out\t2001/06/12 15:02:26\n***************\n*** 4,9 ****\n--- 4,11 ----\n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n True \n ------\nIndex: src/test/regress/sql/horology.sql\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/sql/horology.sql,v\nretrieving revision 1.14\ndiff -c -r1.14 horology.sql\n*** src/test/regress/sql/horology.sql\t2001/04/06 05:50:29\t1.14\n--- src/test/regress/sql/horology.sql\t2001/06/12 15:02:26\n***************\n*** 1,10 ****\n --\n -- HOROLOGY\n --\n- \n --\n -- date, time arithmetic\n --\n \n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n \n--- 1,11 ----\n --\n -- HOROLOGY\n --\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n \n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n \nIndex: 
src/test/regress/sql/timestamp.sql\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/sql/timestamp.sql,v\nretrieving revision 1.7\ndiff -c -r1.7 timestamp.sql\n*** src/test/regress/sql/timestamp.sql\t2000/11/25 05:00:33\t1.7\n--- src/test/regress/sql/timestamp.sql\t2001/06/12 15:02:26\n***************\n*** 1,10 ****\n --\n -- DATETIME\n --\n- \n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n \n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";\n--- 1,11 ----\n --\n -- DATETIME\n --\n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n \n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";",
"msg_date": "Tue, 12 Jun 2001 11:18:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> (moved to -hackers list, since this is a *feature change* which should\n> not just be discussed in -patches)\n> \n> >>I have decided to make this configurable via postgresql.conf so you\n> >>don't need separate binaries / configure switch to run in Australia.\n> >>I will send a patch over for testing.\n> > We will not be adding this to 7.1.X though you are free to use it in\n> > that version. It will appear in 7.2.\n> \n> Um, er...\n> \n> I'm not particularly happy about the solution, and would like to not see\n> it in the main source code without further discussion. Sorry I was out\n> of town for the extensive two hour discussion on the topic ;)\n> \n> One could categorize the \"Australian problem\" as an example of a\n> localization, and brute-force calls to work around it are a step in the\n> wrong direction imho. Particularly since Australia contributes fully 25%\n> of the time zone information in our lookup table, including multiple\n> synonyms for several time zones. Just as we might want to send a message\n> to M$ that their SQL hacks are not acceptable, we should send a message\n> to Australia that playing fast and loose with time zone names should not\n> be tolerated. Hmm, and while we are at it we should do something about\n> world hunger and arms race issues. Patches coming soon ;) ;)\n> \n> OK, those last few sentences were not serious. But, I would like a\n> solution that starts to address long-term issues in time zone support.\n> \n> Before hacking the rather carefully evolved static tables let's consider\n> how to support time localization generally (e.g. language-specific names\n> for months). In the meantime, a compile-time solution for more easily\n> setting the \"CST\" interpretation would seem to be an incremental\n> improvement for \"buildability\" (and this has already been submitted).\n> \n> How about implementing an optional db table-based approach for this\n> lookup and the other localized info? 
If it were a compile-time option we\n> could evaluate the performance impact and allow folks to trade\n> performance vs features. And perhaps (later, much later. Weeks later...\n> ;) that choice could be a GUC parameter. Comments?\n\nThis is way beyond where I want to go with the Australian stuff and I\nhave not seen much demand from users for more than a GUC option. \nAustralians wanted a 'configure' flag, I made it GUC which gets reloaded\non first call and can later be assigned to a GUC hook when we get those.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 11:21:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Before hacking the rather carefully evolved static tables let's consider\n> how to support time localization generally (e.g. language-specific names\n> for months). In the meantime, a compile-time solution for more easily\n> setting the \"CST\" interpretation would seem to be an incremental\n> improvement for \"buildability\" (and this has already been submitted).\n\nI'm not particularly happy about \"popularizing\" that compile time option\nbeyond its current state (i.e., get in and edit config.h), and there's a\nreason why I haven't done it myself yet.\n\n--enable-xxx type configure options should, as a matter of principle, not\nreplace one behaviour by another. (The proposed option replaces U.S.\nrules by Australian rules.) In this case it might look like a minor\nissue, but it's a slippery slope. For one thing, packages built by\nAustralians will cease to behave reasonably in the rest of the world.\nThat is the same reason why we don't want people altering NAMEDATALEN and\nBLCKSZ from configure.\n\nA run-time option seems like the appropriate solution.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 12 Jun 2001 18:19:05 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Australian timezone configure option"
},
{
"msg_contents": "> Thomas Lockhart writes:\n> \n> > Before hacking the rather carefully evolved static tables let's consider\n> > how to support time localization generally (e.g. language-specific names\n> > for months). In the meantime, a compile-time solution for more easily\n> > setting the \"CST\" interpretation would seem to be an incremental\n> > improvement for \"buildability\" (and this has already been submitted).\n> \n> I'm not particularly happy about \"popularizing\" that compile time option\n> beyond its current state (i.e., get in and edit config.h), and there's a\n> reason why I haven't done it myself yet.\n> \n> --enable-xxx type configure options should, as a matter of principle, not\n> replace one behaviour by another. (The proposed option replaces U.S.\n> rules by Australian rules.) In this case it might look like a minor\n> issue, but it's a slippery slope. For one thing, packages built by\n> Australians will cease to behave reasonably in the rest of the world.\n> That is the same reason why we don't want people altering NAMEDATALEN and\n> BLCKSZ from configure.\n> \n> A run-time option seems like the appropriate solution.\n\nAgreed. Compile-time is just the wrong way to go. We should have\ncompile-time stuff that just relates to the OS/compiler and sometimes\ninstalled software, or simply stuff that just can't be changed without a\nrecompile/initdb like NAMEDATALEN.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 12:21:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Australian timezone configure option"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Here is a new version of the patch.\n\nIt's still horridly ugly. Why not put the three Aussie-specific entries\nin a separate aussie_datetktbl array, and have the lookup look like\n\n if (Australian_timezones)\n {\n tp = datebsearch(lowtoken, aussie_datetktbl, sz_aussie_datetktbl);\n if (tp == NULL)\n tp = datebsearch(lowtoken, datetktbl, szdatetktbl);\n }\n else\n tp = datebsearch(lowtoken, datetktbl, szdatetktbl);\n\ninstead of modifying the lookup table on the fly.\n\n> I found I didn't need to clear the date cache.\n\nHmm, are you sure about that? I'm not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Jun 2001 12:22:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option "
},
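[Editorial note: Tom's two-table lookup above can be sketched as a standalone C fragment. The `tkn` struct, the table contents, and the offsets-in-minutes values here are simplified stand-ins for the real `datetkn` tables in datetime.c, not the actual PostgreSQL code; only the shape of the lookup (overrides table first, default table as fallback) is taken from the message.]

```c
#include <stdlib.h>
#include <string.h>

typedef struct { const char *token; int value; } tkn;

/* Simplified stand-ins for the real tables; values are UTC offsets in
 * minutes.  Both arrays must stay sorted by token for bsearch(). */
static const tkn aussie_tbl[]  = { {"cst", 630}, {"est", 600}, {"sat", 570} };
static const tkn default_tbl[] = { {"cst", -360}, {"est", -300}, {"gmt", 0} };

static int cmp(const void *key, const void *elem)
{
    return strcmp((const char *) key, ((const tkn *) elem)->token);
}

static const tkn *tbl_search(const char *key, const tkn *base, size_t nel)
{
    return bsearch(key, base, nel, sizeof(tkn), cmp);
}

/* Two-table lookup: consult the Australian overrides first, then fall
 * back to the default table, instead of patching one table in place. */
const tkn *lookup(const char *key, int australian_timezones)
{
    const tkn *tp = NULL;

    if (australian_timezones)
        tp = tbl_search(key, aussie_tbl,
                        sizeof aussie_tbl / sizeof aussie_tbl[0]);
    if (tp == NULL)
        tp = tbl_search(key, default_tbl,
                        sizeof default_tbl / sizeof default_tbl[0]);
    return tp;
}
```

With this shape, `lookup("est", 1)` resolves to the Australian +10:00 entry while `lookup("est", 0)` still finds US Eastern, and tokens absent from the override table fall through unchanged.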
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Here is a new version of the patch.\n> \n> It's still horridly ugly. Why not put the three Aussie-specific entries\n> in a separate aussie_datetktbl array, and have the lookup look like\n> \n> if (Australian_timezones)\n> {\n> tp = datebsearch(lowtoken, aussie_datetktbl, sz_aussie_datetktbl);\n> if (tp == NULL)\n> tp = datebsearch(lowtoken, datetktbl, szdatetktbl);\n> }\n> else\n> tp = datebsearch(lowtoken, datetktbl, szdatetktbl);\n> \n> instead of modifying the lookup table on the fly.\n\nI thought about that but the use of the cache had me concerned,\nparticularly for modularity. This way there is only one table to query\nwith one call to the bsearch function.\n\nHowever, I can go with the two-table approach if people prefer it.\n\n> > I found I didn't need to clear the date cache.\n> \n> Hmm, are you sure about that? I'm not.\n\nI checked and it caches a pointer to the struct, not the values\nthemselves, and we don't change the structure, just the secondary values\nand not the key used by the bsearch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 12:26:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I found I didn't need to clear the date cache.\n>> \n>> Hmm, are you sure about that? I'm not.\n\n> I checked and it caches a pointer to the struct, not the values\n> themselves, and we don't change the structure, just the secondary values\n> and not the key used by the bsearch.\n\nNow I'm going to object LOUDLY. You cannot convince me that the above\nis a good implementation --- it's a complete crock, and will break the\ninstant someone looks at it sidewise.\n\nMy inclination would actually be to rip out the cache entirely. bsearch\nin a table this size is not so expensive that we need to bypass it, nor\nis it apparent that we are going to see lots of successive lookups for\nthe same keyword anyway. How long has that cache been in there, and\nwhat was the motivation for adding it to begin with?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Jun 2001 12:39:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I found I didn't need to clear the date cache.\n> >> \n> >> Hmm, are you sure about that? I'm not.\n> \n> > I checked and it caches a pointer to the struct, not the values\n> > themselves, and we don't change the structure, just the secondary values\n> > and not the key used by the bsearch.\n> \n> Now I'm going to object LOUDLY. You cannot convince me that the above\n> is a good implementation --- it's a complete crock, and will break the\n> instant someone looks at it sidewise.\n> \n> My inclination would actually be to rip out the cache entirely. bsearch\n> in a table this size is not so expensive that we need to bypass it, nor\n> is it apparent that we are going to see lots of successive lookups for\n> the same keyword anyway. How long has that cache been in there, and\n> what was the motivation for adding it to begin with?\n\nI see the CACHE coming in with:\n\n\t1.42 (thomas 16-Feb-00): #define USE_DATE_CACHE 1\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 12:52:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> > I checked and it caches a pointer to the struct, not the values\n> > themselves, and we don't change the structure, just the secondary values\n> > and not the key used by the bsearch.\n> \n> Now I'm going to object LOUDLY. You cannot convince me that the above\n> is a good implementation --- it's a complete crock, and will break the\n> instant someone looks at it sidewise.\n> \n> My inclination would actually be to rip out the cache entirely. bsearch\n> in a table this size is not so expensive that we need to bypass it, nor\n> is it apparent that we are going to see lots of successive lookups for\n> the same keyword anyway. How long has that cache been in there, and\n> what was the motivation for adding it to begin with?\n\nOK, what do we do with this patch now? We have several Australians who\nwant it and most like it as GUC rather than a configure option. I don't\nwant to add lots more code because I think the GUC capability is simple\nenough. I can remove the CACHE stuff but only if Thomas agrees.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 16:31:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "\n> <varlistentry>\n> + <term>--enable-australian-tz</term>\n> + <listitem>\n> + <para>\n> + Enables Australian timezone support. This changes the\n> interpretation\n> + of timezones in input date/time strings from US-centric to\n> + Australian-centric. Specifically, 'EST' is changed\n> from GMT-5 (US\n> + Eastern Standard Time) to GMT+10 (Australian Eastern\n> Standard Time)\n> + and 'CST' is changed from GMT-5:30 (US Central Standard Time) to\n> + GMT+10:30 (Australian Central Standard Time).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> <term>--enable-locale</term>\n> <listitem>\n> <para>\n\nWhat about us West Australians living in WST (Western Standard Time) ?\n\nChris\n\n",
"msg_date": "Wed, 13 Jun 2001 09:35:20 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Australian timezone configure option"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What about us West Australians living in WST (Western Standard Time) ?\n\nYou don't have a problem.\n\nIt's only the TZ abbreviations that conflict with US usages that need\nto be driven by a switch. The rest of them are just there...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Jun 2001 21:38:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option "
},
{
"msg_contents": "> \n> > <varlistentry>\n> > + <term>--enable-australian-tz</term>\n> > + <listitem>\n> > + <para>\n> > + Enables Australian timezone support. This changes the\n> > interpretation\n> > + of timezones in input date/time strings from US-centric to\n> > + Australian-centric. Specifically, 'EST' is changed\n> > from GMT-5 (US\n> > + Eastern Standard Time) to GMT+10 (Australian Eastern\n> > Standard Time)\n> > + and 'CST' is changed from GMT-5:30 (US Central Standard Time) to\n> > + GMT+10:30 (Australian Central Standard Time).\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + <varlistentry>\n> > <term>--enable-locale</term>\n> > <listitem>\n> > <para>\n> \n> What about us West Australians living in WST (Western Standard Time) ?\n\nThat is already in there. The setting is for names that conflict with\nother names:\n\n {\"wst\", TZ, 48}, /* West Australian Std Time */\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 21:43:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> On Mon, Jun 11, 2001 at 11:53:59PM -0400, Bruce Momjian wrote:\n> > > Hi,\n> > > \n> > > Being in Australia, it's always been a minor pain building the support\n> > > for Australian timezone rules by defining USE_AUSTRALIAN_RULES to the\n> > > compiler. Not to mention the not inconsiderable pain involved in pawing\n> > > through the code and documentation trying to work out why the timezones\n> > > were wrong in the first place.\n> > \n> > OK, this patch makes Australian_timezones a GUC option. It can be set\n> > anytime in psql. The code uses a static variable to check if the GUC\n> > setting has changed and adjust the C struct accordingly. I have also\n> > added code to allow the regression tests to pass even if postgresql.conf\n> > has australian_timezones defined.\n\nHere is a new version of the patch. Tom added callbacks to GUC boolean\nvariables so I was able to make a separate Australian lookup table and\nclear the cache if anyone changes the setting.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.67\ndiff -c -r1.67 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t2001/05/17 17:44:17\t1.67\n--- doc/src/sgml/runtime.sgml\t2001/06/13 17:11:04\n***************\n*** 1201,1206 ****\n--- 1201,1217 ----\n </listitem>\n </varlistentry>\n \n+ <term>AUSTRALIAN_TIMEZONES (<type>bool</type>)</term>\n+ <listitem>\n+ <para>\n+ If set to true, <literal>CST</literal>, <literal>EST</literal>,\n+ and <literal>SAT</literal> are interpreted as Australian\n+ timezones rather than as North American Central/Eastern\n+ Timezones and Saturday. 
The default is false.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ \n <varlistentry>\n <indexterm>\n <primary>SSL</primary>\nIndex: src/backend/utils/adt/datetime.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/datetime.c,v\nretrieving revision 1.64\ndiff -c -r1.64 datetime.c\n*** src/backend/utils/adt/datetime.c\t2001/05/03 22:53:07\t1.64\n--- src/backend/utils/adt/datetime.c\t2001/06/13 17:11:08\n***************\n*** 22,27 ****\n--- 22,28 ----\n #include <limits.h>\n \n #include \"miscadmin.h\"\n+ #include \"utils/guc.h\"\n #include \"utils/datetime.h\"\n \n static int DecodeNumber(int flen, char *field,\n***************\n*** 36,42 ****\n static datetkn *datebsearch(char *key, datetkn *base, unsigned int nel);\n static int\tDecodeDate(char *str, int fmask, int *tmask, struct tm * tm);\n \n- #define USE_DATE_CACHE 1\n #define ROUND_ALL 0\n \n static int\tDecodePosixTimezone(char *str, int *val);\n--- 37,42 ----\n***************\n*** 117,127 ****\n \t{\"cdt\", DTZ, NEG(30)},\t\t/* Central Daylight Time */\n \t{\"cet\", TZ, 6},\t\t\t\t/* Central European Time */\n \t{\"cetdst\", DTZ, 12},\t\t/* Central European Dayl.Time */\n- #if USE_AUSTRALIAN_RULES\n- \t{\"cst\", TZ, 63},\t\t\t/* Australia Eastern Std Time */\n- #else\n \t{\"cst\", TZ, NEG(36)},\t\t/* Central Standard Time */\n- #endif\n \t{DCURRENT, RESERV, DTK_CURRENT},\t/* \"current\" is always now */\n \t{\"dec\", MONTH, 12},\n \t{\"december\", MONTH, 12},\n--- 117,123 ----\n***************\n*** 134,144 ****\n \t{\"eet\", TZ, 12},\t\t\t/* East. 
Europe, USSR Zone 1 */\n \t{\"eetdst\", DTZ, 18},\t\t/* Eastern Europe */\n \t{EPOCH, RESERV, DTK_EPOCH}, /* \"epoch\" reserved for system epoch time */\n- #if USE_AUSTRALIAN_RULES\n- \t{\"est\", TZ, 60},\t\t\t/* Australia Eastern Std Time */\n- #else\n \t{\"est\", TZ, NEG(30)},\t\t/* Eastern Standard Time */\n- #endif\n \t{\"feb\", MONTH, 2},\n \t{\"february\", MONTH, 2},\n \t{\"fri\", DOW, 5},\n--- 130,136 ----\n***************\n*** 199,209 ****\n \t{\"pst\", TZ, NEG(48)},\t\t/* Pacific Standard Time */\n \t{\"sadt\", DTZ, 63},\t\t\t/* S. Australian Dayl. Time */\n \t{\"sast\", TZ, 57},\t\t\t/* South Australian Std Time */\n- #if USE_AUSTRALIAN_RULES\n- \t{\"sat\", TZ, 57},\n- #else\n \t{\"sat\", DOW, 6},\n- #endif\n \t{\"saturday\", DOW, 6},\n \t{\"sep\", MONTH, 9},\n \t{\"sept\", MONTH, 9},\n--- 191,197 ----\n***************\n*** 247,252 ****\n--- 235,250 ----\n \n static unsigned int szdatetktbl = sizeof datetktbl / sizeof datetktbl[0];\n \n+ /* Used for SET australian_timezones to override North American ones */\n+ static datetkn australian_datetktbl[] = {\n+ \t{\"cst\", TZ, 63},\t\t\t/* Australia Eastern Std Time */\n+ \t{\"est\", TZ, 60},\t\t\t/* Australia Eastern Std Time */\n+ \t{\"sat\", TZ, 57},\n+ };\n+ \n+ static unsigned int australian_szdatetktbl = sizeof australian_datetktbl /\n+ \t\t\t\t\t\t\t\t\t\t\t sizeof australian_datetktbl[0];\n+ \n static datetkn deltatktbl[] = {\n /*\t\ttext\t\t\ttoken\tlexval */\n \t{\"@\", IGNORE, 0},\t\t\t/* postgres relative time prefix */\n***************\n*** 327,339 ****\n \n static unsigned int szdeltatktbl = sizeof deltatktbl / sizeof deltatktbl[0];\n \n- #if USE_DATE_CACHE\n datetkn *datecache[MAXDATEFIELDS] = {NULL};\n \n datetkn *deltacache[MAXDATEFIELDS] = {NULL};\n \n- #endif\n- \n \n /*\n * Calendar time to Julian date conversions.\n--- 325,334 ----\n***************\n*** 1618,1635 ****\n \tint\t\t\ttype;\n \tdatetkn *tp;\n \n- #if USE_DATE_CACHE\n \tif ((datecache[field] != NULL)\n \t\t&& (strncmp(lowtoken, 
datecache[field]->token, TOKMAXLEN) == 0))\n \t\ttp = datecache[field];\n \telse\n \t{\n! #endif\n! \t\ttp = datebsearch(lowtoken, datetktbl, szdatetktbl);\n! #if USE_DATE_CACHE\n \t}\n \tdatecache[field] = tp;\n- #endif\n \tif (tp == NULL)\n \t{\n \t\ttype = IGNORE;\n--- 1613,1631 ----\n \tint\t\t\ttype;\n \tdatetkn *tp;\n \n \tif ((datecache[field] != NULL)\n \t\t&& (strncmp(lowtoken, datecache[field]->token, TOKMAXLEN) == 0))\n \t\ttp = datecache[field];\n \telse\n \t{\n! \t\ttp = NULL;\n! \t\tif (Australian_timezones)\n! \t\t\ttp = datebsearch(lowtoken, australian_datetktbl,\n! \t\t\t\t\t\t\t\t\t australian_szdatetktbl);\n! \t\tif (!tp)\n! \t\t\ttp = datebsearch(lowtoken, datetktbl, szdatetktbl);\n \t}\n \tdatecache[field] = tp;\n \tif (tp == NULL)\n \t{\n \t\ttype = IGNORE;\n***************\n*** 1937,1954 ****\n \tint\t\t\ttype;\n \tdatetkn *tp;\n \n- #if USE_DATE_CACHE\n \tif ((deltacache[field] != NULL)\n \t\t&& (strncmp(lowtoken, deltacache[field]->token, TOKMAXLEN) == 0))\n \t\ttp = deltacache[field];\n \telse\n \t{\n- #endif\n \t\ttp = datebsearch(lowtoken, deltatktbl, szdeltatktbl);\n- #if USE_DATE_CACHE\n \t}\n \tdeltacache[field] = tp;\n- #endif\n \tif (tp == NULL)\n \t{\n \t\ttype = IGNORE;\n--- 1933,1946 ----\n***************\n*** 2455,2457 ****\n--- 2447,2458 ----\n \n \treturn 0;\n }\t/* EncodeTimeSpan() */\n+ \n+ \n+ void ClearDateCache(bool dummy)\n+ {\n+ \tint i;\n+ \n+ \tfor (i=0; i < MAXDATEFIELDS; i++)\n+ \t\tdatecache[i] = NULL;\n+ }\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/guc.c,v\nretrieving revision 1.38\ndiff -c -r1.38 guc.c\n*** src/backend/utils/misc/guc.c\t2001/06/12 22:54:06\t1.38\n--- src/backend/utils/misc/guc.c\t2001/06/13 17:11:17\n***************\n*** 33,38 ****\n--- 33,39 ----\n #include \"parser/parse_expr.h\"\n #include \"storage/proc.h\"\n #include \"tcop/tcopprot.h\"\n+ #include 
\"utils/datetime.h\"\n \n \n /* XXX these should be in other modules' header files */\n***************\n*** 69,74 ****\n--- 70,77 ----\n \n bool\t\tSQL_inheritance = true;\n \n+ bool\t\tAustralian_timezones = false;\n+ \n #ifndef PG_KRB_SRVTAB\n #define PG_KRB_SRVTAB \"\"\n #endif\n***************\n*** 229,234 ****\n--- 232,240 ----\n \n \t{\"sql_inheritance\", PGC_USERSET, &SQL_inheritance, true, NULL},\n \n+ \t{\"australian_timezones\", PGC_USERSET, &Australian_timezones,\n+ \tfalse, ClearDateCache},\n+ \n \t{\"fixbtree\", PGC_POSTMASTER, &FixBTree, true, NULL},\n \n \t{NULL, 0, NULL, false, NULL}\n***************\n*** 327,334 ****\n \tDEFAULT_CPU_OPERATOR_COST, 0, DBL_MAX, NULL, NULL},\n \n \t{\"geqo_selection_bias\", PGC_USERSET, &Geqo_selection_bias,\n! \t DEFAULT_GEQO_SELECTION_BIAS, MIN_GEQO_SELECTION_BIAS,\n! \t MAX_GEQO_SELECTION_BIAS, NULL, NULL},\n \n \t{NULL, 0, NULL, 0.0, 0.0, 0.0, NULL, NULL}\n };\n--- 333,340 ----\n \tDEFAULT_CPU_OPERATOR_COST, 0, DBL_MAX, NULL, NULL},\n \n \t{\"geqo_selection_bias\", PGC_USERSET, &Geqo_selection_bias,\n! \tDEFAULT_GEQO_SELECTION_BIAS, MIN_GEQO_SELECTION_BIAS,\n! \tMAX_GEQO_SELECTION_BIAS, NULL, NULL},\n \n \t{NULL, 0, NULL, 0.0, 0.0, 0.0, NULL, NULL}\n };\n***************\n*** 360,367 ****\n \t\"\", NULL, NULL},\n \n \t{\"wal_sync_method\", PGC_SIGHUP, &XLOG_sync_method,\n! \t\tXLOG_sync_method_default,\n! \tcheck_xlog_sync_method, assign_xlog_sync_method},\n \n \t{NULL, 0, NULL, NULL, NULL, NULL}\n };\n--- 366,373 ----\n \t\"\", NULL, NULL},\n \n \t{\"wal_sync_method\", PGC_SIGHUP, &XLOG_sync_method,\n! \tXLOG_sync_method_default, check_xlog_sync_method,\n! \tassign_xlog_sync_method},\n \n \t{NULL, 0, NULL, NULL, NULL, NULL}\n };\n***************\n*** 956,961 ****\n--- 962,968 ----\n \t\tcase PGC_BOOL:\n \t\t\tval = *((struct config_bool *) record)->variable ? 
\"on\" : \"off\";\n \t\t\tbreak;\n+ \n \t\tcase PGC_INT:\n \t\t\tsnprintf(buffer, sizeof(buffer), \"%d\",\n \t\t\t\t\t *((struct config_int *) record)->variable);\nIndex: src/backend/utils/misc/postgresql.conf.sample\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v\nretrieving revision 1.11\ndiff -c -r1.11 postgresql.conf.sample\n*** src/backend/utils/misc/postgresql.conf.sample\t2001/05/07 23:32:55\t1.11\n--- src/backend/utils/misc/postgresql.conf.sample\t2001/06/13 17:11:17\n***************\n*** 85,108 ****\n \n \n #\n- #\tInheritance\n- #\n- #sql_inheritance = true\n- \n- \n- #\n- #\tDeadlock\n- #\n- #deadlock_timeout = 1000\n- \n- \n- #\n- #\tExpression Depth Limitation\n- #\n- #max_expr_depth = 10000 # min 10\n- \n- \n- #\n #\tWrite-ahead log (WAL)\n #\n #wal_buffers = 8 # min 4\n--- 85,90 ----\n***************\n*** 172,174 ****\n--- 154,166 ----\n #trace_lock_oidmin = 16384\n #trace_lock_table = 0\n #endif\n+ \n+ \n+ #\n+ #\tMisc\n+ #\n+ #sql_inheritance = true\n+ #australian_timezones = false\n+ #deadlock_timeout = 1000\n+ #max_expr_depth = 10000 # min 10\n+ \nIndex: src/include/utils/datetime.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/datetime.h,v\nretrieving revision 1.18\ndiff -c -r1.18 datetime.h\n*** src/include/utils/datetime.h\t2001/05/03 22:53:07\t1.18\n--- src/include/utils/datetime.h\t2001/06/13 17:11:21\n***************\n*** 182,187 ****\n--- 182,188 ----\n \tchar\t\tvalue;\t\t\t/* this may be unsigned, alas */\n } datetkn;\n \n+ extern datetkn datetktbl[];\n \n /* TMODULO()\n * Macro to replace modf(), which is broken on some platforms.\n***************\n*** 264,269 ****\n--- 265,271 ----\n \n extern int\tDecodeSpecial(int field, char *lowtoken, int *val);\n extern int\tDecodeUnits(int field, char *lowtoken, int *val);\n+ extern void 
ClearDateCache(bool);\n \n extern int\tj2day(int jd);\n \nIndex: src/include/utils/guc.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/utils/guc.h,v\nretrieving revision 1.8\ndiff -c -r1.8 guc.h\n*** src/include/utils/guc.h\t2001/06/12 22:54:06\t1.8\n--- src/include/utils/guc.h\t2001/06/13 17:11:21\n***************\n*** 70,74 ****\n--- 70,75 ----\n extern bool Show_btree_build_stats;\n \n extern bool SQL_inheritance;\n+ extern bool Australian_timezones;\n \n #endif\t /* GUC_H */\nIndex: src/test/regress/expected/horology-no-DST-before-1970.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology-no-DST-before-1970.out,v\nretrieving revision 1.12\ndiff -c -r1.12 horology-no-DST-before-1970.out\n*** src/test/regress/expected/horology-no-DST-before-1970.out\t2001/04/06 05:50:25\t1.12\n--- src/test/regress/expected/horology-no-DST-before-1970.out\t2001/06/13 17:11:28\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\nIndex: src/test/regress/expected/horology-solaris-1947.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology-solaris-1947.out,v\nretrieving revision 1.10\ndiff -c -r1.10 horology-solaris-1947.out\n*** src/test/regress/expected/horology-solaris-1947.out\t2001/04/06 05:50:25\t1.10\n--- src/test/regress/expected/horology-solaris-1947.out\t2001/06/13 17:11:29\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time 
\n ------------------------------\nIndex: src/test/regress/expected/horology.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/horology.out,v\nretrieving revision 1.23\ndiff -c -r1.23 horology.out\n*** src/test/regress/expected/horology.out\t2001/04/06 05:50:25\t1.23\n--- src/test/regress/expected/horology.out\t2001/06/13 17:11:31\n***************\n*** 4,9 ****\n--- 4,11 ----\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n Date + Time \n ------------------------------\nIndex: src/test/regress/expected/timestamp.out\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/timestamp.out,v\nretrieving revision 1.12\ndiff -c -r1.12 timestamp.out\n*** src/test/regress/expected/timestamp.out\t2001/05/03 19:00:37\t1.12\n--- src/test/regress/expected/timestamp.out\t2001/06/13 17:11:37\n***************\n*** 4,9 ****\n--- 4,11 ----\n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n True \n ------\nIndex: src/test/regress/sql/horology.sql\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/sql/horology.sql,v\nretrieving revision 1.14\ndiff -c -r1.14 horology.sql\n*** src/test/regress/sql/horology.sql\t2001/04/06 05:50:29\t1.14\n--- src/test/regress/sql/horology.sql\t2001/06/13 17:11:38\n***************\n*** 1,10 ****\n --\n -- HOROLOGY\n --\n- \n --\n -- date, time arithmetic\n --\n \n SELECT date '1981-02-03' + time '04:05:06' AS 
\"Date + Time\";\n \n--- 1,11 ----\n --\n -- HOROLOGY\n --\n --\n -- date, time arithmetic\n --\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n \n SELECT date '1981-02-03' + time '04:05:06' AS \"Date + Time\";\n \nIndex: src/test/regress/sql/timestamp.sql\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/sql/timestamp.sql,v\nretrieving revision 1.7\ndiff -c -r1.7 timestamp.sql\n*** src/test/regress/sql/timestamp.sql\t2000/11/25 05:00:33\t1.7\n--- src/test/regress/sql/timestamp.sql\t2001/06/13 17:11:38\n***************\n*** 1,10 ****\n --\n -- DATETIME\n --\n- \n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n \n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";\n--- 1,11 ----\n --\n -- DATETIME\n --\n -- Shorthand values\n -- Not directly usable for regression testing since these are not constants.\n -- So, just try to test parser and hope for the best - thomas 97/04/26\n+ -- needed so tests pass\n+ SET australian_timezones = 'off';\n \n SELECT (timestamp 'today' = (timestamp 'yesterday' + interval '1 day')) as \"True\";\n SELECT (timestamp 'today' = (timestamp 'tomorrow' - interval '1 day')) as \"True\";",
"msg_date": "Wed, 13 Jun 2001 13:16:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> Now I'm going to object LOUDLY. You cannot convince me that the above\n> is a good implementation --- it's a complete crock, and will break the\n> instant someone looks at it sidewise.\n\nBut it hasn't broken in years of use and maintenance, so that does not\nsound like an issue. Eventually, we may want full localization with\ncharacter sets *and* date/time conventions, and all of this can be\nrethought at that time.\n\n> My inclination would actually be to rip out the cache entirely. bsearch\n> in a table this size is not so expensive that we need to bypass it, nor\n> is it apparent that we are going to see lots of successive lookups for\n> the same keyword anyway. How long has that cache been in there, and\n> what was the motivation for adding it to begin with?\n\nThe cache lookup came from the gods themselves at Berkeley ;)\n\n - Thomas\n",
"msg_date": "Thu, 14 Jun 2001 00:29:18 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "On Wed, Jun 13, 2001 at 01:16:51PM -0400, Bruce Momjian wrote:\n> > On Mon, Jun 11, 2001 at 11:53:59PM -0400, Bruce Momjian wrote:\n> > > > Hi,\n> > > > \n> > > > Being in Australia, it's always been a minor pain building the support\n> > > > for Australian timezone rules by defining USE_AUSTRALIAN_RULES to the\n> > > > compiler. Not to mention the not inconsiderable pain involved in pawing\n> > > > through the code and documentation trying to work out why the timezones\n> > > > were wrong in the first place.\n> > > \n> > > OK, this patch makes Australian_timezones a GUC option. It can be set\n> > > anytime in psql. The code uses a static variable to check if the GUC\n> > > setting has changed and adjust the C struct accordingly. I have also\n> > > added code to allow the regression tests to pass even if postgresql.conf\n> > > has australian_timezones defined.\n> \n> Here is a new version of the patch. Tom added callbacks to GUC boolean\n> variables so I was able to make a separate Australian lookup table and\n> clear the cache if anyone changes the setting.\n\nWith the new Australian timezone patch requiring Tom's callback patch\nwe're diverging more and more from the standard 7.1.2 code base, so for\nmy own 7.1.2 systems I'm going to revert to my original 'configure'\npatch.\n\nThanks for your efforts on this (and everything else of course), and\nlooking forward to the 7.2 code!\n\nCheers,\n\nChris,\nOnTheNet\n",
"msg_date": "Thu, 14 Jun 2001 10:42:08 +1000",
"msg_from": "Chris Dunlop <chris@onthe.net.au>",
"msg_from_op": true,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> > My inclination would actually be to rip out the cache entirely. bsearch\n> > in a table this size is not so expensive that we need to bypass it, nor\n> > is it apparent that we are going to see lots of successive lookups for\n> > the same keyword anyway. How long has that cache been in there, and\n> > what was the motivation for adding it to begin with?\n> \n> The cache lookup came from the gods themselves at Berkeley ;)\n\nWow, I didn't know that. I like the phrase.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Jun 2001 20:56:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "> > Here is a new version of the patch. Tom added callbacks to GUC boolean\n> > variables so I was able to make a separate Australian lookup table and\n> > clear the cache if anyone changes the setting.\n> \n> With the new Australian timezone patch requiring Tom's callback patch\n> we're diverging more and more from the standard 7.1.2 code base, so for\n> my own 7.1.2 systems I'm going to revert to my original 'configure'\n> patch.\n> \n> Thanks for your efforts on this (and everything else of course), and\n> looking forward to the 7.2 code!\n\nYes, I was concerned about that. If you need GUC, the second patch I\nsubmitted with fix for static and cache lookup should be fine in 7.1.X.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Jun 2001 20:58:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n>> Now I'm going to object LOUDLY. You cannot convince me that the above\n>> is a good implementation --- it's a complete crock, and will break the\n>> instant someone looks at it sidewise.\n\n> But it hasn't broken in years of use and maintenance, so that does not\n> sound like an issue.\n\nUh, I was complaining about Bruce's idea of scribbling on the datetkn\ntable and expecting that to change what the datecache records. That's\nnot something we've been doing for years and years, and no I don't think\nit's maintainable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Jun 2001 21:20:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option "
},
{
"msg_contents": "> Uh, I was complaining about Bruce's idea of scribbling on the datetkn\n> table and expecting that to change what the datecache records. That's\n> not something we've been doing for years and years, and no I don't think\n> it's maintainable.\n\nThanks for the clarification on what you actually meant.\n\n - Thomas\n",
"msg_date": "Thu, 14 Jun 2001 13:54:59 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Australian timezone configure option"
}
] |
[
{
"msg_contents": "Hi,\n\nNew version of contrib-intarray for postgresql version 7.1 and above\nis available from http://www.sai.msu.su/~megera/postgres/gist/\n\nChanges:\n 1.Support for new interface of function calling (7.1 and above)\n 2.Optimization for gist__intbig_ops (special treating of degenerated\n signatures)\n\nThis version is independent from our work on multi-key GiST, so\nif 7.1.3 is planned please use it. Also, I'd like to see it in\ncurrent CVS to be in sync with development\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Thu, 31 May 2001 18:16:56 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "New version of contrib-intarray "
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Hi,\n> \n> New version of contrib-intarray for postgresql version 7.1 and above\n> is available from http://www.sai.msu.su/~megera/postgres/gist/\n> \n> Changes:\n> 1.Support for new interface of function calling (7.1 and above)\n> 2.Optimization for gist__intbig_ops (special treating of degenerated\n> signatures)\n> \n> This version is independent from our work on multi-key GiST, so\n> if 7.1.3 is planned please use it. Also, I'd like to see it in\n> current CVS to be in sync with development\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 31 May 2001 11:57:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New version of contrib-intarray"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> New version of contrib-intarray for postgresql version 7.1 and above\n> is available from http://www.sai.msu.su/~megera/postgres/gist/\n\nI'm going to be making commits later today that clean up the \"char*\nthat should be Datum\" ugliness in GiST. I think the intarray part\nof these changes overlap what you've done, so we're facing a bit of\na merge problem. You should have let me know that you had more stuff\nin the pipeline.\n\n> This version is independent from our work on multi-key GiST, so\n> if 7.1.3 is planned please use it.\n\nI do not think we should be committing such changes into the 7.1.*\nbranch. At this point only critical bug fixes are going to go into\nthat branch.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 12:14:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New version of contrib-intarray "
},
{
"msg_contents": "On Thu, 31 May 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > New version of contrib-intarray for postgresql version 7.1 and above\n> > is available from http://www.sai.msu.su/~megera/postgres/gist/\n>\n> I'm going to be making commits later today that clean up the \"char*\n> that should be Datum\" ugliness in GiST. I think the intarray part\n> of these changes overlap what you've done, so we're facing a bit of\n> a merge problem. You should have let me know that you had more stuff\n> in the pipeline.\n>\n\nWe have been waiting for applying our patch to current sources\nto be in sync. We'll change sources of new version of contrib-intarray\ntaking into account your comments.\n\nOur TODO:\n\n 1. Implementation some btree_ops using GiST\n But we need a solution of problem with different types of\n key and column compatible with multi-key GiST index\n\n 2. After resolving the problem 1. we need to remove workaround in\n rtree GiSt code\n\n 3. regression test for built-in rtree and based on it regression test for\n GiST.\n\n 4. documentation and Guide for programmers\n\n\nWe badly need a solution from item 1 !!! We don't know internals of postgres\nenough to resolve this problem ourselves. Do you have any idea ?\n\n\n> > This version is independent from our work on multi-key GiST, so\n> > if 7.1.3 is planned please use it.\n>\n> I do not think we should be committing such changes into the 7.1.*\n> branch. At this point only critical bug fixes are going to go into\n> that branch.\n\nagreed. we'll put new versions on our page.\n\n>\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Fri, 1 Jun 2001 12:17:28 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: New version of contrib-intarray "
},
{
"msg_contents": "On Fri, 1 Jun 2001, Oleg Bartunov wrote:\n\n> On Thu, 31 May 2001, Tom Lane wrote:\n>\n> > Oleg Bartunov <oleg@sai.msu.su> writes:\n> > > New version of contrib-intarray for postgresql version 7.1 and above\n> > > is available from http://www.sai.msu.su/~megera/postgres/gist/\n> >\n> > I'm going to be making commits later today that clean up the \"char*\n> > that should be Datum\" ugliness in GiST. I think the intarray part\n> > of these changes overlap what you've done, so we're facing a bit of\n> > a merge problem. You should have let me know that you had more stuff\n> > in the pipeline.\n> >\n>\n> We have been waiting for applying our patch to current sources\n> to be in sync. We'll change sources of new version of contrib-intarray\n> taking into account your comments.\n>\n\nTom, as promised, I attached patch to current CVS contrib/intarray -\nit's reimplemented to use function interface version 1 and special\ntreating of degenerated signatures.\n\nCorresponding version for 7.1.X is available from\nhttp://www.sai.msu.su/~megera/postgres/gist/\nAs we discussed previously we don't insist to include it into\n7.1.X branch.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Fri, 1 Jun 2001 19:46:57 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Re: New version of contrib-intarray "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n>> We have been waiting for applying our patch to current sources\n>> to be in sync. We'll change sources of new version of contrib-intarray\n>> taking into account your comments.\n\n> Tom, as promised, I attached patch to current CVS contrib/intarray -\n> it's reimplemented to use function interface version 1 and special\n> treating of degenerated signatures.\n\nApplied.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 10 Jun 2001 22:31:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: New version of contrib-intarray "
}
] |
[
{
"msg_contents": "One more feature for discussion :-)\n\n In the next couple of hours (at least tomorrow) I would be\n ready to commit the backend changes for table-/index-access\n statistics and current backend activity views.\n\n Should I apply the patches or provide a separate patch for\n review first?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 31 May 2001 12:20:38 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Access statistics"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> In the next couple of hours (at least tomorrow) I would be\n> ready to commit the backend changes for table-/index-access\n> statistics and current backend activity views.\n> Should I apply the patches or provide a separate patch for\n> review first?\n\nConsidering that you've not offered any detailed information about\nwhat you plan to do (AFAIR), a patch for review first would be polite ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 12:53:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Access statistics "
},
{
"msg_contents": "> One more feature for discussion :-)\n> \n> In the next couple of hours (at least tomorrow) I would be\n> ready to commit the backend changes for table-/index-access\n> statistics and current backend activity views.\n> \n> Should I apply the patches or provide a separate patch for\n> review first?\n\nI like doing a cvs diff -c and throwing the patch to PATCHES just before\ncommit. That way, people can see may changes easier and find problems. \nOf course, if I am at all unsure, I post to patches and wait 2 days.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 31 May 2001 13:00:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Access statistics"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > In the next couple of hours (at least tomorrow) I would be\n> > ready to commit the backend changes for table-/index-access\n> > statistics and current backend activity views.\n> > Should I apply the patches or provide a separate patch for\n> > review first?\n>\n> Considering that you've not offered any detailed information about\n> what you plan to do (AFAIR), a patch for review first would be polite ...\n\n We had that discussion a couple of weeks ago down to if it's\n better to use UNIX or INET domain UDP sockets. But I expected\n it not to be detailed enough for our current quality level\n :-)\n\n It's incomplete anyway since the per database configuration\n in pg_database is missing and some other details I need to\n tidy up. So I'll wrap up a patch tomorrow.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 31 May 2001 15:10:44 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Access statistics"
},
{
"msg_contents": "Jan Wieck writes:\n\n> In the next couple of hours (at least tomorrow) I would be\n> ready to commit the backend changes for table-/index-access\n> statistics and current backend activity views.\n>\n> Should I apply the patches or provide a separate patch for\n> review first?\n\nMaybe you could describe what it's going to do and how it's going to work.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 31 May 2001 23:05:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Access statistics"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Jan Wieck writes:\n>\n> > In the next couple of hours (at least tomorrow) I would be\n> > ready to commit the backend changes for table-/index-access\n> > statistics and current backend activity views.\n> >\n> > Should I apply the patches or provide a separate patch for\n> > review first?\n>\n> Maybe you could describe what it's going to do and how it's going to work.\n\n Real programmers don't comment - if it was hard to write it\n should be hard to read :-)\n\n So outing myself not beeing a *real programmer*, this is what\n I have so far:\n\n * On startup the postmaster creates an INET domain UDP socket\n and bind(2)'s it to localhost:0, meaning the kernel will\n assign a yet unused, unprivileged port that could be seen\n with getsockaddr(2).\n\n It then starts two background processes of which one is\n simply a wraparound buffer doing recvfrom(2) on the socket,\n checking that the source address of the received packets is\n the sockets own address (!) and forwarding approved ones\n over a pipe to the second one, discribed later.\n\n * Backends call some collector functions at various places\n now (these will finally be macros), that count up table\n scans, tuples returned by scans, buffer fetches/hits and\n the like. At the beginning of a statement the backends send\n a message telling the first couple of hundred bytes of the\n querystring and after the statement is done (just before\n getting ready for the next command) they send the collected\n access numbers.\n\n Tables, indexes etc. 
in these statistics are identified by\n OID, so the data doesn't tell much so far.\n\n * The second background process fired by the postmaster\n collects these numbers into hashtables, and as long as it\n receives messages it'll write out a summary file every 500\n or so milliseconds, giving a snapshot of the current stats.\n Using select(2) with timeouts ensures that a completely idle\n DB instance doesn't waste a single CPU cycle or any IO to write\n these snapshots.\n\n On startup it tries to read the last snapshot file in, so\n the collected statistics survive a postmaster restart.\n\n Vacuum reads the file too and sends bulk delete messages\n for objects that are gone. So the stats don't grow\n infinitely.\n\n * A bunch of new builtin functions gain access to the\n snapshot file. At the first call of one of these functions\n during a transaction, the backend will read the current\n file and return the numbers from memory thereafter.\n\n Based on these functions, a couple of views can present the\n collected stats. Information from the database's system\n catalog is of course required to identify the objects in\n the stats, but I think that information should only be\n visible to someone who has identified herself as a valid DB\n user anyway.\n\n The visibility of querystrings (this info is available\n cross-DB) is restricted to DB superusers.\n\n There has been discussion already about using an INET vs.\n UNIX UDP socket for the communication. At least for Linux I\n found INET to be the most effective way of communicating. And\n as for the security concerns: if someone other than root can really\n send packets to that socket that show up with a source\n address of 127.0.0.1:n, where n is a port number actually\n occupied by your own socket, be sure you'll have more severe\n problems than modified access statistics.\n\n The views should be considered examples. 
The final naming and\n layout is subject for discussion.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 1 Jun 2001 09:31:05 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Access statistics"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> So outing myself not beeing a *real programmer*, this is what\n> I have so far:\n\nHmm ... what is the performance of all this like? Seems like a lot\nof overhead. Can it be turned off?\n\n> * Backends call some collector functions at various places\n> now (these will finally be macros), that count up table\n> scans, tuples returned by scans, buffer fetches/hits and\n> the like.\n\nHave you removed the existing stats-gathering support\n(backend/access/heap/stats.c and so on)? That would buy back\nat least a few of the cycles involved ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Jun 2001 10:08:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Access statistics "
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > So outing myself not beeing a *real programmer*, this is what\n> > I have so far:\n>\n> Hmm ... what is the performance of all this like? Seems like a lot\n> of overhead. Can it be turned off?\n\n Current performance loss is about 2-4% wallclock measured. I\n expect it to become better when turning some of the functions\n into macros.\n\n The plan is to add another column to pg_database that can be\n used to turn it on/off on a per database level. Backends just\n decide at startup if they collect and send for their session\n lifetime.\n\n> > * Backends call some collector functions at various places\n> > now (these will finally be macros), that count up table\n> > scans, tuples returned by scans, buffer fetches/hits and\n> > the like.\n>\n> Have you removed the existing stats-gathering support\n> (backend/access/heap/stats.c and so on)? That would buy back\n> at least a few of the cycles involved ...\n\n Not sure if we really should. Let's later decide if it's\n really obsolete.\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 1 Jun 2001 10:34:36 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Access statistics"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> Tom Lane wrote:\n>> Have you removed the existing stats-gathering support\n>> (backend/access/heap/stats.c and so on)? That would buy back\n>> at least a few of the cycles involved ...\n\n> Not sure if we really should. Let's later decide if it's\n> really obsolete.\n\nConsidering that Bruce long ago ifdef'd out all the code that could\nactually *do* anything with those stats (like print them), I'd say\nit's obsolete. In any case, it's too confusing to have two sets of\nstats-gathering code in there. I vote for getting rid of the old\nstuff.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Jun 2001 10:42:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Access statistics "
},
{
"msg_contents": "> Tom Lane wrote:\n> > Jan Wieck <JanWieck@Yahoo.com> writes:\n> > > So outing myself not beeing a *real programmer*, this is what\n> > > I have so far:\n> >\n> > Hmm ... what is the performance of all this like? Seems like a lot\n> > of overhead. Can it be turned off?\n> \n> Current performance loss is about 2-4% wallclock measured. I\n> expect it to become better when turning some of the functions\n> into macros.\n\nAt 2-4%, I assume it is not enabled by default. I can see the query\nstring part being enabled by default though.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Jun 2001 12:20:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Access statistics"
},
{
"msg_contents": "> Jan Wieck <JanWieck@yahoo.com> writes:\n> > Tom Lane wrote:\n> >> Have you removed the existing stats-gathering support\n> >> (backend/access/heap/stats.c and so on)? That would buy back\n> >> at least a few of the cycles involved ...\n> \n> > Not sure if we really should. Let's later decide if it's\n> > really obsolete.\n> \n> Considering that Bruce long ago ifdef'd out all the code that could\n> actually *do* anything with those stats (like print them), I'd say\n> it's obsolete. In any case, it's too confusing to have two sets of\n> stats-gathering code in there. I vote for getting rid of the old\n> stuff.\n\nI agree. Rip away.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Jun 2001 12:23:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Access statistics"
}
] |
[
{
"msg_contents": "Hi All,\n\nI'm developing (currently in pre-alfa stage) a Acucobol interface for the \nPostgresql.\nThe Acucobol runtime have a generic FS API interface that handle the work \nwith the\nrecord oriented files, defining the open, close, read, write and so on low \nlevel function I can\nextend the runtime to talk with any file and database.\n\nMy current work translate each Acucobol FS command in a relative Postgresql \nquery and\nthe returned tuple will be translated in a record oriented view.\nAfter some performance tests I've notice that this path have much overhead \nand because\nthis I was thinking to redesign the interface.\n\nMy first think was to bypass the SQL translation and use the Postgresql low \nlevel routines.\nI need to see the tables as record oriented archive, so I can scan \nsequentially (forward and\nbackward) each record, lock/unlock it, insert and delete it and start to \nread the records with\na match of a specific key.\n\nDoes anyone know where can I start to search/read/learn/study some \ndocument/code of the\nPostgresql low level routines ?\n\nIf need some detail, please ask ;-)!\n\nThanks in advance.\n\n\nRoberto Fichera.\n\n",
"msg_date": "Thu, 31 May 2001 18:26:17 +0200",
"msg_from": "Roberto Fichera <robyf@tekno-soft.it>",
"msg_from_op": true,
"msg_subject": "Acucobol interface"
},
{
"msg_contents": "Roberto Fichera <robyf@tekno-soft.it> writes:\n> My first think was to bypass the SQL translation and use the Postgresql low \n> level routines.\n> I need to see the tables as record oriented archive, so I can scan \n> sequentially (forward and\n> backward) each record, lock/unlock it, insert and delete it and start to \n> read the records with\n> a match of a specific key.\n\nI don't think you want an SQL database at all. Possibly something like\nSleepycat's Berkeley DB package is closer to what you are looking for...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 18:13:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Acucobol interface "
},
{
"msg_contents": "At 18.13 05/06/01 -0400, Tom Lane wrote:\n\n>Roberto Fichera <robyf@tekno-soft.it> writes:\n> > My first think was to bypass the SQL translation and use the Postgresql \n> low\n> > level routines.\n> > I need to see the tables as record oriented archive, so I can scan\n> > sequentially (forward and\n> > backward) each record, lock/unlock it, insert and delete it and start to\n> > read the records with\n> > a match of a specific key.\n>\n>I don't think you want an SQL database at all. Possibly something like\n>Sleepycat's Berkeley DB package is closer to what you are looking for...\n>\n> regards, tom lane\n\nI know the Sleepycat's Berkelay DB packages but isn't what I need.\nI need a relational database that can be used outside our Acucobol program,\nlike Excel, Access, Apache and in general a SQL view of our data for external\nanalysis and presentation. This is why I'm thinking to use SQL and in \nparticular\nthe PostgreSQL. Currently there is only one direct interface from Acucobol and\na SQL server and was developed by DBMaker for their SQL server, but have\nsome limitation that I want bypass.\n\nregards,\n\n\nRoberto Fichera.\n\n",
"msg_date": "Wed, 06 Jun 2001 09:46:31 +0200",
"msg_from": "Roberto Fichera <kernel@tekno-soft.it>",
"msg_from_op": false,
"msg_subject": "Re: Acucobol interface "
},
{
"msg_contents": "Roberto Fichera wrote:\n> \n> Hi All,\n> \n> I'm developing (currently in pre-alfa stage) a Acucobol interface for the\n> Postgresql.\n> The Acucobol runtime have a generic FS API interface that handle the work\n> with the\n> record oriented files, defining the open, close, read, write and so on low\n> level function I can\n> extend the runtime to talk with any file and database.\n> \n> My current work translate each Acucobol FS command in a relative Postgresql\n> query and\n> the returned tuple will be translated in a record oriented view.\n> After some performance tests I've notice that this path have much overhead\n> and because\n> this I was thinking to redesign the interface.\n> \n> My first think was to bypass the SQL translation and use the Postgresql low\n> level routines.\n> I need to see the tables as record oriented archive, so I can scan\n> sequentially (forward and\n> backward) each record, lock/unlock it, insert and delete it and start to\n> read the records with\n> a match of a specific key.\n> \n> Does anyone know where can I start to search/read/learn/study some\n> document/code of the\n> Postgresql low level routines ?\n> \n> If need some detail, please ask ;-)!\n> \n> Thanks in advance.\n> \n> Roberto Fichera.\n\nWhat you are looking for is a very powerful database back-end, as Tom Lane\nsuggests, something like Berkeley DB might do, but then what you want is a SQL\ninterface over that.\n\nI am reticent to admit that I have done a little COBOL and the interface for\ndata paradigms is very good for a dBase like package. If you can live without a\nclient/server interface ala Postgres, and can live with a file based access\nmethodology, then what you want is doable.\n\nI'm not aware what platform you wish to run your program, I am assuming\nWindows. The old dBase format is currently being used under the name \"xbase.\"\nThere are many libraries that conform to this file format and offer the type of\naccess which you wish to have. 
On top of that, there are ODBC drivers (in UNIX\nand Windows, btw) for these xBase files.\n\nYou write acucobol extensions using some generic xbase access layer, and use\nthe ODBC xbase driver for applications like Access and Excel.\n\nYou'll have to sort out all the issues like concurrent access, and stuff like\nthat, but it should come pretty close to what you want to do.\n",
"msg_date": "Thu, 07 Jun 2001 08:03:55 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Acucobol interface"
},
{
"msg_contents": "At 08.03 07/06/01 -0400, mlw wrote:\n>Roberto Fichera wrote:\n> >\n> > Hi All,\n> >\n> > I'm developing (currently in pre-alfa stage) a Acucobol interface for the\n> > Postgresql.\n> > The Acucobol runtime have a generic FS API interface that handle the work\n> > with the\n> > record oriented files, defining the open, close, read, write and so on low\n> > level function I can\n> > extend the runtime to talk with any file and database.\n> >\n> > My current work translate each Acucobol FS command in a relative Postgresql\n> > query and\n> > the returned tuple will be translated in a record oriented view.\n> > After some performance tests I've notice that this path have much overhead\n> > and because\n> > this I was thinking to redesign the interface.\n> >\n> > My first think was to bypass the SQL translation and use the Postgresql low\n> > level routines.\n> > I need to see the tables as record oriented archive, so I can scan\n> > sequentially (forward and\n> > backward) each record, lock/unlock it, insert and delete it and start to\n> > read the records with\n> > a match of a specific key.\n> >\n> > Does anyone know where can I start to search/read/learn/study some\n> > document/code of the\n> > Postgresql low level routines ?\n> >\n> > If need some detail, please ask ;-)!\n> >\n> > Thanks in advance.\n> >\n> > Roberto Fichera.\n>\n>What you are looking for is a very powerful database back-end, as Tom Lane\n>suggests, something like Berkeley DB might do, but then what you want is a SQL\n>interface over that.\n\nI've already evaluated the Berkeley DB interface, and surely it's a good\ninterface, it maybe superior of the current Acucobol proprietary format\nwhen the archive is very large (several Gb).\n\n>I am reticent to admit that I have done a little COBOL and the interface for\n>data paradigms is very good for a dBase like package. 
If you can live \n>without a\n>client/server interface ala Postgres, and can live with a file based access\n>methodology, then what you want is doable.\n\nThe main problem is that we want see our data as relational database\nand we want continue to use the current programs. Currently we have\nsome customer that have their company archive large around 50Gb.\n\n>I'm not aware what platform you wish to run your program, I am assuming\n>Windows. The old dBase format is currently being used under the name \"xbase.\"\n>There are many libraries that conform to this file format and offer the \n>type of\n>access which you wish to have. On top of that, there are ODBC drivers (in UNIX\n>and Windows, btw) for these xBase files.\n\nAcucobol runtime, currently is present in around 650 different platform \n(HW/SW)\nso we can run the same programs in different environment. We use Linux\nand WNT/W2K as server and W9x/WME as client. Use the xBase format\nisn't a good choice when we have a several Gb of data, this is why I'm \nthinking\nto the PostgreSQL. The current Acucobol's \"flat file\" isn't adequate to manage\nsuch large files, we need a way to see that files as relational DB.\n\n>You write acucobol extensions using some generic xbase access layer, and use\n>the ODBC xbase driver for applications like Access and Excel.\n>\n>You'll have to sort out all the issues like concurrent access, and stuff like\n>that, but it should come pretty close to what you want to do.\n\nI have already done some work. I've implemented an extension of the generic\nAcucobol FS layer that talk with a PostgreSQL. This lowlevel layer \ntranslate each\nFS primitive in a query. The acucobol's record is translated in attribute \n(and vice versa)\nusing a XFD file (eXtended Fields Description) which describe each record's \nfield and\nthat is cached in memory. 
This file is generated by the acucobol compiler\nfor each used file. With this information I'm able to build a complete query to\nPostgreSQL; the returned tuples are translated into the expected \"flat record\"\nand finally returned to the runtime for its work.\n\nI know COBOL uses a different philosophy than a relational DB, but this work\nof mine shows that these two different worlds can talk. I also know the\ndifficulty of deeper low-level integration. I need some docs/pointers/files-to-read\non the PostgreSQL low-level routines, to see whether these different worlds could\nbe integrated more closely, bypassing the \"overhead\" of the query layer and\naccessing the DB directly.\n\n\nRoberto Fichera.\n\n",
"msg_date": "Thu, 07 Jun 2001 20:21:36 +0200",
"msg_from": "Roberto Fichera <kernel@tekno-soft.it>",
"msg_from_op": false,
"msg_subject": "Re: Re: Acucobol interface"
},
{
"msg_contents": "Roberto Fichera wrote:\n\n>\n> >I am reticent to admit that I have done a little COBOL and the interface for\n> >data paradigms is very good for a dBase like package. If you can live\n> >without a\n> >client/server interface ala Postgres, and can live with a file based access\n> >methodology, then what you want is doable.\n>\n> The main problem is that we want see our data as relational database\n> and we want continue to use the current programs. Currently we have\n> some customer that have their company archive large around 50Gb.\n\nIn what format is this data?\n\n> >I'm not aware what platform you wish to run your program, I am assuming\n> >Windows. The old dBase format is currently being used under the name \"xbase.\"\n> >There are many libraries that conform to this file format and offer the\n> >type of\n> >access which you wish to have. On top of that, there are ODBC drivers (in UNIX\n> >and Windows, btw) for these xBase files.\n>\n> Acucobol runtime, currently is present in around 650 different platform\n> (HW/SW)\n> so we can run the same programs in different environment. We use Linux\n> and WNT/W2K as server and W9x/WME as client. Use the xBase format\n> isn't a good choice when we have a several Gb of data, this is why I'm\n> thinking\n> to the PostgreSQL. The current Acucobol's \"flat file\" isn't adequate to manage\n> such large files, we need a way to see that files as relational DB.\n\nJust out of curiosity, why is the xbase format not a good choice?\n\n> >You write acucobol extensions using some generic xbase access layer, and use\n> >the ODBC xbase driver for applications like Access and Excel.\n> >\n> >You'll have to sort out all the issues like concurrent access, and stuff like\n> >that, but it should come pretty close to what you want to do.\n\nI think you are missing the point. A good xbase library will allow you to perform\n\"joins\" on data across tables. 
It doesn't have a SQL syntax, but that does not\nmean you can't code that way.\n\nAlso, ODBC drivers for xbase use SQL format queries.\n\n> I have already done some work. I've implemented an extension of the generic\n> Acucobol FS layer that talk with a PostgreSQL. This lowlevel layer\n> translate each\n> FS primitive in a query. The acucobol's record is translated in attribute\n> (and vice versa)\n> using a XFD file (eXtended Fields Description) which describe each record's\n> field and\n> that is cached in memory. This file is generated by the acucobol compiler\n> for each used file.\n> With this informations I'm able to perform a complete query to the\n> PostgreSQL, the\n> returned tuples will be translated in the expected \"flat record\" and\n> finally returned to the\n> runtime for its work.\n>\n> I know, the cobol use a different philosophy than a relational DB but this\n> my work\n> show that this two different world could talk. Also, I know the difficulty\n> of the major lowlevel\n> integration. I need some doc/indication/files-to-read of PostgreSQL\n> lowlevel routines\n> to see if this different world could have a major integration bypassing the\n> \"overhead\" of\n> the query accessing directly to the DB.\n\nI think, strongly, you are going down the wrong track. Take a look at:\n\nhttp://sourceforge.net/projects/xdb/\nhttp://www.unixodbc.org/\n\n\n\n>\n>\n> Roberto Fichera.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n",
"msg_date": "Thu, 07 Jun 2001 15:38:17 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Acucobol interface"
},
{
"msg_contents": "At 15.38 07/06/01 -0400, mlw wrote:\n\n>Roberto Fichera wrote:\n>\n> >\n> > >I am reticent to admit that I have done a little COBOL and the \n> interface for\n> > >data paradigms is very good for a dBase like package. If you can live\n> > >without a\n> > >client/server interface ala Postgres, and can live with a file based \n> access\n> > >methodology, then what you want is doable.\n> >\n> > The main problem is that we want see our data as relational database\n> > and we want continue to use the current programs. Currently we have\n> > some customer that have their company archive large around 50Gb.\n>\n>In what format is this data?\n\nIt's a VISION format, a proprietary variant of a B*Tree format.\n\n> > >I'm not aware what platform you wish to run your program, I am assuming\n> > >Windows. The old dBase format is currently being used under the name \n> \"xbase.\"\n> > >There are many libraries that conform to this file format and offer the\n> > >type of\n> > >access which you wish to have. On top of that, there are ODBC drivers \n> (in UNIX\n> > >and Windows, btw) for these xBase files.\n> >\n> > Acucobol runtime, currently is present in around 650 different platform\n> > (HW/SW)\n> > so we can run the same programs in different environment. We use Linux\n> > and WNT/W2K as server and W9x/WME as client. Use the xBase format\n> > isn't a good choice when we have a several Gb of data, this is why I'm\n> > thinking\n> > to the PostgreSQL. The current Acucobol's \"flat file\" isn't adequate to \n> manage\n> > such large files, we need a way to see that files as relational DB.\n>\n>Just out of curiosity, why is the xbase format not a good choice?\n\nBecause the DBF format didn't perform well on a large archive of serveral \nmillion\nrecords. 
It's don't reuse the deleted records, have a limitation of 255 \nfields and\nthe char() field can be max 255 char in length and the max record size is 4k.\n\n> > >You write acucobol extensions using some generic xbase access layer, \n> and use\n> > >the ODBC xbase driver for applications like Access and Excel.\n> > >\n> > >You'll have to sort out all the issues like concurrent access, and \n> stuff like\n> > >that, but it should come pretty close to what you want to do.\n>\n>I think you are missing the point. A good xbase library will allow you to \n>perform\n>\"joins\" on data across tables. It doesn't have a SQL syntax, but that does not\n>mean you can't code that way.\n>\n>Also, ODBC drivers for xbase use SQL format queries.\n\nI know.\n\n> > I have already done some work. I've implemented an extension of the generic\n> > Acucobol FS layer that talk with a PostgreSQL. This lowlevel layer\n> > translate each\n> > FS primitive in a query. The acucobol's record is translated in attribute\n> > (and vice versa)\n> > using a XFD file (eXtended Fields Description) which describe each record's\n> > field and\n> > that is cached in memory. This file is generated by the acucobol compiler\n> > for each used file.\n> > With this informations I'm able to perform a complete query to the\n> > PostgreSQL, the\n> > returned tuples will be translated in the expected \"flat record\" and\n> > finally returned to the\n> > runtime for its work.\n> >\n> > I know, the cobol use a different philosophy than a relational DB but this\n> > my work\n> > show that this two different world could talk. Also, I know the difficulty\n> > of the major lowlevel\n> > integration. I need some doc/indication/files-to-read of PostgreSQL\n> > lowlevel routines\n> > to see if this different world could have a major integration bypassing the\n> > \"overhead\" of\n> > the query accessing directly to the DB.\n>\n>I think, strongly, you are going down the wrong track.\n\nCould be ;-)! 
But I want to try to do some work before abandoning this\nsolution.\n\n>Take a look at:\n>\n>http://sourceforge.net/projects/xdb/\n\nI've taken a look at it, and it confirmed the expected limitations. Another\nproblem with this library is that we can't coordinate concurrent locks with\napplications other than the xdb library, so it's unusable for me.\n\n>http://www.unixodbc.org/\n\nI know it, it's a good link!\n\n\nRoberto Fichera.\n\n",
"msg_date": "Fri, 08 Jun 2001 10:44:48 +0200",
"msg_from": "Roberto Fichera <kernel@tekno-soft.it>",
"msg_from_op": false,
"msg_subject": "Re: Re: Acucobol interface"
}
] |
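The thread above describes translating Acucobol's ISAM-style FS primitives (START on a key, then READ NEXT) into SQL queries. A minimal sketch of that mapping, using a hypothetical `customer` table whose name and columns are invented for illustration:

```sql
-- Hypothetical COBOL file mapped to a table (names are illustrative only).
CREATE TABLE customer (cust_key integer PRIMARY KEY, name char(30));

-- COBOL "START customer KEY > :k" followed by "READ NEXT" corresponds to a
-- keyed range scan that advances past the last key returned; LIMIT keeps
-- each READ NEXT to a single row.
SELECT cust_key, name FROM customer
WHERE cust_key > 42
ORDER BY cust_key
LIMIT 1;
```

Each READ NEXT reissues the query with the previously returned key, which is the per-record round trip Roberto identifies as the main overhead of the query-based layer.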
[
{
"msg_contents": "Hi there:\n\nI would like to know how to fix this:\n\nEvery time I do a pg_dump or pg_dumpall I get this error message:\n\ndumpProcLangs(): handler procedure for language plpgsql not found\n\nAny help would be appreciated.\n\nThank you in advance.\n--\nIng. Luis Magaña\nGnovus Networks & Software\nwww.gnovus.com\n\n",
"msg_date": "Thu, 31 May 2001 12:26:19 -0500",
"msg_from": "Luis Magaña <joe666@gnovus.com>",
"msg_from_op": true,
"msg_subject": "pg_dump & pg_dumpall problem."
}
] |
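The error above usually means the pg_language row for plpgsql points at a handler function that no longer exists (for example, after a dump/reload). A hedged diagnostic sketch; catalog column names are as of PostgreSQL 7.1:

```sql
-- Does the language's handler OID still resolve to a function in pg_proc?
SELECT lanname, lanplcallfoid FROM pg_language WHERE lanname = 'plpgsql';
SELECT oid, proname FROM pg_proc
WHERE oid = (SELECT lanplcallfoid FROM pg_language WHERE lanname = 'plpgsql');

-- If the second query returns no row, the catalog entry is dangling.
-- Removing the language and recreating it with the shipped scripts
-- (droplang/createlang, run from the shell) normally clears the error:
DROP PROCEDURAL LANGUAGE 'plpgsql';
```

After the dangling entry is gone, `createlang plpgsql <dbname>` recreates both the handler function and the language row, and pg_dump should succeed again.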
[
{
"msg_contents": "Hi all,\n\nI'm not a postgres hacker, but I think you must be the most\nappropriate people to give me pointers about this question. Thus... sorry for\nany possible mistake.\n\nI'm currently exploring the possibility of using postgresql plus pgbench as a\nfirst test to stress the interconnection system in a parallel machine. I\nknow that TPC-B is just a toy (not very realistic... but before doing\nsomething more complex like TPC-C I want to see how postgres behaves).\n\nOk... well, I'm running these benchmarks on different SMP machines (SGI with 4\nto 8 processors) and the results are odd. The best performance is achieved\nwith just one backend (1 client). When I try to run more clients the tps\nfalls quickly.\n\nIn all cases I see that when I increase the number of clients the total CPU\nusage falls. With one client I can see 100% usage (after a warm-up to get\nall data from disk; I'm running without fsync and with a large shared\nbuffer). My systems have a lot of memory, so this is normal. But when I try\nwith more clients, per-CPU usage falls from about 40% with 2 clients to 10% with 8\nclients. I assume access to shared memory through critical regions\n(lock/unlock) must be one reason... but this is too much. I've heard that\nlocks in postgres are at table level instead of tuple level. Am I wrong?\n\nAny suggestions about this?\n\nThanks in advance for your support.\n\n--vpuente\n\n",
"msg_date": "Thu, 31 May 2001 20:09:04 +0200",
"msg_from": "\"Valentin Puente\" <vpuente@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Question about scalability in postgresql 7.1.2"
}
] |
[
{
"msg_contents": "\nI just realized that INSERT allows us to have more syntax than the\nmanual said. I wonder if we want to eliminate it or keep it with more\ndocumentation on the INSERT statement?\n\nHere is the INSERT synopsis we have in 7.2 documentation.\n==========\nINSERT INTO table [ ( column [, ...] ) ]\n { DEFAULT VALUES | VALUES ( expression [, ...] ) | SELECT query }\n\nAssume we have,\nCREATE TABLE t1 (a1 int, a2 int);\nCREATE TABLE t2 (a3 int, a4 int);\n\nINSERT INTO t2 VALUES(2, 0);\nINSERT INTO t2 VALUES(2,1);\n\n==== postgres allows us to have something like ====\n\nINSERT INTO t1 VALUES(1, 0 AS \"Oops\");\nINSERT INTO t1 VALUES(t2.*);\n\n===================\n\nFor the first one, I believe that is due to reusing the definition of\ntarget_list/target_el. I didn't dig in to see how PostgreSQL handles the\nsecond case. At least the INSERT synopsis does not cover this case.\n\n\n--\nRegards,\nLM Liu\n\n\n",
"msg_date": "Thu, 31 May 2001 13:24:12 -0700",
"msg_from": "Limin Liu <limin@pumpkinnet.com>",
"msg_from_op": true,
"msg_subject": "extra syntax on INSERT"
},
{
"msg_contents": "Limin Liu <limin@pumpkinnet.com> writes:\n> I just realized that INSERT allows us to have more syntax than the\n> manual said. I wonder if we want to elimiate it or keep it with more\n> documentation on the INSERT statment?\n\nThis will likely go away when we get around to upgrading INSERT to the\nfull SQL spec --- certainly I'd feel no compunction about removing any\nnon-SQL syntax that happens to be supported now, if it gets in the way\nof spec compliance.\n\nIn short, no I don't want to document it, because I don't want people\nto start relying on it.\n\n> For the first one, I believe that is due to reusing the definition of\n> target_list/target_el.\n\nYes. There's not a lot of difference in the implementations of\nINSERT ... VALUES and INSERT ... SELECT, at the moment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2001 20:58:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] extra syntax on INSERT "
},
{
"msg_contents": "On Thu, 31 May 2001, Tom Lane wrote:\n\n> Limin Liu <limin@pumpkinnet.com> writes:\n> > I just realized that INSERT allows us to have more syntax than the\n> > manual said. I wonder if we want to elimiate it or keep it with more\n> > documentation on the INSERT statment?\n>\n> This will likely go away when we get around to upgrading INSERT to the\n> full SQL spec --- certainly I'd feel no compunction about removing any\n> non-SQL syntax that happens to be supported now, if it gets in the way\n> of spec compliance.\n\nAre you talking about allowing multiple rows in one insert, like this?\n\nINSERT into foo VALUES ((1, 2, 3), (4, 5, 6), (7, 8, 9))\n\nThat would be a nice feature to have, and I think it's consistent with\nSQL-92.\n-- \nTod McQuillin\n\n\n\n",
"msg_date": "Fri, 1 Jun 2001 21:24:34 -0500 (CDT)",
"msg_from": "Tod McQuillin <devin@spamcop.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] extra syntax on INSERT "
}
] |
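For reference, the SQL-92 multi-row form Tod describes takes one parenthesized value list per row, with no extra outer pair of parentheses; the PostgreSQL of this era did not yet accept it, so the portable equivalents were INSERT ... SELECT or one INSERT per row. A sketch (table `foo` is hypothetical):

```sql
-- SQL-92 row-value lists: one parenthesized list per row, comma-separated.
INSERT INTO foo VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9);

-- Forms the PostgreSQL of this thread's era does accept:
INSERT INTO foo SELECT 1, 2, 3;
INSERT INTO foo VALUES (4, 5, 6);
```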
[
{
"msg_contents": "\nRefractions Research is pleased to announce the initial release of\nPostGIS, a set of 3-D geographic object types for the PostgreSQL 7.1.x\ndatabase server.\n\nPostGIS includes the following functionality:\n\n- Simple Features as defined by the OpenGIS Consortium (OGC)\n - Point\n - LineString\n - Polygon (with holes)\n - MultiPoint\n - MultiLineString\n - MultiPolygon\n - GeometryCollection\n- The text representation of the simple features is the OGC \n Well-Known Text format.\n- Geometries can be indexed using either R-Tree (not recommended) or \n GiST (recommended).\n- Simple geospatial analysis functions.\n- PostgreSQL JDBC extension objects corresponding to the geometries.\n\nPostGIS is released under the GNU General Public Licence.\n\nFor more information, visit the PostGIS web site,\nhttp://postgis.refractions.net or join the discussion list by sending a\nmessage to postgis-subscribe@yahoogroups.com .\n",
"msg_date": "Thu, 31 May 2001 15:49:24 -0700",
"msg_from": "Dave Blasby <dblasby@refractions.net>",
"msg_from_op": true,
"msg_subject": "Initial Release of PostGIS"
}
] |
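The announcement mentions OGC Well-Known Text input and GiST indexing. A minimal usage sketch under the assumption that PostGIS exposes a `geometry` column type accepting WKT literals; the table, column names, and exact input syntax are illustrative, and the actual functions in this early release may differ (consult the PostGIS site):

```sql
-- Assumes a 'geometry' type that accepts OGC Well-Known Text input;
-- names here are hypothetical.
CREATE TABLE roads (id integer, geom geometry);
INSERT INTO roads VALUES (1, 'LINESTRING(0 0, 10 10)');

-- The announcement recommends GiST over R-Tree for indexing geometries:
CREATE INDEX roads_geom_idx ON roads USING gist (geom);
```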
[
{
"msg_contents": "Hello all,\n\nAttached is a patch to implement a new internal function, 'has_privilege'.\nMy proposal below explains the reasoning behind this submittal, although I\nnever did get any feedback -- positive or negative. If the patch is accepted\nI'll be happy to do the work to create the system view as described.\n\nThe patch applies cleanly against cvs tip. One item I was not sure about was\nthe selection of the OID value for the new function. I chose 1920 for no\nother reason than that the highest OID in pg_proc.h was 1909, and this seemed\nlike a safe value. Is there somewhere I should have looked for guidance on\nthis?\n\nThanks,\n\n-- Joe\n\n> The recent discussions on pg_statistic got me started thinking about how to\n> implement a secure form of the view. Based on the list discussion, and a\n> suggestion from Tom, I did some research regarding how SQL92 and some of the\n> larger commercial database systems allow access to system privilege\n> information.\n>\n> I reviewed the ANSI SQL 92 specification, Oracle, MSSQL, and IBM DB2\n> (documentation only). Here's what I found:\n>\n> ANSI SQL 92 does not have any functions defined for retrieving privilege\n> information. It does, however, define an \"information schema\" and \"definition\n> schema\" which among other things includes a TABLE_PRIVILEGES view.\n>\n> With this view available, it is possible to discern what privileges the\n> current user has using a simple SQL statement. In Oracle, I found this view,\n> and some other variations. According to the Oracle DBA I work with, there is\n> no special function, and a SQL statement on the view is how he would gather\n> this kind of information when needed.\n>\n> MSSQL Server 7 also has this same view. 
Additionally, SQL7 has a T-SQL\n> function called PERMISSIONS with the following description:\n> \"Returns a value containing a bitmap that indicates the statement, object,\n> or column permissions for the current user.\n> Syntax PERMISSIONS([objectid [, 'column']])\".\n>\n> I only looked briefly at the IBM DB2 documentation, but could find no\n> mention of TABLE_PRIVILEGES or any privilege specific function. I imagine\n> TABLE_PRIVILEGES might be there somewhere since it seems to be standard\n> SQL92.\n>\n> Based on all of the above, I concluded that there is nothing compelling in\n> terms of a specific function to be compatible with. I do think that in the\n> longer term it makes sense to implement the SQL 92 information schema\nviews\n> in PostgreSQL.\n>\n> So, now for the proposal. I created a function (attached) which will allow\n> any privilege type to be probed, called has_privilege. It is used like\nthis:\n>\n> select relname from pg_class where has_privilege(current_user, relname,\n> 'update');\n>\n> or\n>\n> select has_privilege('postgres', 'pg_shadow', 'select');\n>\n> where\n> the first parameter is any valid user name\n> the second parameter can be a table, view, or sequence\n> the third parameter can be 'select', 'insert', 'update', 'delete', or\n> 'rule'\n>\n> The function is currently implemented as an external c function and\ndesigned\n> to be built under contrib. This function should really be an internal\n> function. If the proposal is acceptable, I would like to take on the task\nof\n> turning the function into an internal one (with guidance, pointers,\n> suggestions greatly appreciated). 
This would allow a secure view to be\n> implemented over pg_statistic as:\n>\n> create view pg_userstat as (\n> select\n> s.starelid\n> ,s.staattnum\n> ,s.staop\n> ,s.stanullfrac\n> ,s.stacommonfrac\n> ,s.stacommonval\n> ,s.staloval\n> ,s.stahival\n> ,c.relname\n> ,a.attname\n> ,sh.usename\n> from\n> pg_statistic as s\n> ,pg_class as c\n> ,pg_shadow as sh\n> ,pg_attribute as a\n> where\n> has_privilege(current_user,c.relname,'select')\n> and sh.usesysid = c.relowner\n> and a.attrelid = c.oid\n> and c.oid = s.starelid\n> );\n>\n> Then restrict pg_statistic from public viewing. This view would allow the\n> current user to view statistics only on relations for which they already\n> have 'select' granted.\n>\n> Comments?\n>\n> Regards,\n> -- Joe\n>\n> installation:\n>\n> place in contrib\n> tar -xzvf has_priv.tgz\n> cd has_priv\n> ./install.sh\n> Note: installs the function into template1 by default. Edit install.sh to\n> change.\n>\n>",
"msg_date": "Thu, 31 May 2001 23:31:51 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": true,
"msg_subject": "Fw: Isn't pg_statistic a security hole - Solution Proposal"
},
{
"msg_contents": "Joe Conway writes:\n\n> The patch applies cleanly against cvs tip. One item I was not sure about was\n> the selection of the OID value for the new function. I chose 1920 for no\n> other reason that the highest OID in pg_proc.h was 1909, and this seemed\n> like a safe value. Is there somewhere I should have looked for guidance on\n> this?\n\n~/pgsql/src/include/catalog$ ./unused_oids\n3 - 11\n90\n143\n352 - 353\n1264\n1713 - 1717\n1813\n1910 - 16383\n\n\n> > ANSI SQL 92 does not have any functions defined for retrieving privilege\n> > information. It does, however define an \"information schema\" and\n> \"definition\n> > schema\" which among other things includes a TABLE_PRIVILEGES view.\n\nYes, that's what we pretty much want to do once we have schema support.\nThe function you propose, or one similar to it, will probably be needed to\nmake this work.\n\n> > select has_privilege('postgres', 'pg_shadow', 'select');\n> >\n> > where\n> > the first parameter is any valid user name\n> > the second parameter can be a table, view, or sequence\n> > the third parameter can be 'select', 'insert', 'update', 'delete', or\n> > 'rule'\n\nThis is probably going to blow up when we have the said schema support.\nProbably better to reference things by oid. Also, since things other than\nrelations might have privileges sometime, the function name should\nprobably imply this; maybe \"has_table_privilege\".\n\nImplementation notes:\n\n* This function should probably go into backend/utils/adt/acl.c.\n\n* You don't need PG_FUNCTION_INFO_V1 for built-in functions.\n\n* I'm not sure whether it's useful to handle NULL parameters explicitly.\n The common approach is to return NULL, which would be semantically right\n for this function.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 1 Jun 2001 17:04:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution\n Proposal"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> This is probably going to blow up when we have the said schema support.\n> Probably better to reference things by oid.\n\nTwo versions, one that takes an oid and one that takes a name, might be\nconvenient. The name version will probably have to accept qualified\nnames (schema.table) once we have schema support --- but I don't think\nthat needs to break the function definition. An unqualified name would\nbe looked up using whatever schema resolution rules would be in effect\nfor ordinary table references.\n\nWe might also want the user to be specified by usesysid rather than\nname; and a two-parameter form that assumes user == current_user would\nbe a particularly useful shorthand.\n\n> Also, since things other than\n> relations might have privileges sometime, the function name should\n> probably imply this; maybe \"has_table_privilege\".\n\nAgreed.\n\n> * I'm not sure whether it's useful to handle NULL parameters explicitly.\n> The common approach is to return NULL, which would be semantically right\n> for this function.\n\nThe standard approach for C-coded functions is to mark them\n'proisstrict' in pg_proc, and then not waste any code checking for NULL;\nthe function manager takes care of it for you. The only reason not to\ndo it that way is if you actually want to return non-NULL for (some\ncases with) NULL inputs. Offhand this looks like a strict function to\nme...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Jun 2001 13:18:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal "
},
{
"msg_contents": "> The standard approach for C-coded functions is to mark them\n> 'proisstrict' in pg_proc, and then not waste any code checking for NULL;\n> the function manager takes care of it for you. The only reason not to\n> do it that way is if you actually want to return non-NULL for (some\n> cases with) NULL inputs. Offhand this looks like a strict function to\n> me...\n>\n\nThanks for the feedback! To summarize the recommended changes:\n\n- put function into backend/utils/adt/acl.c.\n- remove PG_FUNCTION_INFO_V1\n- mark 'proisstrict' in pg_proc\n- rename to has_table_privilege()\n- overload the function name for 6 versions (OIDs 1920 - 1925):\n -> has_table_privilege(text username, text relname, text priv)\n -> has_table_privilege(oid usesysid, text relname, text priv)\n -> has_table_privilege(oid usesysid, oid reloid, text priv)\n -> has_table_privilege(text username, oid reloid, text priv)\n -> has_table_privilege(text relname, text priv) /* assumes\ncurrent_user */\n -> has_table_privilege(oid reloid, text priv) /* assumes current_user\n*/\n\nNew patch forthcoming . . .\n\n-- Joe\n\n",
"msg_date": "Fri, 1 Jun 2001 15:33:30 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal "
},
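Given the signatures summarized above, the secure-statistics view from the original proposal reduces to a simple predicate. A sketch using the renamed function, with name and argument order as agreed in this thread:

```sql
-- List the relations the current user may read, per the proposed function:
SELECT relname
FROM pg_class
WHERE has_table_privilege(current_user, relname, 'select');
```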
{
"msg_contents": "Tom Lane writes:\n\n> Two versions, one that takes an oid and one that takes a name, might be\n> convenient. The name version will probably have to accept qualified\n> names (schema.table) once we have schema support\n\nWill you expect the function to do dequoting etc. as well? This might get\nout of hand.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 2 Jun 2001 16:49:11 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution\n Proposal"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Will you expect the function to do dequoting etc. as well? This might get\n> out of hand.\n\nHm. We already have such code available for nextval(), so I suppose\nit might be appropriate to invoke that. Not sure. Might be better\nto expect the given string to be the correct case already. Let's see\n... if you expect the function to be applied to names extracted from\npg_class or other tables, then exact case would be better --- but it'd\nbe just as easy to invoke the OID form in such cases. For hand-entered\ndata the nextval convention is probably more convenient.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Jun 2001 11:04:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal "
},
{
"msg_contents": "> Thanks for the feedback! To summarize the recommended changes:\n>\n> - put function into backend/utils/adt/acl.c.\n> - remove PG_FUNCTION_INFO_V1\n> - mark 'proisstrict' in pg_proc\n> - rename to has_table_privilege()\n> - overload the function name for 6 versions (OIDs 1920 - 1925):\n> -> has_table_privilege(text username, text relname, text priv)\n> -> has_table_privilege(oid usesysid, text relname, text priv)\n> -> has_table_privilege(oid usesysid, oid reloid, text priv)\n> -> has_table_privilege(text username, oid reloid, text priv)\n> -> has_table_privilege(text relname, text priv) /* assumes\n> current_user */\n> -> has_table_privilege(oid reloid, text priv) /* assumes\ncurrent_user\n> */\n>\n\nHere's a new patch for has_table_privilege( . . .). One change worthy of\nnote is that I added a definition to fmgr.h as follows:\n\n #define PG_NARGS (fcinfo->nargs)\n\nThis allowed me to use two of the new functions to handle both 2 and 3\nargument cases. Also different from the above, I used int instead of oid for\nthe usesysid type.\n\nI'm also attaching a test script and expected output. I haven't yet looked\nat how to properly include these into the normal regression testing -- any\npointers are much appreciated.\n\nThanks,\n\n-- Joe",
"msg_date": "Sat, 2 Jun 2001 15:14:41 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal "
},
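The `PG_NARGS` trick Joe describes — one C function body backing both the two- and three-argument SQL variants — can be sketched as below. The struct layout and the driver functions are illustrative stand-ins, not the real fmgr definitions; only the `PG_NARGS` macro itself mirrors what the mail proposes adding to fmgr.h.

```c
#include <assert.h>

/* Stand-in for fmgr's FunctionCallInfoData: only the fields this
 * sketch needs (the real struct carries Datums, null flags, etc.). */
typedef struct
{
    int nargs;          /* number of arguments actually passed */
    int args[3];        /* illustrative argument slots */
} FunctionCallInfoData;

typedef FunctionCallInfoData *FunctionCallInfo;

/* The macro Joe proposes adding to fmgr.h */
#define PG_NARGS (fcinfo->nargs)

/* One body serving both SQL signatures:
 *   has_table_privilege(user, rel, priv)   -- three arguments
 *   has_table_privilege(rel, priv)         -- two arguments
 * Returns the user id an ACL check would run against: the explicit
 * first argument when present, otherwise the caller's own id. */
static int effective_user(FunctionCallInfo fcinfo, int current_user_id)
{
    if (PG_NARGS == 3)
        return fcinfo->args[0];
    return current_user_id;     /* two-arg form assumes current_user */
}

/* Tiny drivers standing in for fmgr dispatch */
static int demo_three_args(void)
{
    FunctionCallInfoData fc = {3, {42, 7, 1}};
    return effective_user(&fc, 99);
}

static int demo_two_args(void)
{
    FunctionCallInfoData fc = {2, {7, 1, 0}};
    return effective_user(&fc, 99);
}
```

The point of the macro is simply that both pg_proc entries can share one C body, branching on the argument count at call time.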
{
"msg_contents": "\"Joe Conway\" <joe@conway-family.com> writes:\n> Here's a new patch for has_table_privilege\n\nLooks like you're getting there. Herewith some miscellaneous comments\non minor matters like coding style:\n\n> I used int instead of oid for the usesysid type.\n\nI am not sure if that's a good idea or not. Peter E. has proposed\nchanging usesysid to type OID or even eliminating it entirely\n(identifying users by pg_shadow row OID alone). While this hasn't\nhappened yet, it'd be a good idea to minimize the dependencies of your\ncode on which type is used for user ID. In particular I'd suggest using\n\"name\" and \"id\" in the names of your variant functions, not \"text\" and\n\"oid\" and \"int\", so that they don't have to be renamed if the types\nchange.\n\n> I'm also attaching a test script and expected output.\n\nThe script doesn't seem to demonstrate that any attention is paid to the\nmode input --- AFAICT all the tested cases are either all privileges\navailable or no privileges available. It could probably be a lot\nshorter without being materially less effective, too; quasi-exhaustive\ntests are usually not worth the cycles to run.\n\n> +Datum\n> +text_oid_has_table_privilege(PG_FUNCTION_ARGS)\n\nThis is just my personal preference, but I'd put the type identifiers at\nthe end (has_table_privilege_name_id) rather than giving them pride of\nplace at the start of the name.\n\n> +Datum\n> +has_table_privilege(int usesysid, char *relname, char *priv_type)\n\nSince has_table_privilege is just an internal function and is not\nfmgr-callable, there's no percentage in declaring it to return Datum;\nit should just return bool and not use the PG_RETURN_ macros. You have\nin fact called it as though it returned bool, which would be a type\nviolation if C were not so lax about type conversions.\n\nActually, though, I'm wondering why has_table_privilege is a function\nat all. 
Its tests for valid usesysid and relname are a waste of cycles;\npg_aclcheck will do those for itself. The only actually useful code in\nit is the conversion from a priv_type string to an AclMode code, which\nwould seem to be better handled as a separate function that just does\nthat part. The has_table_privilege_foo_bar routines could call\npg_aclcheck for themselves without any material loss of concision.\n\n> +\tresult = pg_aclcheck(relname, usesysid, mode);\n> +\n> +\tif (result == 1) {\n\nThis is not only non-symbolic, but outright wrong. You should be\ntesting pg_aclcheck's result to see if it is ACLCHECK_OK or not.\n\n> +/* Privilege names for oid_oid_has_table_privilege */\n> +#define PRIV_INSERT\t\t\t\"INSERT\\0\"\n> +#define PRIV_SELECT\t\t\t\"SELECT\\0\"\n> +#define PRIV_UPDATE\t\t\t\"UPDATE\\0\"\n> +#define PRIV_DELETE\t\t\t\"DELETE\\0\"\n> +#define PRIV_RULE\t\t\t\"RULE\\0\"\n> +#define PRIV_REFERENCES\t\t\"REFERENCES\\0\"\n> +#define PRIV_TRIGGER\t\t\"TRIGGER\\0\"\n\nYou need not write these strings with those redundant null terminators.\nFor that matter, since they're only known in one function, it's not\nclear that they should be exported to all and sundry in a header file\nin the first place. I'd be inclined to just code\n\n\tAclMode convert_priv_string (char * str)\n\t{\n\t\tif (strcasecmp(str, \"SELECT\") == 0)\n\t\t\treturn ACL_SELECT;\n\t\tif (strcasecmp(str, \"INSERT\") == 0)\n\t\t\treturn ACL_INSERT;\n\t\t... etc ...\n\t\telog(ERROR, ...);\n\t}\n\nand keep all the knowledge right there in that function. (Possibly it\nshould take a text* not char*, so as to avoid duplicated conversion code\nin the callers, but this is minor.)\n\n\nDespite all these gripes, it looks pretty good. One more round of\nrevisions ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 02 Jun 2001 19:26:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal "
},
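Tom's suggested `convert_priv_string` can be fleshed out as follows. This is a hedged sketch: the `AclMode` bit values here are made up for illustration (the real ones live in acl.h), and the unknown-keyword path returns 0 instead of calling `elog(ERROR)` so the sketch stays self-contained.

```c
#include <assert.h>
#include <strings.h>    /* strcasecmp */

typedef unsigned int AclMode;

/* Illustrative bit assignments; the real values come from acl.h. */
#define ACL_INSERT      (1 << 0)
#define ACL_SELECT      (1 << 1)
#define ACL_UPDATE      (1 << 2)
#define ACL_DELETE      (1 << 3)
#define ACL_RULE        (1 << 4)
#define ACL_REFERENCES  (1 << 5)
#define ACL_TRIGGER     (1 << 6)

/* Map a privilege keyword to its AclMode bit, case-insensitively.
 * Returns 0 for an unrecognized keyword; the backend version would
 * elog(ERROR) instead. */
AclMode convert_priv_string(const char *str)
{
    if (strcasecmp(str, "SELECT") == 0)     return ACL_SELECT;
    if (strcasecmp(str, "INSERT") == 0)     return ACL_INSERT;
    if (strcasecmp(str, "UPDATE") == 0)     return ACL_UPDATE;
    if (strcasecmp(str, "DELETE") == 0)     return ACL_DELETE;
    if (strcasecmp(str, "RULE") == 0)       return ACL_RULE;
    if (strcasecmp(str, "REFERENCES") == 0) return ACL_REFERENCES;
    if (strcasecmp(str, "TRIGGER") == 0)    return ACL_TRIGGER;
    return 0;   /* unknown privilege name */
}
```

Keeping the string-to-mode knowledge inside this one function is exactly the encapsulation the review asks for: callers never see the keyword strings, only AclMode bits.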
{
"msg_contents": "Thanks for the detailed feedback, Tom. I really appreciate the pointers on\nmy style and otherwise. Attached is my next attempt. To summarize the\nchanges:\n\n- changed usesysid back to Oid. I noticed that the Acl functions all treated\nusesysid as an Oid anyway.\n\n- changed function names to has_user_privilege_name_name,\nhas_user_privilege_name_id, etc\n\n- trimmed down test script, added variety (some privs granted, not all), and\nadded bad input cases (this already paid off -- see below)\n\n- replaced has_table_privilege(int usesysid, char *relname, char *priv_type)\n with\n AclMode convert_priv_string (text * priv_type_text)\n\n- changed\n if (result == 1) {\n PG_RETURN_BOOL(FALSE);\n . . .\n to\n if (result == ACLCHECK_OK) {\n PG_RETURN_BOOL(TRUE);\n . . .\n- removed #define PRIV_INSERT \"INSERT\\0\", etc from acl.h\n\nOne item of note -- while pg_aclcheck *does* validate relname for\nnon-superusers, it *does not* bother for superusers. Therefore I left the\nrelname check in the has_table_privilege_*_name() functions. Also note that\nI skipped has_priv_r3.diff -- that one helped find the superuser/relname\nissue.\n\nI hope this version passes muster ;-)\n\n-- Joe",
"msg_date": "Sat, 2 Jun 2001 20:22:44 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal "
},
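The `result == ACLCHECK_OK` change Joe describes reduces each user-visible variant to a thin wrapper around pg_aclcheck. A sketch of that shape, with a stubbed pg_aclcheck — the stub's grant behavior and the ACLCHECK_* values are assumptions for illustration, not the backend's actual definitions:

```c
#include <assert.h>
#include <string.h>

/* Illustrative result codes; the real ones live in acl.h. */
#define ACLCHECK_OK        0
#define ACLCHECK_NO_PRIV   1

/* Stub standing in for the backend's pg_aclcheck(): for this sketch
 * it grants every mode on "granted_table" and nothing elsewhere. */
static int pg_aclcheck_stub(const char *relname, unsigned int usesysid,
                            unsigned int mode)
{
    (void) usesysid;
    (void) mode;
    return strcmp(relname, "granted_table") == 0 ? ACLCHECK_OK
                                                 : ACLCHECK_NO_PRIV;
}

/* Shape of each user-visible variant after the fix: compare against
 * the symbolic ACLCHECK_OK rather than a bare integer literal. */
static int has_table_privilege_sketch(const char *relname,
                                      unsigned int usesysid,
                                      unsigned int mode)
{
    return pg_aclcheck_stub(relname, usesysid, mode) == ACLCHECK_OK;
}
```

Testing against the symbolic constant also survives any future renumbering of the ACLCHECK_* codes, which a literal `== 1` would not.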
{
"msg_contents": "[ -> hackers ]\n\nTom Lane writes:\n\n> > Will you expect the function to do dequoting etc. as well? This might get\n> > out of hand.\n>\n> Hm. We already have such code available for nextval(),\n\nIMHO, nextval() isn't the greatest interface in the world. I do like the\nalternative (deprecated?) syntax sequence.nextval() because of the\nnotational resemblence to OO. (We might even be able to turn this into\nsomething like an SQL99 \"class\" feature.)\n\nAs I understand it, currently\n\n relation.function(a, b, c)\n\nends up as being a function call\n\n function(relation, a, b, c)\n\nwhere the first argument is \"text\". This is probably an unnecessary\nfragility, since the oid of the relation should already be known by that\ntime. So perhaps we could change this that the first argument gets passed\nin an Oid. Then we'd really only need the Oid version of Joe's\nhas_*_privilege functions.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 3 Jun 2001 17:18:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> IMHO, nextval() isn't the greatest interface in the world. I do like the\n> alternative (deprecated?) syntax sequence.nextval() because of the\n> notational resemblence to OO.\n\nTry \"nonexistent\". I too would like a notation like that, because it\nwould be more transparent to the user w.r.t. case folding and such.\nBut it doesn't exist now.\n\nObserve, however, that such a notation would work well only for queries\nin which the sequence/table name is fixed and known when the query is\nwritten. I don't see a way to use it in the case where the name is\nbeing computed at runtime (eg, taken from a table column). So it\ndoesn't really solve the problem posed by has_table_privilege.\n\n> As I understand it, currently\n> relation.function(a, b, c)\n> ends up as being a function call\n> function(relation, a, b, c)\n> where the first argument is \"text\".\n\nSorry, that has nothing to do with reality. What we actually have is\nan equivalence between the two notations\n\trel.func\n\tfunc(rel)\nwhere the semantics are that an entire tuple of the relation \"rel\" is\npassed to the function. This doesn't really gain us anything for the\nproblem at hand (and we'll quite likely have to give it up anyway when\nwe implement schemas, since SQL has very different ideas about what\na.b.c means than our current parser does).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Jun 2001 13:17:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "> where the semantics are that an entire tuple of the relation \"rel\" is\n> passed to the function. This doesn't really gain us anything for the\n> problem at hand (and we'll quite likely have to give it up anyway when\n> we implement schemas, since SQL has very different ideas about what\n> a.b.c means than our current parser does).\n>\n\nI wasn't quite sure if there are changes I can/should make to\nhas_table_privilege based on this discussion. Is there any action for me on\nthis (other than finishing the regression test and creating documentation\npatches)?\n\nThanks,\n\n-- Joe\n\n",
"msg_date": "Wed, 6 Jun 2001 14:45:57 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "\"Joe Conway\" <joe@conway-family.com> writes:\n> I wasn't quite sure if there are changes I can/should make to\n> has_table_privilege based on this discussion.\n\nMy feeling is that the name-based variants of has_table_privilege should\nperform downcasing and truncation of the supplied strings before trying\nto use them as tablename or username; see get_seq_name in\nbackend/commands/sequence.c for a model. (BTW, I only just now added\ntruncation code to that routine, so look at current CVS. Perhaps the\nroutine should be renamed and placed somewhere else, so that sequence.c\nand has_table_privilege can share it.)\n\nPeter's argument seemed to be that there shouldn't be name-based\nvariants at all, with which I do not agree; but perhaps that's not\nwhat he meant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Jun 2001 18:10:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "> My feeling is that the name-based variants of has_table_privilege should\n> perform downcasing and truncation of the supplied strings before trying\n> to use them as tablename or username; see get_seq_name in\n> backend/commands/sequence.c for a model. (BTW, I only just now added\n> truncation code to that routine, so look at current CVS. Perhaps the\n> routine should be renamed and placed somewhere else, so that sequence.c\n> and has_table_privilege can share it.)\n>\n\nLooking at get_seq_name, it does seem like it should be called something\nlike get_object_name (or just get_name?) and moved to a common location. Am\nI correct in thinking that this function could/should be called by any other\nfunction (internal, C, plpgsql, or otherwise) which accepts a text\nrepresentation of a system object name?\n\nWhat if I rename the get_seq_name function and move it to\nbackend/utils/adt/name.c (and of course change the references to it in\nsequence.c)? Actually, now I'm wondering why nameout doesn't downcase and\ntruncate.\n\n-- Joe\n\n\n\n",
"msg_date": "Wed, 6 Jun 2001 22:09:05 -0700",
"msg_from": "\"Joe Conway\" <joe.conway@mail.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "> representation of a system object name?\n>\n> What if I rename the get_seq_name function and move it to\n> backend/utils/adt/name.c (and of course change the references to it in\n> sequence.c)? Actually, now I'm wondering why nameout doesn't downcase and\n> truncate.\n>\n> -- Joe\n\nYikes! Sorry about sending that last message 3 times -- I guess that's what\nI get for using an evil mail client ;-)\n\nJoe\n\n",
"msg_date": "Wed, 6 Jun 2001 22:20:02 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": true,
"msg_subject": "sorry for the repeats - no spam intended :-)"
},
{
"msg_contents": "Tom Lane writes:\n\n> My feeling is that the name-based variants of has_table_privilege should\n> perform downcasing and truncation of the supplied strings before trying\n> to use them as tablename or username; see get_seq_name in\n> backend/commands/sequence.c for a model.\n\nI don't like this approach. It's ugly, non-intuitive, and inconvenient.\n\nSince these functions will primarily be used in building a sort of\ninformation schema and for querying system catalogs, we should use the\napproach that is or will be used there: character type values contain the\ntable name already case-adjusted. Imagine the pain we would have to go\nthrough to *re-quote* the names we get from the system catalogs and\ninformation schema components before passing them to this function.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 7 Jun 2001 16:16:33 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole\n\t- Solution Proposal"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Since these functions will primarily be used in building a sort of\n> information schema and for querying system catalogs, we should use the\n> approach that is or will be used there: character type values contain the\n> table name already case-adjusted.\n\nWeren't you just arguing that such cases could/should use the OID, not\nthe name at all? ISTM the name-based variants will primarily be used\nfor user-entered names, and in that case the user can reasonably expect\nthat a name will be interpreted the same way as if he'd written it out\nin a query.\n\nThe nextval approach is ugly, I'll grant you, but it's also functional.\nWe got complaints about nextval before we put that in; we get lots\nfewer now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Jun 2001 00:06:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "Tom Lane writes:\n\n> Weren't you just arguing that such cases could/should use the OID, not\n> the name at all?\n\nYes, but if we're going to have name arguments, we should have sane ones.\n\n> ISTM the name-based variants will primarily be used for user-entered\n> names, and in that case the user can reasonably expect that a name\n> will be interpreted the same way as if he'd written it out in a query.\n\nThat would be correct if the user were actually entering the name in the\nsame way, i.e., unquoted or double-quoted.\n\n> The nextval approach is ugly, I'll grant you, but it's also functional.\n\nBut it's incompatible with the SQL conventions.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 8 Jun 2001 18:09:50 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole\n\t- Solution Proposal"
},
{
"msg_contents": "> > ISTM the name-based variants will primarily be used for user-entered\n> > names, and in that case the user can reasonably expect that a name\n> > will be interpreted the same way as if he'd written it out in a query.\n>\n> That would be correct if the user were actually entering the name in the\n> same way, i.e., unquoted or double-quoted.\n>\n> > The nextval approach is ugly, I'll grant you, but it's also functional.\n>\n> But it's incompatible with the SQL conventions.\n>\n\nIs the concern that the name-based variants of the function should be called\nlike:\n\n select has_table_privilege(current_user, pg_class, 'insert');\n or\n select has_table_privilege(current_user, \"My Quoted Relname\", 'insert');\n\ninstead of\n\n select has_table_privilege(current_user, 'pg_class', 'insert');\n or\n select has_table_privilege(current_user, '\"My Quoted Relname\"',\n'insert');\n\n?\n\nIf so, what would be involved in fixing it?\n\n From an end user's perspective, I wouldn't mind the latter syntax, although\nthe former is clearly more intuitive. But I'd rather have the second form\nthan nothing (just MHO).\n\n-- Joe\n\n\n",
"msg_date": "Fri, 8 Jun 2001 18:28:20 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "\"Joe Conway\" <joe@conway-family.com> writes:\n> Is the concern that the name-based variants of the function should be called\n> like:\n\n> select has_table_privilege(current_user, pg_class, 'insert');\n> or\n> select has_table_privilege(current_user, \"My Quoted Relname\", 'insert');\n\nIt'd be really nice to do that, but I don't see any reasonable way to do\nit. The problem is that the unquoted relation name will probably be\nresolved (to the wrong thing) before we discover that the function wants\nit to be resolved as a relation OID. Remember that the arguments of a\nfunction have to be resolved before we can even start to look up the\nfunction, since function lookup depends on the types of the arguments.\n\nI have just thought of a possible compromise. Peter is right that we\ndon't want case conversion on table names that are extracted from\ncatalogs. But I think we do want it on table names expressed as string\nliterals. Could we make the assumption that table names in catalogs\nwill be of type 'name'? If so, it'd work to make two versions of the \nhas_table_privilege function, one taking type \"name\" and the other\ntaking type \"text\". The \"name\" version would take its input as-is,\nthe \"text\" version would do case folding and truncation. This would\nwork transparently for queries selecting relation names from the system\ncatalogs, and it'd also work transparently for queries using unmarked\nstring literals (which will be preferentially resolved as type \"text\").\nWorst case if the system makes the wrong choice is you throw in an\nexplicit coercion to name or text. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Jun 2001 00:31:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Thanks for the detailed feedback, Tom. I really appreciate the pointers on\n> my style and otherwise. Attached is my next attempt. To summarize the\n> changes:\n> \n> - changed usesysid back to Oid. I noticed that the Acl functions all treated\n> usesysid as an Oid anyway.\n> \n> - changed function names to has_user_privilege_name_name,\n> has_user_privilege_name_id, etc\n> \n> - trimmed down test script, added variety (some privs granted, not all), and\n> added bad input cases (this already paid off -- see below)\n> \n> - replaced has_table_privilege(int usesysid, char *relname, char *priv_type)\n> with\n> AclMode convert_priv_string (text * priv_type_text)\n> \n> - changed\n> if (result == 1) {\n> PG_RETURN_BOOL(FALSE);\n> . . .\n> to\n> if (result == ACLCHECK_OK) {\n> PG_RETURN_BOOL(TRUE);\n> . . .\n> - removed #define PRIV_INSERT \"INSERT\\0\", etc from acl.h\n> \n> One item of note -- while pg_aclcheck *does* validate relname for\n> non-superusers, it *does not* bother for superusers. Therefore I left the\n> relname check in the has_table_privilege_*_name() functions. Also note that\n> I skipped has_priv_r3.diff -- that one helped find the superuser/relname\n> issue.\n> \n> I hope this version passes muster ;-)\n> \n> -- Joe\n> \n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 9 Jun 2001 18:15:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> I will try to apply it within the next 48 hours.\n\nIt's not approved yet ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Jun 2001 18:18:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Your patch has been added to the PostgreSQL unapplied patches list at:\n> > \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> > I will try to apply it within the next 48 hours.\n> \n> It's not approved yet ...\n\nOK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 9 Jun 2001 18:20:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Isn't pg_statistic a security hole - Solution Proposal"
},
{
"msg_contents": "> I have just thought of a possible compromise. Peter is right that we\n> don't want case conversion on table names that are extracted from\n> catalogs. But I think we do want it on table names expressed as string\n> literals. Could we make the assumption that table names in catalogs\n> will be of type 'name'? If so, it'd work to make two versions of the\n> has_table_privilege function, one taking type \"name\" and the other\n> taking type \"text\". The \"name\" version would take its input as-is,\n> the \"text\" version would do case folding and truncation. This would\n> work transparently for queries selecting relation names from the system\n> catalogs, and it'd also work transparently for queries using unmarked\n> string literals (which will be preferentially resolved as type \"text\").\n> Worst case if the system makes the wrong choice is you throw in an\n> explicit coercion to name or text. Comments?\n\nOK -- here's take #5.\n\nIt \"make\"s and \"make check\"s clean against current cvs tip.\n\nThere are now both Text and Name variants, and the regression test support\nis rolled into the patch. 
Note that to be complete wrt Name based variants,\nthere are now 12 user visible versions of has_table_privilege:\n\nhas_table_privilege(Text usename, Text relname, Text priv_type)\nhas_table_privilege(Text usename, Name relname, Text priv_type)\nhas_table_privilege(Name usename, Text relname, Text priv_type)\nhas_table_privilege(Name usename, Name relname, Text priv_type)\nhas_table_privilege(Text relname, Text priv_type) /* assumes current_user */\nhas_table_privilege(Name relname, Text priv_type) /* assumes current_user */\nhas_table_privilege(Text usename, Oid reloid, Text priv_type)\nhas_table_privilege(Name usename, Oid reloid, Text priv_type)\nhas_table_privilege(Oid reloid, Text priv_type) /* assumes current_user */\nhas_table_privilege(Oid usesysid, Text relname, Text priv_type)\nhas_table_privilege(Oid usesysid, Name relname, Text priv_type)\nhas_table_privilege(Oid usesysid, Oid reloid, Text priv_type)\n\nFor the Text based inputs, a new internal function, get_Name is used\n(shamelessly copied from get_seq_name in sequence.c) to downcase if not\nquoted, or remove quotes if quoted, and truncate. I also added a few test\ncases for the downcasing, quote removal, and Name based variants to the\nregression test.\n\nOnly thing left (I hope!) is documentation. I'm sure I either have or can\nget the DocBook tools, but I've never used them. Would it be simpler to\nclone and hand edit one of the existing docs? Any suggestions to get me\nstarted?\n\nThanks,\n\n-- Joe",
"msg_date": "Sat, 9 Jun 2001 19:26:52 -0700",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Re: Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
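The get_Name helper Joe describes (copied from get_seq_name) can be approximated as below. It's a sketch over plain C strings rather than text datums, implementing the rules stated in the mail: a double-quoted input keeps its case with the quotes stripped, an unquoted input is downcased, and either way the result is truncated to NAMEDATALEN-1.

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

#define NAMEDATALEN 32   /* identifier length limit in 7.x-era PostgreSQL */

/* Approximation of get_Name(): returns a pointer to a static buffer
 * holding the de-quoted (or case-folded) and truncated identifier. */
static const char *get_name_sketch(const char *input)
{
    static char result[NAMEDATALEN];
    size_t len = strlen(input);
    size_t i, out = 0;

    if (len >= 2 && input[0] == '"' && input[len - 1] == '"')
    {
        /* double-quoted: strip the quotes, preserve case exactly */
        for (i = 1; i + 1 < len && out < NAMEDATALEN - 1; i++)
            result[out++] = input[i];
    }
    else
    {
        /* unquoted: fold to lower case, as the parser would */
        for (i = 0; i < len && out < NAMEDATALEN - 1; i++)
            result[out++] = (char) tolower((unsigned char) input[i]);
    }
    result[out] = '\0';
    return result;
}
```

This is what makes `has_table_privilege(current_user, '"My Quoted Relname"', 'insert')` and `has_table_privilege(current_user, 'PG_CLASS', 'insert')` behave like the equivalent identifiers written in a query.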
{
"msg_contents": "> I have just thought of a possible compromise. Peter is right that we\n> don't want case conversion on table names that are extracted from\n> catalogs. But I think we do want it on table names expressed as string\n> literals. Could we make the assumption that table names in catalogs\n> will be of type 'name'? If so, it'd work to make two versions of the \n> has_table_privilege function, one taking type \"name\" and the other\n> taking type \"text\". The \"name\" version would take its input as-is,\n> the \"text\" version would do case folding and truncation. This would\n> work transparently for queries selecting relation names from the system\n> catalogs, and it'd also work transparently for queries using unmarked\n> string literals (which will be preferentially resolved as type \"text\").\n> Worst case if the system makes the wrong choice is you throw in an\n> explicit coercion to name or text. Comments?\n\nSeems you are adding a distinction between name and text that we never\nhad before. Is it worth it to fix this case?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Jun 2001 00:57:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole -\n\tSolution Proposal"
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> > I have just thought of a possible compromise. Peter is right that we\n> > don't want case conversion on table names that are extracted from\n> > catalogs. But I think we do want it on table names expressed as string\n> > literals. Could we make the assumption that table names in catalogs\n> > will be of type 'name'? If so, it'd work to make two versions of the\n> > has_table_privilege function, one taking type \"name\" and the other\n> > taking type \"text\". The \"name\" version would take its input as-is,\n> > the \"text\" version would do case folding and truncation. This would\n> > work transparently for queries selecting relation names from the system\n> > catalogs, and it'd also work transparently for queries using unmarked\n> > string literals (which will be preferentially resolved as type \"text\").\n> > Worst case if the system makes the wrong choice is you throw in an\n> > explicit coercion to name or text. Comments?\n> \n> OK -- here's take #5.\n> \n> It \"make\"s and \"make check\"s clean against current cvs tip.\n> \n> There are now both Text and Name variants, and the regression test support\n> is rolled into the patch. 
Note that to be complete wrt Name based variants,\n> there are now 12 user visible versions of has_table_privilege:\n> \n> has_table_privilege(Text usename, Text relname, Text priv_type)\n> has_table_privilege(Text usename, Name relname, Text priv_type)\n> has_table_privilege(Name usename, Text relname, Text priv_type)\n> has_table_privilege(Name usename, Name relname, Text priv_type)\n> has_table_privilege(Text relname, Text priv_type) /* assumes current_user */\n> has_table_privilege(Name relname, Text priv_type) /* assumes current_user */\n> has_table_privilege(Text usename, Oid reloid, Text priv_type)\n> has_table_privilege(Name usename, Oid reloid, Text priv_type)\n> has_table_privilege(Oid reloid, Text priv_type) /* assumes current_user */\n> has_table_privilege(Oid usesysid, Text relname, Text priv_type)\n> has_table_privilege(Oid usesysid, Name relname, Text priv_type)\n> has_table_privilege(Oid usesysid, Oid reloid, Text priv_type)\n> \n> For the Text based inputs, a new internal function, get_Name is used\n> (shamelessly copied from get_seq_name in sequence.c) to downcase if not\n> quoted, or remove quotes if quoted, and truncate. I also added a few test\n> cases for the downcasing, quote removal, and Name based variants to the\n> regression test.\n> \n> Only thing left (I hope!) is documentation. I'm sure I either have or can\n> get the DocBook tools, but I've never used them. Would it be simpler to\n> clone and hand edit one of the existing docs? Any suggestions to get me\n> started?\n> \n> Thanks,\n> \n> -- Joe\n> \n> \n> \n> \n> \n> \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Jun 2001 21:44:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Fw: Isn't pg_statistic a security hole -\n\tSolution Proposal"
},
{
"msg_contents": "\nI don't know about other people but the 48 hours notice email and the\nweb page of outstanding patches seems to be working well for me.\n\n> > I have just thought of a possible compromise. Peter is right that we\n> > don't want case conversion on table names that are extracted from\n> > catalogs. But I think we do want it on table names expressed as string\n> > literals. Could we make the assumption that table names in catalogs\n> > will be of type 'name'? If so, it'd work to make two versions of the\n> > has_table_privilege function, one taking type \"name\" and the other\n> > taking type \"text\". The \"name\" version would take its input as-is,\n> > the \"text\" version would do case folding and truncation. This would\n> > work transparently for queries selecting relation names from the system\n> > catalogs, and it'd also work transparently for queries using unmarked\n> > string literals (which will be preferentially resolved as type \"text\").\n> > Worst case if the system makes the wrong choice is you throw in an\n> > explicit coercion to name or text. Comments?\n> \n> OK -- here's take #5.\n> \n> It \"make\"s and \"make check\"s clean against current cvs tip.\n> \n> There are now both Text and Name variants, and the regression test support\n> is rolled into the patch. 
Note that to be complete wrt Name based variants,\n> there are now 12 user visible versions of has_table_privilege:\n> \n> has_table_privilege(Text usename, Text relname, Text priv_type)\n> has_table_privilege(Text usename, Name relname, Text priv_type)\n> has_table_privilege(Name usename, Text relname, Text priv_type)\n> has_table_privilege(Name usename, Name relname, Text priv_type)\n> has_table_privilege(Text relname, Text priv_type) /* assumes current_user */\n> has_table_privilege(Name relname, Text priv_type) /* assumes current_user */\n> has_table_privilege(Text usename, Oid reloid, Text priv_type)\n> has_table_privilege(Name usename, Oid reloid, Text priv_type)\n> has_table_privilege(Oid reloid, Text priv_type) /* assumes current_user */\n> has_table_privilege(Oid usesysid, Text relname, Text priv_type)\n> has_table_privilege(Oid usesysid, Name relname, Text priv_type)\n> has_table_privilege(Oid usesysid, Oid reloid, Text priv_type)\n> \n> For the Text based inputs, a new internal function, get_Name is used\n> (shamelessly copied from get_seq_name in sequence.c) to downcase if not\n> quoted, or remove quotes if quoted, and truncate. I also added a few test\n> cases for the downcasing, quote removal, and Name based variants to the\n> regression test.\n> \n> Only thing left (I hope!) is documentation. I'm sure I either have or can\n> get the DocBook tools, but I've never used them. Would it be simpler to\n> clone and hand edit one of the existing docs? Any suggestions to get me\n> started?\n> \n> Thanks,\n> \n> -- Joe\n> \n> \n> \n> \n> \n> \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Jun 2001 21:45:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Fw: Isn't pg_statistic a security hole -\n\tSolution Proposal"
},
{
"msg_contents": ">\n> I don't know about other people but the 48 hours notice email and the\n> web page of outstanding patches seems to be working well for me.\n>\n\nSorry -- my \"into the ether\" comment wasn't griping about you :-)\n\nI've been having serious problems with my \"mail.com\" account. I originally\nsent this post Saturday afternoon, but it didn't even make it to the\npgsql-patches list\nuntil Monday morning (and I only know that because I started reading the\ncomp.databases.postgresql.patches news feed).\n :(\n\nJoe\n\n\n\n",
"msg_date": "Mon, 11 Jun 2001 19:32:14 -0700",
"msg_from": "\"Joe Conway\" <joseph.conway@home.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "> >\n> > I don't know about other people but the 48 hours notice email and the\n> > web page of outstanding patches seems to be working well for me.\n> >\n> \n> Sorry -- my \"into the ether\" comment wasn't griping about you :-)\n> \n> I've been having serious problems with my \"mail.com\" account. I originally\n> sent this post Saturday afternoon, but it didn't even make it to the\n> pgsql-patches list\n> until Monday morning (and I only know that because I started reading the\n> comp.databases.postgresql.patches news feed).\n> :(\n\nNo, I didn't see any gripe. I was just saying I like making the\nannouncement, having it in the queue on a web page everyone can see, and\nlater applying it. Seems like a good procedure, and added because\npeople were complaining I was not consistently waiting for comments\nbefore applying patches.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 11 Jun 2001 22:34:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Fw: Isn't pg_statistic a security hole -\n\tSolution Proposal"
},
{
"msg_contents": "Tom Lane writes:\n\n> Could we make the assumption that table names in catalogs\n> will be of type 'name'?\n\nI wouldn't want to guarantee it for the information schema.\n\n> If so, it'd work to make two versions of the has_table_privilege\n> function, one taking type \"name\" and the other taking type \"text\".\n> The \"name\" version would take its input as-is, the \"text\" version\n> would do case folding and truncation.\n\nTarget typing is ugly.\n\nI've tried to look up the supposed \\paraphrase{we had enough problems\nbefore we added the existing behaviour to setval, etc.} discussion but\ncouldn't find it. My experience on the mailing list is that it goes the\nother way.\n\nThe identifier quoting rules are already surprising enough for the\nuninitiated, but it's even more surprising that they even apply when\nsyntactically no identifier is present. Between the behaviour of \"what\nyou see is what you get\" and \"this language is kind of confusing so you\nhave to quote your strings twice with two different quoting characters\"\nthe choice is obvious to me.\n\nI'm also arguing for consistency with the standard. According to that,\nusers will be able to do\n\nSELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'SoMeThInG';\n\nand a load of similar queries. You can't change the case folding rules\nhere unless you really want to go out of your way, and then you have\nreally confused the heck out of users.\n\nWe could make relname.func() work without breaking compatibility, ISTM,\nand then we only need the Oid version. For computing the relation name at\nexecution time, the \"plain\" version is going to be more useful anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 12 Jun 2001 18:01:23 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole\n\t- Solution Proposal"
},
{
"msg_contents": "> Tom Lane writes:\n> \n> > Could we make the assumption that table names in catalogs\n> > will be of type 'name'?\n> \n> I wouldn't want to guarantee it for the information schema.\n> \n> > If so, it'd work to make two versions of the has_table_privilege\n> > function, one taking type \"name\" and the other taking type \"text\".\n> > The \"name\" version would take its input as-is, the \"text\" version\n> > would do case folding and truncation.\n> \n> Target typing is ugly.\n> \n> I've tried to look up the supposed \\paraphrase{we had enough problems\n> before we added the existing behaviour to setval, etc.} discussion but\n> couldn't find it. My experience on the mailing list is that it goes the\n> other way.\n\nI am confused. What are you suggesting as far as having a name and text\nversion of the functions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 12 Jun 2001 12:15:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole -\n\tSolution Proposal"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Could we make the assumption that table names in catalogs\n>> will be of type 'name'?\n\n> I wouldn't want to guarantee it for the information schema.\n\nYour objections are not without merit, and in the interest of bringing\nthis thing to closure I'll concede for now. I want to get on with this\nso that I can wrap up the pg_statistic view that started the whole\nthread.\n\nWhat I suggest we do is apply the portions of Joe's latest patch that\nsupport has_table_privilege with OID inputs and with NAME inputs,\nomitting the combinations that take TEXT inputs and do casefolding.\nWe can add that part later if it proves that people do indeed want it.\n\nI have specific reasons for wanting to keep the functions accepting\nNAME rather than TEXT: that will save a run-time type conversion in the\ncommon case where one is reading the input from a system catalog, and\nit will at least provide automatic truncation of overlength names when\none is accepting a literal. (I trust Peter won't object to that ;-).)\n\nWe will probably have to revisit this territory when we implement\nschemas: there will need to be a way to input qualified table names\nlike foo.bar, and a way to input NON qualified names like \"foo.bar\".\nBut we can cross that bridge when we come to it.\n\nComments, objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Jun 2001 13:22:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "Tom Lane writes:\n\n> What I suggest we do is apply the portions of Joe's latest patch that\n> support has_table_privilege with OID inputs and with NAME inputs,\n> omitting the combinations that take TEXT inputs and do casefolding.\n> We can add that part later if it proves that people do indeed want it.\n\nOkay.\n\n> We will probably have to revisit this territory when we implement\n> schemas: there will need to be a way to input qualified table names\n> like foo.bar, and a way to input NON qualified names like \"foo.bar\".\n> But we can cross that bridge when we come to it.\n\nI figured we would add another argument to the function.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 13 Jun 2001 23:14:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole\n\t- Solution Proposal"
},
{
"msg_contents": "> What I suggest we do is apply the portions of Joe's latest patch that\n> support has_table_privilege with OID inputs and with NAME inputs,\n> omitting the combinations that take TEXT inputs and do casefolding.\n> We can add that part later if it proves that people do indeed want it.\n> \n> I have specific reasons for wanting to keep the functions accepting\n> NAME rather than TEXT: that will save a run-time type conversion in the\n> common case where one is reading the input from a system catalog, and\n> it will at least provide automatic truncation of overlength names when\n> one is accepting a literal. (I trust Peter won't object to that ;-).)\n> \n\nI'll rework the patch per the above and resend.\n\nThanks,\n\n-- Joe\n\n",
"msg_date": "Wed, 13 Jun 2001 18:19:52 -0700",
"msg_from": "\"Joe Conway\" <joseph.conway@home.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n> I'll rework the patch per the above and resend.\n\nToo late ;-). I just finished ripping out the unneeded parts and\napplying.\n\nI made a few minor changes too, mostly removing unnecessary code\n(you don't need to call nameout, everyone else just uses NameStr)\nand trying to line up stylistically with other code. One actual\nbug noted: you were doing this in a couple of places:\n\n+\ttuple = SearchSysCache(RELOID, ObjectIdGetDatum(reloid), 0, 0, 0);\n+\tif (!HeapTupleIsValid(tuple)) {\n+\t\telog(ERROR, \"has_table_privilege: invalid relation oid %d\", (int) reloid);\n+\t}\n+\n+\trelname = NameStr(((Form_pg_class) GETSTRUCT(tuple))->relname);\n+\n+\tReleaseSysCache(tuple);\n\nSince relname is just a pointer into the tuple, expecting it to still\nbe valid after you release the syscache entry is not kosher. There are\nseveral ways to deal with this, but what I actually did was to make use\nof lsyscache.c's get_rel_name, which pstrdup()s its result to avoid this\ntrap.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Jun 2001 21:27:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": ">\n> Too late ;-). I just finished ripping out the unneeded parts and\n> applying.\n\n\nThanks! I take it I still need to do the documentation though ;)\n\n\n>\n> I made a few minor changes too, mostly removing unnecessary code\n> (you don't need to call nameout, everyone else just uses NameStr)\n> and trying to line up stylistically with other code. One actual\n> bug noted: you were doing this in a couple of places:\n>\n\nOnce again, thanks for the \"important safety tips\". I saw references to this\ntrap in the comments, and therefore should have known better. I guess only\npractice makes perfect (hopefully!).\n\n-- Joe\n\n",
"msg_date": "Wed, 13 Jun 2001 18:37:00 -0700",
"msg_from": "\"Joe Conway\" <joseph.conway@home.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "\"Joe Conway\" <joseph.conway@home.com> writes:\n>> Too late ;-). I just finished ripping out the unneeded parts and\n>> applying.\n\n> Thanks! I take it I still need to do the documentation though ;)\n\nI put in a few words in func.sgml, but feel free to improve on it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Jun 2001 21:40:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Fw: Isn't pg_statistic a security hole - Solution\n\tProposal"
},
{
"msg_contents": "\nPatch applied by Tom for oid and Name versions.\n\n> > I have just thought of a possible compromise. Peter is right that we\n> > don't want case conversion on table names that are extracted from\n> > catalogs. But I think we do want it on table names expressed as string\n> > literals. Could we make the assumption that table names in catalogs\n> > will be of type 'name'? If so, it'd work to make two versions of the\n> > has_table_privilege function, one taking type \"name\" and the other\n> > taking type \"text\". The \"name\" version would take its input as-is,\n> > the \"text\" version would do case folding and truncation. This would\n> > work transparently for queries selecting relation names from the system\n> > catalogs, and it'd also work transparently for queries using unmarked\n> > string literals (which will be preferentially resolved as type \"text\").\n> > Worst case if the system makes the wrong choice is you throw in an\n> > explicit coercion to name or text. Comments?\n> \n> OK -- here's take #5.\n> \n> It \"make\"s and \"make check\"s clean against current cvs tip.\n> \n> There are now both Text and Name variants, and the regression test support\n> is rolled into the patch. 
Note that to be complete wrt Name based variants,\n> there are now 12 user visible versions of has_table_privilege:\n> \n> has_table_privilege(Text usename, Text relname, Text priv_type)\n> has_table_privilege(Text usename, Name relname, Text priv_type)\n> has_table_privilege(Name usename, Text relname, Text priv_type)\n> has_table_privilege(Name usename, Name relname, Text priv_type)\n> has_table_privilege(Text relname, Text priv_type) /* assumes current_user */\n> has_table_privilege(Name relname, Text priv_type) /* assumes current_user */\n> has_table_privilege(Text usename, Oid reloid, Text priv_type)\n> has_table_privilege(Name usename, Oid reloid, Text priv_type)\n> has_table_privilege(Oid reloid, Text priv_type) /* assumes current_user */\n> has_table_privilege(Oid usesysid, Text relname, Text priv_type)\n> has_table_privilege(Oid usesysid, Name relname, Text priv_type)\n> has_table_privilege(Oid usesysid, Oid reloid, Text priv_type)\n> \n> For the Text based inputs, a new internal function, get_Name is used\n> (shamelessly copied from get_seq_name in sequence.c) to downcase if not\n> quoted, or remove quotes if quoted, and truncate. I also added a few test\n> cases for the downcasing, quote removal, and Name based variants to the\n> regression test.\n> \n> Only thing left (I hope!) is documentation. I'm sure I either have or can\n> get the DocBook tools, but I've never used them. Would it be simpler to\n> clone and hand edit one of the existing docs? Any suggestions to get me\n> started?\n> \n> Thanks,\n> \n> -- Joe\n> \n> \n> \n> \n> \n> \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Jun 2001 22:52:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: Fw: Isn't pg_statistic a security hole -\n\tSolution Proposal"
}
]
[
{
"msg_contents": "> One more feature for discussion :-)\n> \n> In the next couple of hours (at least tomorrow) I would be\n> ready to commit the backend changes for table-/index-access\n> statistics and current backend activity views.\n> \n> Should I apply the patches or provide a separate patch for\n> review first?\n\nOne concern I remember from memory was, that the table names\ndid not conform to the system table semantics of \"pg_*\". (pgstat_*)\nHave you, or would you change that ?\n\nAndreas\n",
"msg_date": "Fri, 1 Jun 2001 09:43:03 +0200 ",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Access statistics"
}
]
[
{
"msg_contents": "\n> > AND expect it to do more than that. So a NOTICE at the\n> > actual usage, telling that x%TYPE for y got resolved to\n> > basetype z and will currently NOT follow later changes to x\n> > should do it.\n> \n> So if you could implement it like that, we will be VERY happy.\n\nI also like that approach.\n\nAndreas\n\n",
"msg_date": "Fri, 1 Jun 2001 09:55:54 +0200 ",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: Support for %TYPE in CREATE FUNCTION"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n\n> > > AND expect it to do more than that. So a NOTICE at the\n> > > actual usage, telling that x%TYPE for y got resolved to\n> > > basetype z and will currently NOT follow later changes to x\n> > > should do it.\n> > \n> > So if you could implement it like that, we will be VERY happy.\n> \n> I also like that approach.\n\nWell, if it helps, here is the patch again, with the NOTICE.\n\nIan\n\nIndex: doc/src/sgml/ref/create_function.sgml\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/create_function.sgml,v\nretrieving revision 1.23\ndiff -u -p -r1.23 create_function.sgml\n--- doc/src/sgml/ref/create_function.sgml\t2001/05/19 09:01:10\t1.23\n+++ doc/src/sgml/ref/create_function.sgml\t2001/06/01 16:52:56\n@@ -55,10 +55,16 @@ CREATE FUNCTION <replaceable class=\"para\n <listitem>\n <para>\n The data type(s) of the function's arguments, if any. The\n- input types may be base or complex types, or\n- <literal>opaque</literal>. <literal>Opaque</literal> indicates\n+ input types may be base or complex types,\n+ <literal>opaque</literal>, or the same as the type of an\n+ existing column. <literal>Opaque</literal> indicates\n that the function accepts arguments of a non-SQL type such as\n <type>char *</type>.\n+\tThe type of a column is indicated using <replaceable\n+\tclass=\"parameter\">tablename</replaceable>.<replaceable\n+\tclass=\"parameter\">columnname</replaceable><literal>%TYPE</literal>;\n+\tusing this can sometimes help make a function independent from\n+\tchanges to the definition of a table.\n </para>\n </listitem>\n </varlistentry>\n@@ -69,8 +75,10 @@ CREATE FUNCTION <replaceable class=\"para\n <listitem>\n <para>\n The return data type. The output type may be specified as a\n- base type, complex type, <literal>setof</literal> type, or\n- <literal>opaque</literal>. 
The <literal>setof</literal>\n+ base type, complex type, <literal>setof</literal> type,\n+ <literal>opaque</literal>, or the same as the type of an\n+ existing column.\n+ The <literal>setof</literal>\n modifier indicates that the function will return a set of\n items, rather than a single item. Functions with a declared\n return type of <literal>opaque</literal> do not return a value.\nIndex: src/backend/parser/analyze.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/analyze.c,v\nretrieving revision 1.187\ndiff -u -p -r1.187 analyze.c\n--- src/backend/parser/analyze.c\t2001/05/22 16:37:15\t1.187\n+++ src/backend/parser/analyze.c\t2001/06/01 16:52:58\n@@ -29,6 +29,7 @@\n #include \"parser/parse_relation.h\"\n #include \"parser/parse_target.h\"\n #include \"parser/parse_type.h\"\n+#include \"parser/parse_expr.h\"\n #include \"rewrite/rewriteManip.h\"\n #include \"utils/builtins.h\"\n #include \"utils/fmgroids.h\"\n@@ -51,7 +52,10 @@ static Node *transformSetOperationTree(P\n static Query *transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt);\n static Query *transformCreateStmt(ParseState *pstate, CreateStmt *stmt);\n static Query *transformAlterTableStmt(ParseState *pstate, AlterTableStmt *stmt);\n+static Node *transformTypeRefs(ParseState *pstate, Node *stmt);\n \n+static void transformTypeRefsList(ParseState *pstate, List *l);\n+static void transformTypeRef(ParseState *pstate, TypeName *tn);\n static List *getSetColTypes(ParseState *pstate, Node *node);\n static void transformForUpdate(Query *qry, List *forUpdate);\n static void transformFkeyGetPrimaryKey(FkConstraint *fkconstraint, Oid *pktypoid);\n@@ -232,6 +236,17 @@ transformStmt(ParseState *pstate, Node *\n \t\t\t\t\t\t\t\t\t\t\t (SelectStmt *) parseTree);\n \t\t\tbreak;\n \n+\t\t\t/*\n+\t\t\t * Convert use of %TYPE in statements where it is permitted.\n+\t\t\t */\n+\t\tcase T_ProcedureStmt:\n+\t\tcase 
T_CommentStmt:\n+\t\tcase T_RemoveFuncStmt:\n+\t\tcase T_DefineStmt:\n+\t\t\tresult = makeNode(Query);\n+\t\t\tresult->commandType = CMD_UTILITY;\n+\t\t\tresult->utilityStmt = transformTypeRefs(pstate, parseTree);\n+\t\t\tbreak;\n \n \t\tdefault:\n \n@@ -2686,6 +2701,107 @@ transformAlterTableStmt(ParseState *psta\n \t}\n \tqry->utilityStmt = (Node *) stmt;\n \treturn qry;\n+}\n+\n+/* \n+ * Transform uses of %TYPE in a statement.\n+ */\n+static Node *\n+transformTypeRefs(ParseState *pstate, Node *stmt)\n+{\n+\tswitch (nodeTag(stmt))\n+\t{\n+\t\tcase T_ProcedureStmt:\n+\t\t{\n+\t\t\tProcedureStmt *ps = (ProcedureStmt *) stmt;\n+\n+\t\t\ttransformTypeRefsList(pstate, ps->argTypes);\n+\t\t\ttransformTypeRef(pstate, (TypeName *) ps->returnType);\n+\t\t\ttransformTypeRefsList(pstate, ps->withClause);\n+\t\t}\n+\t\tbreak;\n+\n+\t\tcase T_CommentStmt:\n+\t\t{\n+\t\t\tCommentStmt\t *cs = (CommentStmt *) stmt;\n+\n+\t\t\ttransformTypeRefsList(pstate, cs->objlist);\n+\t\t}\n+\t\tbreak;\n+\n+\t\tcase T_RemoveFuncStmt:\n+\t\t{\n+\t\t\tRemoveFuncStmt *rs = (RemoveFuncStmt *) stmt;\n+\n+\t\t\ttransformTypeRefsList(pstate, rs->args);\n+\t\t}\n+\t\tbreak;\n+\n+\t\tcase T_DefineStmt:\n+\t\t{\n+\t\t\tDefineStmt *ds = (DefineStmt *) stmt;\n+\t\t\tList\t *ele;\n+\n+\t\t\tforeach(ele, ds->definition)\n+\t\t\t{\n+\t\t\t\tDefElem\t *de = (DefElem *) lfirst(ele);\n+\n+\t\t\t\tif (de->arg != NULL\n+\t\t\t\t\t&& IsA(de->arg, TypeName))\n+\t\t\t\t{\n+\t\t\t\t\ttransformTypeRef(pstate, (TypeName *) de->arg);\n+\t\t\t\t}\n+\t\t\t}\n+\t\t}\n+\t\tbreak;\n+\n+\t\tdefault:\n+\t\t\telog(ERROR, \"Unsupported type %d in transformTypeRefs\",\n+\t\t\t\t nodeTag(stmt));\n+\t\t\tbreak;\n+\t}\n+\n+\treturn stmt;\n+}\n+\n+/*\n+ * Transform uses of %TYPE in a list.\n+ */\n+static void\n+transformTypeRefsList(ParseState *pstate, List *l)\n+{\n+\tList\t *ele;\n+\n+\tforeach(ele, l)\n+\t{\n+\t\tif (IsA(lfirst(ele), TypeName))\n+\t\t\ttransformTypeRef(pstate, (TypeName *) lfirst(ele));\n+\t}\n+}\n+\n+/*\n+ * 
Transform a TypeName to not use %TYPE.\n+ */\n+static void\n+transformTypeRef(ParseState *pstate, TypeName *tn)\n+{\n+\tAttr *att;\n+\tNode *n;\n+\tVar\t *v;\n+\tchar *tyn;\n+\n+\tif (tn->attrname == NULL)\n+\t\treturn;\n+\tatt = makeAttr(tn->name, tn->attrname);\n+\tn = transformExpr(pstate, (Node *) att, EXPR_COLUMN_FIRST);\n+\tif (! IsA(n, Var))\n+\t\telog(ERROR, \"unsupported expression in %%TYPE\");\n+\tv = (Var *) n;\n+\ttyn = typeidTypeName(v->vartype);\n+\telog(NOTICE, \"%s.%s%%TYPE converted to %s\", tn->name, tn->attrname, tyn);\n+\ttn->name = tyn;\n+\ttn->typmod = v->vartypmod;\n+\ttn->attrname = NULL;\n }\n \n /* exported so planner can check again after rewriting, query pullup, etc */\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.227\ndiff -u -p -r2.227 gram.y\n--- src/backend/parser/gram.y\t2001/05/27 09:59:29\t2.227\n+++ src/backend/parser/gram.y\t2001/06/01 16:53:02\n@@ -192,7 +192,7 @@ static void doNegateFloat(Value *v);\n \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n \t\tselect_limit, opt_select_limit\n \n-%type <typnam>\tfunc_arg, func_return, aggr_argtype\n+%type <typnam>\tfunc_arg, func_return, func_type, aggr_argtype\n \n %type <boolean>\topt_arg, TriggerForOpt, TriggerForType, OptTemp\n \n@@ -2490,7 +2490,7 @@ func_args_list: func_arg\n \t\t\t\t{\t$$ = lappend($1, $3); }\n \t\t;\n \n-func_arg: opt_arg Typename\n+func_arg: opt_arg func_type\n \t\t\t\t{\n \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n@@ -2498,7 +2498,7 @@ func_arg: opt_arg Typename\n \t\t\t\t\t */\n \t\t\t\t\t$$ = $2;\n \t\t\t\t}\n-\t\t| Typename\n+\t\t| func_type\n \t\t\t\t{\n \t\t\t\t\t$$ = $1;\n \t\t\t\t}\n@@ -2526,7 +2526,7 @@ func_as: Sconst\n \t\t\t\t{ \t$$ = makeList2(makeString($1), makeString($3)); }\n 
\t\t;\n \n-func_return: Typename\n+func_return: func_type\n \t\t\t\t{\n \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n@@ -2536,6 +2536,18 @@ func_return: Typename\n \t\t\t\t}\n \t\t;\n \n+func_type:\tTypename\n+\t\t\t\t{\n+\t\t\t\t\t$$ = $1;\n+\t\t\t\t}\n+\t\t| IDENT '.' ColId '%' TYPE_P\n+\t\t\t\t{\n+\t\t\t\t\t$$ = makeNode(TypeName);\n+\t\t\t\t\t$$->name = $1;\n+\t\t\t\t\t$$->typmod = -1;\n+\t\t\t\t\t$$->attrname = $3;\n+\t\t\t\t}\n+\t\t;\n \n /*****************************************************************************\n *\nIndex: src/backend/parser/parse_expr.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\nretrieving revision 1.96\ndiff -u -p -r1.96 parse_expr.c\n--- src/backend/parser/parse_expr.c\t2001/05/21 18:42:08\t1.96\n+++ src/backend/parser/parse_expr.c\t2001/06/01 16:53:03\n@@ -942,6 +942,7 @@ parser_typecast_expression(ParseState *p\n char *\n TypeNameToInternalName(TypeName *typename)\n {\n+\tAssert(typename->attrname == NULL);\n \tif (typename->arrayBounds != NIL)\n \t{\n \nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.129\ndiff -u -p -r1.129 parsenodes.h\n--- src/include/nodes/parsenodes.h\t2001/05/21 18:42:08\t1.129\n+++ src/include/nodes/parsenodes.h\t2001/06/01 16:53:09\n@@ -951,6 +951,7 @@ typedef struct TypeName\n \tbool\t\tsetof;\t\t\t/* is a set? 
*/\n \tint32\t\ttypmod;\t\t\t/* type modifier */\n \tList\t *arrayBounds;\t/* array bounds */\n+\tchar\t *attrname;\t\t/* field name when using %TYPE */\n } TypeName;\n \n /*\nIndex: src/test/regress/input/create_function_2.source\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/create_function_2.source,v\nretrieving revision 1.12\ndiff -u -p -r1.12 create_function_2.source\n--- src/test/regress/input/create_function_2.source\t2000/11/20 20:36:54\t1.12\n+++ src/test/regress/input/create_function_2.source\t2001/06/01 16:53:18\n@@ -13,6 +13,12 @@ CREATE FUNCTION hobby_construct(text, te\n LANGUAGE 'sql';\n \n \n+CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n+ RETURNS hobbies_r.person%TYPE\n+ AS 'select person from hobbies_r where name = $1'\n+ LANGUAGE 'sql';\n+\n+\n CREATE FUNCTION equipment(hobbies_r)\n RETURNS setof equipment_r\n AS 'select * from equipment_r where hobby = $1.name'\nIndex: src/test/regress/input/misc.source\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/misc.source,v\nretrieving revision 1.14\ndiff -u -p -r1.14 misc.source\n--- src/test/regress/input/misc.source\t2000/11/20 20:36:54\t1.14\n+++ src/test/regress/input/misc.source\t2001/06/01 16:53:18\n@@ -214,6 +214,7 @@ SELECT user_relns() AS user_relns\n \n --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n \n+SELECT hobbies_by_name('basketball');\n \n --\n -- check that old-style C functions work properly with TOASTed values\nIndex: src/test/regress/output/create_function_2.source\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/create_function_2.source,v\nretrieving revision 1.13\ndiff -u -p -r1.13 create_function_2.source\n--- 
src/test/regress/output/create_function_2.source\t2000/11/20 20:36:54\t1.13\n+++ src/test/regress/output/create_function_2.source\t2001/06/01 16:53:18\n@@ -9,6 +9,12 @@ CREATE FUNCTION hobby_construct(text, te\n RETURNS hobbies_r\n AS 'select $1 as name, $2 as hobby'\n LANGUAGE 'sql';\n+CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n+ RETURNS hobbies_r.person%TYPE\n+ AS 'select person from hobbies_r where name = $1'\n+ LANGUAGE 'sql';\n+NOTICE: hobbies_r.name%TYPE converted to text\n+NOTICE: hobbies_r.person%TYPE converted to text\n CREATE FUNCTION equipment(hobbies_r)\n RETURNS setof equipment_r\n AS 'select * from equipment_r where hobby = $1.name'\nIndex: src/test/regress/output/misc.source\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/misc.source,v\nretrieving revision 1.27\ndiff -u -p -r1.27 misc.source\n--- src/test/regress/output/misc.source\t2000/11/20 20:36:54\t1.27\n+++ src/test/regress/output/misc.source\t2001/06/01 16:53:18\n@@ -656,6 +656,12 @@ SELECT user_relns() AS user_relns\n (90 rows)\n \n --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n+SELECT hobbies_by_name('basketball');\n+ hobbies_by_name \n+-----------------\n+ joe\n+(1 row)\n+\n --\n -- check that old-style C functions work properly with TOASTed values\n --\n",
"msg_date": "01 Jun 2001 09:59:07 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: [HACKERS] Re: Support for %TYPE in CREATE FUNCTION"
},
{
"msg_contents": "\nWhere are we on this? Tom is against it, Jan was initially against it,\nand I have counted 4-5 people who want it.\n\n\n> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> \n> > > > AND expect it to do more than that. So a NOTICE at the\n> > > > actual usage, telling that x%TYPE for y got resolved to\n> > > > basetype z and will currently NOT follow later changes to x\n> > > > should do it.\n> > > \n> > > So if you could implement it like that, we will be VERY happy.\n> > \n> > I also like that approach.\n> \n> Well, if it helps, here is the patch again, with the NOTICE.\n> \n> Ian\n> \n> Index: doc/src/sgml/ref/create_function.sgml\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/create_function.sgml,v\n> retrieving revision 1.23\n> diff -u -p -r1.23 create_function.sgml\n> --- doc/src/sgml/ref/create_function.sgml\t2001/05/19 09:01:10\t1.23\n> +++ doc/src/sgml/ref/create_function.sgml\t2001/06/01 16:52:56\n> @@ -55,10 +55,16 @@ CREATE FUNCTION <replaceable class=\"para\n> <listitem>\n> <para>\n> The data type(s) of the function's arguments, if any. The\n> - input types may be base or complex types, or\n> - <literal>opaque</literal>. <literal>Opaque</literal> indicates\n> + input types may be base or complex types,\n> + <literal>opaque</literal>, or the same as the type of an\n> + existing column. 
<literal>Opaque</literal> indicates\n> that the function accepts arguments of a non-SQL type such as\n> <type>char *</type>.\n> +\tThe type of a column is indicated using <replaceable\n> +\tclass=\"parameter\">tablename</replaceable>.<replaceable\n> +\tclass=\"parameter\">columnname</replaceable><literal>%TYPE</literal>;\n> +\tusing this can sometimes help make a function independent from\n> +\tchanges to the definition of a table.\n> </para>\n> </listitem>\n> </varlistentry>\n> @@ -69,8 +75,10 @@ CREATE FUNCTION <replaceable class=\"para\n> <listitem>\n> <para>\n> The return data type. The output type may be specified as a\n> - base type, complex type, <literal>setof</literal> type, or\n> - <literal>opaque</literal>. The <literal>setof</literal>\n> + base type, complex type, <literal>setof</literal> type,\n> + <literal>opaque</literal>, or the same as the type of an\n> + existing column.\n> + The <literal>setof</literal>\n> modifier indicates that the function will return a set of\n> items, rather than a single item. 
Functions with a declared\n> return type of <literal>opaque</literal> do not return a value.\n> Index: src/backend/parser/analyze.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/analyze.c,v\n> retrieving revision 1.187\n> diff -u -p -r1.187 analyze.c\n> --- src/backend/parser/analyze.c\t2001/05/22 16:37:15\t1.187\n> +++ src/backend/parser/analyze.c\t2001/06/01 16:52:58\n> @@ -29,6 +29,7 @@\n> #include \"parser/parse_relation.h\"\n> #include \"parser/parse_target.h\"\n> #include \"parser/parse_type.h\"\n> +#include \"parser/parse_expr.h\"\n> #include \"rewrite/rewriteManip.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/fmgroids.h\"\n> @@ -51,7 +52,10 @@ static Node *transformSetOperationTree(P\n> static Query *transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt);\n> static Query *transformCreateStmt(ParseState *pstate, CreateStmt *stmt);\n> static Query *transformAlterTableStmt(ParseState *pstate, AlterTableStmt *stmt);\n> +static Node *transformTypeRefs(ParseState *pstate, Node *stmt);\n> \n> +static void transformTypeRefsList(ParseState *pstate, List *l);\n> +static void transformTypeRef(ParseState *pstate, TypeName *tn);\n> static List *getSetColTypes(ParseState *pstate, Node *node);\n> static void transformForUpdate(Query *qry, List *forUpdate);\n> static void transformFkeyGetPrimaryKey(FkConstraint *fkconstraint, Oid *pktypoid);\n> @@ -232,6 +236,17 @@ transformStmt(ParseState *pstate, Node *\n> \t\t\t\t\t\t\t\t\t\t\t (SelectStmt *) parseTree);\n> \t\t\tbreak;\n> \n> +\t\t\t/*\n> +\t\t\t * Convert use of %TYPE in statements where it is permitted.\n> +\t\t\t */\n> +\t\tcase T_ProcedureStmt:\n> +\t\tcase T_CommentStmt:\n> +\t\tcase T_RemoveFuncStmt:\n> +\t\tcase T_DefineStmt:\n> +\t\t\tresult = makeNode(Query);\n> +\t\t\tresult->commandType = CMD_UTILITY;\n> +\t\t\tresult->utilityStmt = transformTypeRefs(pstate, parseTree);\n> +\t\t\tbreak;\n> \n> 
\t\tdefault:\n> \n> @@ -2686,6 +2701,107 @@ transformAlterTableStmt(ParseState *psta\n> \t}\n> \tqry->utilityStmt = (Node *) stmt;\n> \treturn qry;\n> +}\n> +\n> +/* \n> + * Transform uses of %TYPE in a statement.\n> + */\n> +static Node *\n> +transformTypeRefs(ParseState *pstate, Node *stmt)\n> +{\n> +\tswitch (nodeTag(stmt))\n> +\t{\n> +\t\tcase T_ProcedureStmt:\n> +\t\t{\n> +\t\t\tProcedureStmt *ps = (ProcedureStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, ps->argTypes);\n> +\t\t\ttransformTypeRef(pstate, (TypeName *) ps->returnType);\n> +\t\t\ttransformTypeRefsList(pstate, ps->withClause);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_CommentStmt:\n> +\t\t{\n> +\t\t\tCommentStmt\t *cs = (CommentStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, cs->objlist);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_RemoveFuncStmt:\n> +\t\t{\n> +\t\t\tRemoveFuncStmt *rs = (RemoveFuncStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, rs->args);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_DefineStmt:\n> +\t\t{\n> +\t\t\tDefineStmt *ds = (DefineStmt *) stmt;\n> +\t\t\tList\t *ele;\n> +\n> +\t\t\tforeach(ele, ds->definition)\n> +\t\t\t{\n> +\t\t\t\tDefElem\t *de = (DefElem *) lfirst(ele);\n> +\n> +\t\t\t\tif (de->arg != NULL\n> +\t\t\t\t\t&& IsA(de->arg, TypeName))\n> +\t\t\t\t{\n> +\t\t\t\t\ttransformTypeRef(pstate, (TypeName *) de->arg);\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tdefault:\n> +\t\t\telog(ERROR, \"Unsupported type %d in transformTypeRefs\",\n> +\t\t\t\t nodeTag(stmt));\n> +\t\t\tbreak;\n> +\t}\n> +\n> +\treturn stmt;\n> +}\n> +\n> +/*\n> + * Transform uses of %TYPE in a list.\n> + */\n> +static void\n> +transformTypeRefsList(ParseState *pstate, List *l)\n> +{\n> +\tList\t *ele;\n> +\n> +\tforeach(ele, l)\n> +\t{\n> +\t\tif (IsA(lfirst(ele), TypeName))\n> +\t\t\ttransformTypeRef(pstate, (TypeName *) lfirst(ele));\n> +\t}\n> +}\n> +\n> +/*\n> + * Transform a TypeName to not use %TYPE.\n> + */\n> +static void\n> 
+transformTypeRef(ParseState *pstate, TypeName *tn)\n> +{\n> +\tAttr *att;\n> +\tNode *n;\n> +\tVar\t *v;\n> +\tchar *tyn;\n> +\n> +\tif (tn->attrname == NULL)\n> +\t\treturn;\n> +\tatt = makeAttr(tn->name, tn->attrname);\n> +\tn = transformExpr(pstate, (Node *) att, EXPR_COLUMN_FIRST);\n> +\tif (! IsA(n, Var))\n> +\t\telog(ERROR, \"unsupported expression in %%TYPE\");\n> +\tv = (Var *) n;\n> +\ttyn = typeidTypeName(v->vartype);\n> +\telog(NOTICE, \"%s.%s%%TYPE converted to %s\", tn->name, tn->attrname, tyn);\n> +\ttn->name = tyn;\n> +\ttn->typmod = v->vartypmod;\n> +\ttn->attrname = NULL;\n> }\n> \n> /* exported so planner can check again after rewriting, query pullup, etc */\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.227\n> diff -u -p -r2.227 gram.y\n> --- src/backend/parser/gram.y\t2001/05/27 09:59:29\t2.227\n> +++ src/backend/parser/gram.y\t2001/06/01 16:53:02\n> @@ -192,7 +192,7 @@ static void doNegateFloat(Value *v);\n> \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n> \t\tselect_limit, opt_select_limit\n> \n> -%type <typnam>\tfunc_arg, func_return, aggr_argtype\n> +%type <typnam>\tfunc_arg, func_return, func_type, aggr_argtype\n> \n> %type <boolean>\topt_arg, TriggerForOpt, TriggerForType, OptTemp\n> \n> @@ -2490,7 +2490,7 @@ func_args_list: func_arg\n> \t\t\t\t{\t$$ = lappend($1, $3); }\n> \t\t;\n> \n> -func_arg: opt_arg Typename\n> +func_arg: opt_arg func_type\n> \t\t\t\t{\n> \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n> \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n> @@ -2498,7 +2498,7 @@ func_arg: opt_arg Typename\n> \t\t\t\t\t */\n> \t\t\t\t\t$$ = $2;\n> \t\t\t\t}\n> -\t\t| Typename\n> +\t\t| func_type\n> \t\t\t\t{\n> \t\t\t\t\t$$ = $1;\n> \t\t\t\t}\n> @@ -2526,7 +2526,7 @@ func_as: Sconst\n> \t\t\t\t{ \t$$ = 
makeList2(makeString($1), makeString($3)); }\n> \t\t;\n> \n> -func_return: Typename\n> +func_return: func_type\n> \t\t\t\t{\n> \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n> \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n> @@ -2536,6 +2536,18 @@ func_return: Typename\n> \t\t\t\t}\n> \t\t;\n> \n> +func_type:\tTypename\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = $1;\n> +\t\t\t\t}\n> +\t\t| IDENT '.' ColId '%' TYPE_P\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = makeNode(TypeName);\n> +\t\t\t\t\t$$->name = $1;\n> +\t\t\t\t\t$$->typmod = -1;\n> +\t\t\t\t\t$$->attrname = $3;\n> +\t\t\t\t}\n> +\t\t;\n> \n> /*****************************************************************************\n> *\n> Index: src/backend/parser/parse_expr.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\n> retrieving revision 1.96\n> diff -u -p -r1.96 parse_expr.c\n> --- src/backend/parser/parse_expr.c\t2001/05/21 18:42:08\t1.96\n> +++ src/backend/parser/parse_expr.c\t2001/06/01 16:53:03\n> @@ -942,6 +942,7 @@ parser_typecast_expression(ParseState *p\n> char *\n> TypeNameToInternalName(TypeName *typename)\n> {\n> +\tAssert(typename->attrname == NULL);\n> \tif (typename->arrayBounds != NIL)\n> \t{\n> \n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.129\n> diff -u -p -r1.129 parsenodes.h\n> --- src/include/nodes/parsenodes.h\t2001/05/21 18:42:08\t1.129\n> +++ src/include/nodes/parsenodes.h\t2001/06/01 16:53:09\n> @@ -951,6 +951,7 @@ typedef struct TypeName\n> \tbool\t\tsetof;\t\t\t/* is a set? 
*/\n> \tint32\t\ttypmod;\t\t\t/* type modifier */\n> \tList\t *arrayBounds;\t/* array bounds */\n> +\tchar\t *attrname;\t\t/* field name when using %TYPE */\n> } TypeName;\n> \n> /*\n> Index: src/test/regress/input/create_function_2.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/create_function_2.source,v\n> retrieving revision 1.12\n> diff -u -p -r1.12 create_function_2.source\n> --- src/test/regress/input/create_function_2.source\t2000/11/20 20:36:54\t1.12\n> +++ src/test/regress/input/create_function_2.source\t2001/06/01 16:53:18\n> @@ -13,6 +13,12 @@ CREATE FUNCTION hobby_construct(text, te\n> LANGUAGE 'sql';\n> \n> \n> +CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n> + RETURNS hobbies_r.person%TYPE\n> + AS 'select person from hobbies_r where name = $1'\n> + LANGUAGE 'sql';\n> +\n> +\n> CREATE FUNCTION equipment(hobbies_r)\n> RETURNS setof equipment_r\n> AS 'select * from equipment_r where hobby = $1.name'\n> Index: src/test/regress/input/misc.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/misc.source,v\n> retrieving revision 1.14\n> diff -u -p -r1.14 misc.source\n> --- src/test/regress/input/misc.source\t2000/11/20 20:36:54\t1.14\n> +++ src/test/regress/input/misc.source\t2001/06/01 16:53:18\n> @@ -214,6 +214,7 @@ SELECT user_relns() AS user_relns\n> \n> --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n> \n> +SELECT hobbies_by_name('basketball');\n> \n> --\n> -- check that old-style C functions work properly with TOASTed values\n> Index: src/test/regress/output/create_function_2.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/create_function_2.source,v\n> retrieving revision 1.13\n> diff -u -p -r1.13 
create_function_2.source\n> --- src/test/regress/output/create_function_2.source\t2000/11/20 20:36:54\t1.13\n> +++ src/test/regress/output/create_function_2.source\t2001/06/01 16:53:18\n> @@ -9,6 +9,12 @@ CREATE FUNCTION hobby_construct(text, te\n> RETURNS hobbies_r\n> AS 'select $1 as name, $2 as hobby'\n> LANGUAGE 'sql';\n> +CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n> + RETURNS hobbies_r.person%TYPE\n> + AS 'select person from hobbies_r where name = $1'\n> + LANGUAGE 'sql';\n> +NOTICE: hobbies_r.name%TYPE converted to text\n> +NOTICE: hobbies_r.person%TYPE converted to text\n> CREATE FUNCTION equipment(hobbies_r)\n> RETURNS setof equipment_r\n> AS 'select * from equipment_r where hobby = $1.name'\n> Index: src/test/regress/output/misc.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/misc.source,v\n> retrieving revision 1.27\n> diff -u -p -r1.27 misc.source\n> --- src/test/regress/output/misc.source\t2000/11/20 20:36:54\t1.27\n> +++ src/test/regress/output/misc.source\t2001/06/01 16:53:18\n> @@ -656,6 +656,12 @@ SELECT user_relns() AS user_relns\n> (90 rows)\n> \n> --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n> +SELECT hobbies_by_name('basketball');\n> + hobbies_by_name \n> +-----------------\n> + joe\n> +(1 row)\n> +\n> --\n> -- check that old-style C functions work properly with TOASTed values\n> --\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 2 Jun 2001 12:39:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: [HACKERS] Re: Support for %TYPE in CREATE FUNCTION"
},
{
"msg_contents": "\nBecause several people want this patch, Tom has withdrawn his\nobjection. Jan also stated that the elog(NOTICE) was good enough for\nhim.\n\nPatch applied.\n\n> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> \n> > > > AND expect it to do more than that. So a NOTICE at the\n> > > > actual usage, telling that x%TYPE for y got resolved to\n> > > > basetype z and will currently NOT follow later changes to x\n> > > > should do it.\n> > > \n> > > So if you could implement it like that, we will be VERY happy.\n> > \n> > I also like that approach.\n> \n> Well, if it helps, here is the patch again, with the NOTICE.\n> \n> Ian\n> \n> Index: doc/src/sgml/ref/create_function.sgml\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ref/create_function.sgml,v\n> retrieving revision 1.23\n> diff -u -p -r1.23 create_function.sgml\n> --- doc/src/sgml/ref/create_function.sgml\t2001/05/19 09:01:10\t1.23\n> +++ doc/src/sgml/ref/create_function.sgml\t2001/06/01 16:52:56\n> @@ -55,10 +55,16 @@ CREATE FUNCTION <replaceable class=\"para\n> <listitem>\n> <para>\n> The data type(s) of the function's arguments, if any. The\n> - input types may be base or complex types, or\n> - <literal>opaque</literal>. <literal>Opaque</literal> indicates\n> + input types may be base or complex types,\n> + <literal>opaque</literal>, or the same as the type of an\n> + existing column. 
<literal>Opaque</literal> indicates\n> that the function accepts arguments of a non-SQL type such as\n> <type>char *</type>.\n> +\tThe type of a column is indicated using <replaceable\n> +\tclass=\"parameter\">tablename</replaceable>.<replaceable\n> +\tclass=\"parameter\">columnname</replaceable><literal>%TYPE</literal>;\n> +\tusing this can sometimes help make a function independent from\n> +\tchanges to the definition of a table.\n> </para>\n> </listitem>\n> </varlistentry>\n> @@ -69,8 +75,10 @@ CREATE FUNCTION <replaceable class=\"para\n> <listitem>\n> <para>\n> The return data type. The output type may be specified as a\n> - base type, complex type, <literal>setof</literal> type, or\n> - <literal>opaque</literal>. The <literal>setof</literal>\n> + base type, complex type, <literal>setof</literal> type,\n> + <literal>opaque</literal>, or the same as the type of an\n> + existing column.\n> + The <literal>setof</literal>\n> modifier indicates that the function will return a set of\n> items, rather than a single item. 
Functions with a declared\n> return type of <literal>opaque</literal> do not return a value.\n> Index: src/backend/parser/analyze.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/analyze.c,v\n> retrieving revision 1.187\n> diff -u -p -r1.187 analyze.c\n> --- src/backend/parser/analyze.c\t2001/05/22 16:37:15\t1.187\n> +++ src/backend/parser/analyze.c\t2001/06/01 16:52:58\n> @@ -29,6 +29,7 @@\n> #include \"parser/parse_relation.h\"\n> #include \"parser/parse_target.h\"\n> #include \"parser/parse_type.h\"\n> +#include \"parser/parse_expr.h\"\n> #include \"rewrite/rewriteManip.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/fmgroids.h\"\n> @@ -51,7 +52,10 @@ static Node *transformSetOperationTree(P\n> static Query *transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt);\n> static Query *transformCreateStmt(ParseState *pstate, CreateStmt *stmt);\n> static Query *transformAlterTableStmt(ParseState *pstate, AlterTableStmt *stmt);\n> +static Node *transformTypeRefs(ParseState *pstate, Node *stmt);\n> \n> +static void transformTypeRefsList(ParseState *pstate, List *l);\n> +static void transformTypeRef(ParseState *pstate, TypeName *tn);\n> static List *getSetColTypes(ParseState *pstate, Node *node);\n> static void transformForUpdate(Query *qry, List *forUpdate);\n> static void transformFkeyGetPrimaryKey(FkConstraint *fkconstraint, Oid *pktypoid);\n> @@ -232,6 +236,17 @@ transformStmt(ParseState *pstate, Node *\n> \t\t\t\t\t\t\t\t\t\t\t (SelectStmt *) parseTree);\n> \t\t\tbreak;\n> \n> +\t\t\t/*\n> +\t\t\t * Convert use of %TYPE in statements where it is permitted.\n> +\t\t\t */\n> +\t\tcase T_ProcedureStmt:\n> +\t\tcase T_CommentStmt:\n> +\t\tcase T_RemoveFuncStmt:\n> +\t\tcase T_DefineStmt:\n> +\t\t\tresult = makeNode(Query);\n> +\t\t\tresult->commandType = CMD_UTILITY;\n> +\t\t\tresult->utilityStmt = transformTypeRefs(pstate, parseTree);\n> +\t\t\tbreak;\n> \n> 
\t\tdefault:\n> \n> @@ -2686,6 +2701,107 @@ transformAlterTableStmt(ParseState *psta\n> \t}\n> \tqry->utilityStmt = (Node *) stmt;\n> \treturn qry;\n> +}\n> +\n> +/* \n> + * Transform uses of %TYPE in a statement.\n> + */\n> +static Node *\n> +transformTypeRefs(ParseState *pstate, Node *stmt)\n> +{\n> +\tswitch (nodeTag(stmt))\n> +\t{\n> +\t\tcase T_ProcedureStmt:\n> +\t\t{\n> +\t\t\tProcedureStmt *ps = (ProcedureStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, ps->argTypes);\n> +\t\t\ttransformTypeRef(pstate, (TypeName *) ps->returnType);\n> +\t\t\ttransformTypeRefsList(pstate, ps->withClause);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_CommentStmt:\n> +\t\t{\n> +\t\t\tCommentStmt\t *cs = (CommentStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, cs->objlist);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_RemoveFuncStmt:\n> +\t\t{\n> +\t\t\tRemoveFuncStmt *rs = (RemoveFuncStmt *) stmt;\n> +\n> +\t\t\ttransformTypeRefsList(pstate, rs->args);\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tcase T_DefineStmt:\n> +\t\t{\n> +\t\t\tDefineStmt *ds = (DefineStmt *) stmt;\n> +\t\t\tList\t *ele;\n> +\n> +\t\t\tforeach(ele, ds->definition)\n> +\t\t\t{\n> +\t\t\t\tDefElem\t *de = (DefElem *) lfirst(ele);\n> +\n> +\t\t\t\tif (de->arg != NULL\n> +\t\t\t\t\t&& IsA(de->arg, TypeName))\n> +\t\t\t\t{\n> +\t\t\t\t\ttransformTypeRef(pstate, (TypeName *) de->arg);\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t}\n> +\t\tbreak;\n> +\n> +\t\tdefault:\n> +\t\t\telog(ERROR, \"Unsupported type %d in transformTypeRefs\",\n> +\t\t\t\t nodeTag(stmt));\n> +\t\t\tbreak;\n> +\t}\n> +\n> +\treturn stmt;\n> +}\n> +\n> +/*\n> + * Transform uses of %TYPE in a list.\n> + */\n> +static void\n> +transformTypeRefsList(ParseState *pstate, List *l)\n> +{\n> +\tList\t *ele;\n> +\n> +\tforeach(ele, l)\n> +\t{\n> +\t\tif (IsA(lfirst(ele), TypeName))\n> +\t\t\ttransformTypeRef(pstate, (TypeName *) lfirst(ele));\n> +\t}\n> +}\n> +\n> +/*\n> + * Transform a TypeName to not use %TYPE.\n> + */\n> +static void\n> 
+transformTypeRef(ParseState *pstate, TypeName *tn)\n> +{\n> +\tAttr *att;\n> +\tNode *n;\n> +\tVar\t *v;\n> +\tchar *tyn;\n> +\n> +\tif (tn->attrname == NULL)\n> +\t\treturn;\n> +\tatt = makeAttr(tn->name, tn->attrname);\n> +\tn = transformExpr(pstate, (Node *) att, EXPR_COLUMN_FIRST);\n> +\tif (! IsA(n, Var))\n> +\t\telog(ERROR, \"unsupported expression in %%TYPE\");\n> +\tv = (Var *) n;\n> +\ttyn = typeidTypeName(v->vartype);\n> +\telog(NOTICE, \"%s.%s%%TYPE converted to %s\", tn->name, tn->attrname, tyn);\n> +\ttn->name = tyn;\n> +\ttn->typmod = v->vartypmod;\n> +\ttn->attrname = NULL;\n> }\n> \n> /* exported so planner can check again after rewriting, query pullup, etc */\n> Index: src/backend/parser/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/gram.y,v\n> retrieving revision 2.227\n> diff -u -p -r2.227 gram.y\n> --- src/backend/parser/gram.y\t2001/05/27 09:59:29\t2.227\n> +++ src/backend/parser/gram.y\t2001/06/01 16:53:02\n> @@ -192,7 +192,7 @@ static void doNegateFloat(Value *v);\n> \t\tdef_list, opt_indirection, group_clause, TriggerFuncArgs,\n> \t\tselect_limit, opt_select_limit\n> \n> -%type <typnam>\tfunc_arg, func_return, aggr_argtype\n> +%type <typnam>\tfunc_arg, func_return, func_type, aggr_argtype\n> \n> %type <boolean>\topt_arg, TriggerForOpt, TriggerForType, OptTemp\n> \n> @@ -2490,7 +2490,7 @@ func_args_list: func_arg\n> \t\t\t\t{\t$$ = lappend($1, $3); }\n> \t\t;\n> \n> -func_arg: opt_arg Typename\n> +func_arg: opt_arg func_type\n> \t\t\t\t{\n> \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n> \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n> @@ -2498,7 +2498,7 @@ func_arg: opt_arg Typename\n> \t\t\t\t\t */\n> \t\t\t\t\t$$ = $2;\n> \t\t\t\t}\n> -\t\t| Typename\n> +\t\t| func_type\n> \t\t\t\t{\n> \t\t\t\t\t$$ = $1;\n> \t\t\t\t}\n> @@ -2526,7 +2526,7 @@ func_as: Sconst\n> \t\t\t\t{ \t$$ = 
makeList2(makeString($1), makeString($3)); }\n> \t\t;\n> \n> -func_return: Typename\n> +func_return: func_type\n> \t\t\t\t{\n> \t\t\t\t\t/* We can catch over-specified arguments here if we want to,\n> \t\t\t\t\t * but for now better to silently swallow typmod, etc.\n> @@ -2536,6 +2536,18 @@ func_return: Typename\n> \t\t\t\t}\n> \t\t;\n> \n> +func_type:\tTypename\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = $1;\n> +\t\t\t\t}\n> +\t\t| IDENT '.' ColId '%' TYPE_P\n> +\t\t\t\t{\n> +\t\t\t\t\t$$ = makeNode(TypeName);\n> +\t\t\t\t\t$$->name = $1;\n> +\t\t\t\t\t$$->typmod = -1;\n> +\t\t\t\t\t$$->attrname = $3;\n> +\t\t\t\t}\n> +\t\t;\n> \n> /*****************************************************************************\n> *\n> Index: src/backend/parser/parse_expr.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/parser/parse_expr.c,v\n> retrieving revision 1.96\n> diff -u -p -r1.96 parse_expr.c\n> --- src/backend/parser/parse_expr.c\t2001/05/21 18:42:08\t1.96\n> +++ src/backend/parser/parse_expr.c\t2001/06/01 16:53:03\n> @@ -942,6 +942,7 @@ parser_typecast_expression(ParseState *p\n> char *\n> TypeNameToInternalName(TypeName *typename)\n> {\n> +\tAssert(typename->attrname == NULL);\n> \tif (typename->arrayBounds != NIL)\n> \t{\n> \n> Index: src/include/nodes/parsenodes.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\n> retrieving revision 1.129\n> diff -u -p -r1.129 parsenodes.h\n> --- src/include/nodes/parsenodes.h\t2001/05/21 18:42:08\t1.129\n> +++ src/include/nodes/parsenodes.h\t2001/06/01 16:53:09\n> @@ -951,6 +951,7 @@ typedef struct TypeName\n> \tbool\t\tsetof;\t\t\t/* is a set? 
*/\n> \tint32\t\ttypmod;\t\t\t/* type modifier */\n> \tList\t *arrayBounds;\t/* array bounds */\n> +\tchar\t *attrname;\t\t/* field name when using %TYPE */\n> } TypeName;\n> \n> /*\n> Index: src/test/regress/input/create_function_2.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/create_function_2.source,v\n> retrieving revision 1.12\n> diff -u -p -r1.12 create_function_2.source\n> --- src/test/regress/input/create_function_2.source\t2000/11/20 20:36:54\t1.12\n> +++ src/test/regress/input/create_function_2.source\t2001/06/01 16:53:18\n> @@ -13,6 +13,12 @@ CREATE FUNCTION hobby_construct(text, te\n> LANGUAGE 'sql';\n> \n> \n> +CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n> + RETURNS hobbies_r.person%TYPE\n> + AS 'select person from hobbies_r where name = $1'\n> + LANGUAGE 'sql';\n> +\n> +\n> CREATE FUNCTION equipment(hobbies_r)\n> RETURNS setof equipment_r\n> AS 'select * from equipment_r where hobby = $1.name'\n> Index: src/test/regress/input/misc.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/input/misc.source,v\n> retrieving revision 1.14\n> diff -u -p -r1.14 misc.source\n> --- src/test/regress/input/misc.source\t2000/11/20 20:36:54\t1.14\n> +++ src/test/regress/input/misc.source\t2001/06/01 16:53:18\n> @@ -214,6 +214,7 @@ SELECT user_relns() AS user_relns\n> \n> --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n> \n> +SELECT hobbies_by_name('basketball');\n> \n> --\n> -- check that old-style C functions work properly with TOASTed values\n> Index: src/test/regress/output/create_function_2.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/create_function_2.source,v\n> retrieving revision 1.13\n> diff -u -p -r1.13 
create_function_2.source\n> --- src/test/regress/output/create_function_2.source\t2000/11/20 20:36:54\t1.13\n> +++ src/test/regress/output/create_function_2.source\t2001/06/01 16:53:18\n> @@ -9,6 +9,12 @@ CREATE FUNCTION hobby_construct(text, te\n> RETURNS hobbies_r\n> AS 'select $1 as name, $2 as hobby'\n> LANGUAGE 'sql';\n> +CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)\n> + RETURNS hobbies_r.person%TYPE\n> + AS 'select person from hobbies_r where name = $1'\n> + LANGUAGE 'sql';\n> +NOTICE: hobbies_r.name%TYPE converted to text\n> +NOTICE: hobbies_r.person%TYPE converted to text\n> CREATE FUNCTION equipment(hobbies_r)\n> RETURNS setof equipment_r\n> AS 'select * from equipment_r where hobby = $1.name'\n> Index: src/test/regress/output/misc.source\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/output/misc.source,v\n> retrieving revision 1.27\n> diff -u -p -r1.27 misc.source\n> --- src/test/regress/output/misc.source\t2000/11/20 20:36:54\t1.27\n> +++ src/test/regress/output/misc.source\t2001/06/01 16:53:18\n> @@ -656,6 +656,12 @@ SELECT user_relns() AS user_relns\n> (90 rows)\n> \n> --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name;\n> +SELECT hobbies_by_name('basketball');\n> + hobbies_by_name \n> +-----------------\n> + joe\n> +(1 row)\n> +\n> --\n> -- check that old-style C functions work properly with TOASTed values\n> --\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 4 Jun 2001 19:25:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: AW: [HACKERS] Re: Support for %TYPE in CREATE FUNCTION"
},
{
"msg_contents": "On Mon, 4 Jun 2001, Bruce Momjian wrote:\n\n> Because several people want this patch, Tom has withdrawn his\n> objection. Jan also stated that the elog(NOTICE) was good enough for\n> him.\n>\n> Patch applied.\n\nWonderful! Thank you all! Do you have any kind of ETA for when this\nfeature will be publicly available? Is this going to be included in 7.1.3\nor is it 7.2 stuff (just curious)?\n\nPascal.\n\n",
"msg_date": "Tue, 5 Jun 2001 11:07:03 +0200 (CEST)",
"msg_from": "Pascal Scheffers <pascal@scheffers.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Re: AW: Re: Support for %TYPE in CREATE FUNCTION"
},
{
"msg_contents": "On Tue, Jun 05, 2001 at 11:07:03AM +0200, Pascal Scheffers wrote:\n> On Mon, 4 Jun 2001, Bruce Momjian wrote:\n> \n> > Because several people want this patch, Tom has withdrawn his\n> > objection. Jan also stated that the elog(NOTICE) was good enough for\n> > him.\n> >\n> > Patch applied.\n> \n> Wonderful! Thank you all! Do you have any kind of ETA for when this\n> feature will be publicly available? Is this going to be included in 7.1.3\n> or is it 7.2 stuff (just curious)?\n\n\n I mean we're in 7.2 cycle -- into 7.1.x go bugfixes only.\n\n Karel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 5 Jun 2001 11:30:41 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCHES] Re: AW: Re: Support for %TYPE in CREATE FUNCTION"
}
] |
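As a quick illustration of the feature settled in the thread above (restating the patch's own regression-test example): the column reference is resolved to its base type once, at CREATE FUNCTION time, and a NOTICE reports the conversion; later changes to the column's type are deliberately not tracked.

```sql
-- From the regression test in the applied patch; in the regression
-- database, hobbies_r.name and hobbies_r.person are both of type text.
CREATE FUNCTION hobbies_by_name(hobbies_r.name%TYPE)
 RETURNS hobbies_r.person%TYPE
 AS 'select person from hobbies_r where name = $1'
 LANGUAGE 'sql';
-- NOTICE: hobbies_r.name%TYPE converted to text
-- NOTICE: hobbies_r.person%TYPE converted to text
```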
[
{
"msg_contents": "For what it's worth,\n\nOracle has a nice feature for resource management called PROFILEs:\n\nCREATE PROFILE profile LIMIT\n[ SESSIONS_PER_USER [ session_limit | UNLIMITED | DEFAULT ] ]\n[ CPU_PER_SESSION [ cpu_session_limit | UNLIMITED | DEFAULT ] ]\n[ CPU_PER_CALL [ cpu_call_limit | UNLIMITED | DEFAULT ] ]\n[ CONNECT_TIME [ connect_limit | UNLIMITED | DEFAULT ] ]\n[ IDLE_TIME [ idle_limit | UNLIMITED | DEFAULT ] ]\n[ LOGICAL_READS_PER_SESSION [ read_session_limit | UNLIMITED | DEFAULT ] ]\n[ LOGICAL_READS_PER_CALL [ read_call_limit | UNLIMITED | DEFAULT ] ]\n[ PRIVATE_SGA [ memory_limit | UNLIMITED | DEFAULT ] ]\n[ COMPOSITE_LIMIT [ resource_cost_limit | UNLIMITED | DEFAULT ] ]\n\nwhich limits things like CPU_PER_CALL and LOGICAL_READS_PER_SESSION \nto a profile. The ALTER USER command then allows you to assign a \nPROFILE to a user. This is really nice, since you can prevent \nrun-away queries from denying database service to other users. \nIt can also prevent a user from soaking up all of the available \nconnections. You must set a flag in your initSID.ora configuration \nprofile for ORACLE to support profiles. Since Jan is collecting these \nstatistics anyway (if the appropriate configuration flag is set), it \nwould be pretty trivial to implement PROFILEs in PostgreSQL.\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tTom Lane [SMTP:tgl@sss.pgh.pa.us]\n\nJan Wieck <JanWieck@Yahoo.com> writes:\n> So outing myself as not being a *real programmer*, this is what\n> I have so far:\n\nHmm ... what is the performance of all this like? Seems like a lot\nof overhead. Can it be turned off?\n\n> * Backends call some collector functions at various places\n> now (these will finally be macros), that count up table\n> scans, tuples returned by scans, buffer fetches/hits and\n> the like.\n\nHave you removed the existing stats-gathering support\n(backend/access/heap/stats.c and so on)? That would buy back\nat least a few of the cycles involved ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 1 Jun 2001 10:45:03 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": true,
"msg_subject": "RE: Access statistics "
}
] |
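Mike's PROFILE description above translates into Oracle usage along these lines (a sketch per Oracle's documented CREATE PROFILE / ALTER USER syntax; the profile and user names are invented, and the init-file flag he mentions is RESOURCE_LIMIT):

```sql
-- Oracle, not PostgreSQL: cap resource consumption per user.
CREATE PROFILE batch_users LIMIT
  SESSIONS_PER_USER          2
  CPU_PER_CALL               30000    -- hundredths of a second
  LOGICAL_READS_PER_SESSION  100000   -- data blocks
  IDLE_TIME                  30;      -- minutes

ALTER USER reports PROFILE batch_users;
```

A backend that already counts scans, tuples, and buffer fetches per session (as Jan's collector does) has most of the raw numbers such limits would need.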
[
{
"msg_contents": "I just got bit by the identifier name is too long and will be truncated \nlimitation in PostgreSQL.\n\nAFAIK there is a limit of 64 characters for identifiers (names of \ntables, sequences, indexes, etc...)\n\nI had just started to get in the habit of using serial data types until \nI made two tables with long names and the automatic sequence names that \nwere generated conflicted, *ouch* ...\n\nIs there the possibility of a name conflict resolution during the table \ncreation phase similar to \"the name I want to assign is already taken, \nso I'll pick a different name...\" on the serial data type?\n\n\n\n",
"msg_date": "Fri, 01 Jun 2001 10:15:42 -0500",
"msg_from": "Thomas Swan <tswan@olemiss.edu>",
"msg_from_op": true,
"msg_subject": "Feature request : Remove identifier length constraints"
},
{
"msg_contents": "> I had just started to get in the habit of using serial data types until\n> I made to tables with long names and the automatic sequence names that\n> were generated conflicted, *ouch* ...\n>\n> Is there the possibility of a name conflict resolution during the table\n> creation phase similar to \"the name I want to assign is already taken,\n> so I'll pick a different name...\" on the serial data type?\n\nI'll have a look, but it'll be a few weeks before I have a patch for it (as\nI'm doing other things at the moment)\n\nChris\n\n",
"msg_date": "Wed, 6 Jun 2001 09:58:05 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Feature request : Remove identifier length constraints"
}
] |
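The collision Thomas hit can be reproduced outside the backend. The sketch below is a hypothetical re-creation of how the server derives an implicit sequence name for a SERIAL column (trim the parts until `<table>_<column>_seq` fits the identifier limit; the table names and the helper name are invented for illustration, and the 64-character limit is the one quoted in the message above):

```python
NAMEDATALEN = 64  # identifier limit quoted above (a build-time constant)

def make_object_name(name1, name2, label, maxlen=NAMEDATALEN - 1):
    # Trim the longer part until "<name1>_<name2>_<label>" fits the limit.
    overhead = len(label) + 2  # two underscores plus the label
    while len(name1) + len(name2) + overhead > maxlen:
        if len(name1) >= len(name2):
            name1 = name1[:-1]
        else:
            name2 = name2[:-1]
    return f"{name1}_{name2}_{label}"

# Two distinct 69-character table names sharing a long prefix ...
t1 = "measurements_from_station_alpha_with_a_very_long_descriptive_tail_one"
t2 = "measurements_from_station_alpha_with_a_very_long_descriptive_tail_two"
s1 = make_object_name(t1, "id", "seq")
s2 = make_object_name(t2, "id", "seq")
# ... yield the same implicit sequence name, so the second CREATE fails.
assert t1 != t2 and s1 == s2
```

A conflict-resolution pass at CREATE TABLE time, as requested above, would have to probe the catalog for the truncated name and perturb it until it is unique.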
[
{
"msg_contents": "\n I'm running postgres 6.5.3 and 7.0.3 and pg_dump gives me the following\noutput:\n\nDROP TABLE \"genrenametable\";\nCREATE TABLE \"genrenametable\" (\n \"genreid\" int4,\n \"name\" character varying(128),\n \"parentgenre\" int4,\n \"enabled\" bool DEFAULT 'f' NOT NULL\n);\nREVOKE ALL on \"genrenametable\" from PUBLIC;\nGRANT SELECT on \"genrenametable\" to \"hammor\";\nGRANT SELECT on \"genrenametable\" to \"johnbr\";\nCOPY \"genrenametable\" FROM stdin;\n4115 80s Alt Hits 4096 t\n4138 New Wave Hits 4096 t\n4117 90s Alt Hits 4096 t\n...\n\n As you can guess, this will not successfully restore the table.\n\n Perhaps the REVOKE/GRANT statements can be moved to after the COPY. \n\n The fancy solution would be to make sure the table owner has\npermissions to do the COPY, and then revoke the permissions afterward if\nnecessary.\n",
"msg_date": "Fri, 01 Jun 2001 13:06:27 -0400",
"msg_from": "Robert Forsman <thoth@purplefrog.com>",
"msg_from_op": true,
"msg_subject": "bug in pg_dump with GRANT/REVOKE"
},
{
"msg_contents": "Off Topic post:\n\nThought some people might find this article interesting.\n\nhttp://www.zend.com/zend/art/databases.php\n\n\n\n",
"msg_date": "Fri, 1 Jun 2001 12:32:46 -0500",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Interesting Article"
},
{
"msg_contents": "I think there is really something weird about the Zend site - I use the \ncurrent IE on an NT machine, and every page loads, but then I have to wait \nabout 10 additional seconds before IE \"wakes up\" and I can click any links \nor go to a different page.\n\nI think it may have something to do with the DHTML menus... but I haven't \nreally looked into it.\n\n-r\n\nAt 09:18 PM 6/1/01 -0400, Bruce Momjian wrote:\n\n> > > Thought some people might find this article interesting.\n> > > http://www.zend.com/zend/art/databases.php\n> >\n> > The only interesting thing I noticed is how fast it crashes my\n> > Netscape-4.76 browser ;)\n>\n>Yours too? I turned off Java/Javascript to get it to load and I am on\n>BSD/OS. Strange it so univerally crashes.\n>\n>--\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://www.postgresql.org/search.mpl\n>\n>\n>\n>---\n>Incoming mail is certified Virus Free.\n>Checked by AVG anti-virus system (http://www.grisoft.com).\n>Version: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01\n\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.251 / Virus Database: 124 - Release Date: 4/26/01",
"msg_date": "Fri, 01 Jun 2001 22:08:07 +0100",
"msg_from": "Ryan Mahoney <ryan@paymentalliance.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "> Thought some people might find this article interesting.\n> http://www.zend.com/zend/art/databases.php\n\nThe only interesting thing I noticed is how fast it crashes my\nNetscape-4.76 browser ;)\n\n - Thomas\n",
"msg_date": "Fri, 01 Jun 2001 23:41:45 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Interesting Article"
},
{
"msg_contents": "> > Thought some people might find this article interesting.\n> > http://www.zend.com/zend/art/databases.php\n> \n> The only interesting thing I noticed is how fast it crashes my\n> Netscape-4.76 browser ;)\n\nYours too? I turned off Java/Javascript to get it to load and I am on\nBSD/OS. Strange it so universally crashes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 1 Jun 2001 21:18:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "> > Thought some people might find this article interesting.\n> > http://www.zend.com/zend/art/databases.php\n> The only interesting thing I noticed is how fast it crashes my\n> Netscape-4.76 browser ;)\n\nUse Konqueror :-)\n\nI can't tell if the results are biased, but it puts PostgreSQL way ahead of \nMySQL and Interchange in performance (if more than a couple of simultaneous \nusers) and most other features.\nOnly bad words are about lack of LO support.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Email: kar@webline.dk\n",
"msg_date": "Sat, 2 Jun 2001 09:45:44 +0200",
"msg_from": "Kaare Rasmussen <kar@webline.dk>",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "Thus spake Bruce Momjian\n> > > Thought some people might find this article interesting.\n> > > http://www.zend.com/zend/art/databases.php\n> > \n> > The only interesting thing I noticed is how fast it crashes my\n> > Netscape-4.76 browser ;)\n> \n> Yours too? I turned off Java/Javascript to get it to load and I am on\n> BSD/OS. Strange it so univerally crashes.\n\nInteresting. I run Opera and I generally find it fails on pages that no\none else has problems with but it didn't have any problem with this one.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 2 Jun 2001 06:16:46 -0400 (EDT)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "On Fri, 1 Jun 2001, Bruce Momjian wrote:\n\n> > > Thought some people might find this article interesting.\n> > > http://www.zend.com/zend/art/databases.php\n> >\n> > The only interesting thing I noticed is how fast it crashes my\n> > Netscape-4.76 browser ;)\n>\n> Yours too? I turned off Java/Javascript to get it to load and I am on\n> BSD/OS. Strange it so univerally crashes.\n\nReally odd. I have Java/Javascript with FreeBSD and Netscape 4.76 and\nread it just fine. One difference tho probably, I keep style sheets\nshut off. Netscape crashes about 1% as often as it used to.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sat, 2 Jun 2001 10:59:20 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "> > Yours too? I turned off Java/Javascript to get it to load and I am on\n> > BSD/OS. Strange it so univerally crashes.\n> \n> Really odd. I have Java/Javascript with FreeBSD and Netscape 4.76 and\n> read it just fine. One difference tho probably, I keep style sheets\n> shut off. Netscape crashes about 1% as often as it used to.\n\nI can confirm turning off stylesheets fixed the crash on my system too. \nVince, what disadvantages are there to keep stylesheets off?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 2 Jun 2001 12:35:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "On Sat, 2 Jun 2001, Bruce Momjian wrote:\n\n> > > Yours too? I turned off Java/Javascript to get it to load and I am on\n> > > BSD/OS. Strange it so univerally crashes.\n> >\n> > Really odd. I have Java/Javascript with FreeBSD and Netscape 4.76 and\n> > read it just fine. One difference tho probably, I keep style sheets\n> > shut off. Netscape crashes about 1% as often as it used to.\n>\n> I can confirm turning off stylesheets fixed the crash on my system too.\n> Vince, what disadvantages are there to keep stylesheets off?\n\nText placement and colors on some sites gets a bit out of whack. Datek's\nwebsite has text on top of other text, but still works. I trashed the\nstyle sheets on the PostgreSQL website long ago and never used them on\nother sites I write.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sat, 2 Jun 2001 12:56:57 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "On Sat, Jun 02, 2001 at 10:59:20AM -0400, Vince Vielhaber wrote:\n> On Fri, 1 Jun 2001, Bruce Momjian wrote:\n> \n> > > > Thought some people might find this article interesting.\n> > > > http://www.zend.com/zend/art/databases.php\n> > >\n> > > The only interesting thing I noticed is how fast it crashes my\n> > > Netscape-4.76 browser ;)\n> >\n> > Yours too? I turned off Java/Javascript to get it to load and I am on\n> > BSD/OS. Strange it so univerally crashes.\n> \n> Really odd. I have Java/Javascript with FreeBSD and Netscape 4.76 and\n> read it just fine. One difference tho probably, I keep style sheets\n> shut off. Netscape crashes about 1% as often as it used to.\n\nThis is getting off-topic, but ... \n\nI keep CSS, Javascript, Java, dynamic fonts, and images turned off, and\nNetscape 4.77 stays up for many weeks at a time. I also have no Flash \nplugin. All together it makes for a far more pleasant web experience.\n\nI didn't notice any problem with the Zend page.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Mon, 4 Jun 2001 12:13:49 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "> This is getting off-topic, but ... \n> \n> I keep CSS, Javascript, Java, dynamic fonts, and images turned off, and\n> Netscape 4.77 stays up for many weeks at a time. I also have no Flash \n> plugin. All together it makes for a far more pleasant web experience.\n> \n> I didn't notice any problem with the Zend page.\n\nYou are running no images! You may as well have Netscape minimized and\nsay it is running for weeks. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 4 Jun 2001 16:55:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
},
{
"msg_contents": "On Mon, Jun 04, 2001 at 04:55:13PM -0400, Bruce Momjian wrote:\n> > This is getting off-topic, but ... \n> > \n> > I keep CSS, Javascript, Java, dynamic fonts, and images turned off, and\n> > Netscape 4.77 stays up for many weeks at a time. I also have no Flash \n> > plugin. All together it makes for a far more pleasant web experience.\n> > \n> > I didn't notice any problem with the Zend page.\n> \n> You are running no images! You may as well have Netscape minimized and\n> say it is running for weeks. :-)\n\nOver 98% of the images on the web are either pr0n or wankage. \nIf you don't need to see that, you can save a lot of time.\n\nBut it's usually Javascript that crashes Netscape. (CSS appears to\nbe implemented using Javascript, because if you turn off Javascript,\nthen CSS stops working (and crashing).) That's not to say that Java \ndoesn't also crash Netscape; it's just that pages with Java in them \nare not very common.\n\nThere's little point in bookmarking a site that depends on client-side\nJavascript or Java, because it won't be up for very long.\n\nBut this is *really* off topic, now.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Mon, 4 Jun 2001 14:58:04 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: Interesting Article"
}
] |
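The reordering Robert asks for at the top of this thread is what a restorable dump has to look like: load the data while the owner still has full rights, then apply the ACLs. Sketched against his own example (rows abbreviated as in the original report):

```sql
DROP TABLE "genrenametable";
CREATE TABLE "genrenametable" (
 "genreid" int4,
 "name" character varying(128),
 "parentgenre" int4,
 "enabled" bool DEFAULT 'f' NOT NULL
);

COPY "genrenametable" FROM stdin;
4115	80s Alt Hits	4096	t
\.

-- Permissions are applied only after the data is loaded:
REVOKE ALL on "genrenametable" from PUBLIC;
GRANT SELECT on "genrenametable" to "hammor";
GRANT SELECT on "genrenametable" to "johnbr";
```

The "fancy" variant he suggests — temporarily granting the owner rights and revoking them afterward — is only needed if statements must keep their original order.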
[
{
"msg_contents": "\nhello all\n\nHow can i do dump and reload in all databases?\n\nthanks...\n",
"msg_date": "1 Jun 2001 20:29:27 -0000",
"msg_from": "\"gabriel\" <gabriel@workingnetsp.com.br>",
"msg_from_op": true,
"msg_subject": "dump+reload?"
}
] |
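For gabriel's question, the stock tools already cover dumping and reloading every database in an installation (this assumes a running postmaster and that you can connect as the database superuser; commands as documented for the 7.x releases):

```sh
# Dump all databases, plus users and permissions, into one SQL script:
pg_dumpall > all.out

# Reload it into a freshly initdb'ed installation:
psql -f all.out template1
```

Per-database dumps with `pg_dump dbname` work too, but pg_dumpall is the only one that also carries the global objects (users, groups) across.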
[
{
"msg_contents": "Well,\n\n got late and needa break before I make any mistakes now. Will\n send the access statistics diff to PATCHES on saturday.\n\n G'night yall.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 1 Jun 2001 19:17:37 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Sorry"
}
] |
[
{
"msg_contents": "Hello, All!\n\nI had PostgreSQL 7.0.3 (7.1 now) and one nice day I noticed that a large\nnumber of my BLOBs were broken! Although they seem to have good content\nin the file system (xinv[0-9]+ files) I was not able to get them via\nlo_export... After spending some time trying to fix it, I decided to write\nmy own xinv2plainfile converter. I hope if someone has the same troubles this\nconverter will help him.\nJust compile it, put it in the dir with your xinv[0-9]+ files and run.\nIt will create new files with name eq to BLOB id from the appropriate xinv.\n----------------------------xinv2plainfile.c-------------------------------\n#include <sys/types.h>\n#include <dirent.h>\n#include <stdio.h>\n#include <string.h>\n \n#define BLCKSIZE 8192\n#define HPT_LEN 40\n \n#define DEBUG\n//#undef DEBUG\n \ntypedef unsigned short uint16;\ntypedef unsigned int uint32;\n \ntypedef struct ItemIdData\n{\n unsigned lp_off:15,\n lp_flags:2,\n lp_len:15;\n} ItemIdData;\n\ntypedef struct PageHeaderData\n{\n uint16 pd_lower;\n uint16 pd_upper;\n uint16 pd_special;\n uint16 pd_opaque; //page size\n ItemIdData pd_linp[1];\n} PageHeaderData;\n \nint\nextract(const char * filename)\n{\n FILE * infile;\n FILE * outfile;\n ItemIdData linp;\n PageHeaderData* pg_head;\n char buff[BLCKSIZE];\n char data[BLCKSIZE];\n int tuple_no;\n\n //opening output file; if it is already present, overwrite it!\n if ((outfile = fopen(filename + 4, \"w\")) == NULL)\n return -1;\n \n //opening input file\n if ((infile = fopen(filename, \"r\")) == NULL)\n return -1;\n \n while (fread(buff, BLCKSIZE, 1, infile))\n {\n pg_head = (PageHeaderData*)buff;\n#ifdef DEBUG\n printf(\"Page data: pd_lower=%d, pd_upper=%d, pd_special=%d, pd_opaque=%d\\n\",\\\n pg_head->pd_lower, pg_head->pd_upper, pg_head->pd_special, pg_head->pd_opaque);\n#endif\n\n for(tuple_no = 0; pg_head->pd_linp[tuple_no].lp_len; ++tuple_no)\n {\n linp = pg_head->pd_linp[tuple_no];\n memcpy(data, buff + linp.lp_off + HPT_LEN, linp.lp_len - HPT_LEN);\n data[linp.lp_len - HPT_LEN] = 0;\n#ifdef DEBUG\n printf(\"Tuple %d: off=%d,\\tflags=%d,\\tlen=%d\\n\",\\\n tuple_no, linp.lp_off, linp.lp_flags, linp.lp_len);\n printf(\"Data:\\n%s\\n----------\\n\", data);\n#endif\n //write binary-safe: fprintf with %s would stop at the first NUL byte\n fwrite(data, 1, linp.lp_len - HPT_LEN, outfile);\n }\n }\n fclose(infile);\n fclose(outfile);\n return 0;\n}\n\nint\nmain(void)\n{\n DIR * curdir;\n struct dirent * curdirentry;\n \n //open current directory\n curdir = opendir(\".\");\n if (curdir == NULL)\n {\n printf(\"Cannot open curdir!!!\\n\");\n return -1;\n }\n\n //search through curdir for files 'xinv[0-9]+'\n while ((curdirentry = readdir(curdir)) != NULL)\n {\n if (strstr(curdirentry->d_name, \"xinv\") != curdirentry->d_name)\n continue;\n //found entry with name beginning with xinv.\n //let's hope this is what we are looking for :)\n printf(\"Trying to extract file '%s'... \", curdirentry->d_name);\n if (extract(curdirentry->d_name))\n printf(\"failed\\n\");\n else\n printf(\"succeeded\\n\");\n }\n \n return 0;\n}\n\n---------------------------------------------------------------------------\nWith Best Regards,\n Maks N. Polunin.\nBrainbench: http://www.brainbench.com/transcript.jsp?pid=111472\nICQ#:18265775\n\n",
"msg_date": "Sat, 2 Jun 2001 21:50:59 +0700",
"msg_from": "\"Maks N. Polunin\" <pmn836@cclib.nsu.ru>",
"msg_from_op": true,
"msg_subject": "large objects dump"
},
{
"msg_contents": "Hi,\n\n> I had PostgreSQL 7.0.3 (7.1 now) and one nice day I've noticed that much\n> number of my BLOBs are broken! Although they seems to be with good content\n> in file system (xinv[0-9]+ files) I was not able to get them via\n> lo_export... After spending some time trying to fix it, I decided to write\n> my own xinv2plainfile converter. I hope if someone has same troubles this\n> converter will help him.\n> Just compile it, put it in the dir with your xinv[0-9]+ files and run.\n> It will create new files with name eq to BLOB id from apropriate xinv.\n\nEither use 7.1.x, or apply my patch to 7.0.3. And you will have no such \nproblems at all. :-))\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n",
"msg_date": "Wed, 6 Jun 2001 12:07:36 +0700",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": false,
"msg_subject": "Re: large objects dump"
},
{
"msg_contents": "I have a cronjob that does a vacuumdb -a -z every night.\nWhen i came to work this morning i saw a lot of postgres processes hanging\non wait.\n\nThe last thing i see before it hangs is this:\n\n-------------------------------------------------------------\nNOTICE: --Relation pg_toast_1216--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0,\nKeep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail.\nSpace 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_toast_1216_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u sec.\nNOTICE: Analyzing...\n-------------------------------------------------------------\n\nis this a known problem?\nPostgres is version 7.1.1.\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n",
"msg_date": "Wed, 6 Jun 2001 15:59:13 +0200",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>",
"msg_from_op": false,
"msg_subject": "vacuumdb -a -z hangs"
},
{
"msg_contents": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net> writes:\n> I have a cronjob that does a vacuumdb -a -z every night.\n> When i came to work this morning i saw a lot of postgres processes hanging\n> on wait.\n\nSounds to me like you have an open transaction that is holding a lock\nthat everybody else needs. Very likely it's not the VACUUM that's at\nfault, at least not directly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Jun 2001 10:52:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb -a -z hangs "
}
] |
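Tom's diagnosis — a lock held by some other open transaction — can be checked from the shell on 7.x, since each backend advertises its state in its process title (output shape varies by platform; the user, database, and host below are invented for illustration):

```sh
ps auxww | grep ^postgres
# A blocked backend shows a trailing 'waiting' in its command line, e.g.
#   postgres: joe mydb 10.0.0.5 SELECT waiting
# Find the session that is NOT waiting but has left a transaction open,
# and end it; the vacuumdb backends will then proceed.
```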
[
{
"msg_contents": "I frequently rant on this list about full text searching.\n\nThe full text indexing package under contrib is an interesting approach. As anyone\nrolling their eyes at my frequent posts knows, I am also working on a full\ntext search system.\n\nThe two packages offer two very different approaches.\n\n\"/contrib/fulltextindex\" uses a table of word/oid pairs to find words, such\nthat a full text search query would look something like this:\n\nselect * from foo where fti_foo.string = 'word' and fti_foo.id = foo.oid;\n\n(or something like that)\n\nMine has, as yet, not been contributed. I am dubious that it will happen in\nthe near future for various reasons. However, it is designed around a\ncompletely different strategy; it is more like a search engine.\n\nLike the fti package, my system parses out a table into component words.\nHowever, rather than storing them as word/oid pairs, I use an external file format\nwhich stores a searchable dictionary of words. Each word is associated with a\nbitmap.\n\nThe bitmap contains a bit for each record. Multiple words in a query are parsed\nand the bitmaps retrieved. The bitmaps are then combined with boolean\noperations (and/or/not).\n\nEach bit (0...count) is associated with an integer key. The integer key can be\nan oid or some other unique table ID.\n\nBeing an external system, it is implemented using 'C' based extensions to\npostgres. A query on my system looks something like:\n\nselect * from table, (select ftss_direct('all { word1 word2 }') as id) as\nresult where table.id = result.id;\n\nThe advantage of fti is that, by being tied closely to Postgres, it updates\nautomatically on inserts and updates. The disadvantage is that it is relatively\nslow to perform complex searches.\n\nThe advantage of my system is that it is very fast; a loaded system can search\nthe bitmap index of roughly 4 words within 4 million records in about 20ms~50ms\ndepending on the popularity of the words. (Performance lags at startup) The\ndisadvantage of my system is that it is not designed to be very dynamic, it\nrelies extensively on the virtual memory management of the system (and is thus\nlimited), and not being tied too closely to PostgreSQL, has been somewhat\nfrustrating to get working as well as I would like.\n\nI would love to find a way to get a bitmap-like index native to Postgres. I\nwould, in fact, love and expect to do an amount of it. The problem is that to do it\n\"right\" will require a lot of work at very low levels of Postgres.\n\nIs anyone interested in pursuing this?\nHow should it look?\nHow deeply rooted in postgres does it need to be?\n",
"msg_date": "Sat, 02 Jun 2001 12:53:15 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Full text searching, anyone interested?"
},
{
"msg_contents": "Hi guys,\n\n\nOn Sat, 2 Jun 2001, mlw wrote:\n\n> I frequently rant on this about full text searching.\n> \n[snip]\n> I would love to find a way to get a bitmap like index native to Postgres. I\n> would, in fact, love and expect to do an amount of it. The problem is to do it\n> \"right\" will require a lot of work at very low levels of Postgres.\n> \n> Is anyone interested in pursuing this?\n\nYes. I think this would be an important feature of PostgreSQL. I have\nhacked contrib/fulltextindex to bits in order to segment the index so that\nI can better deploy it on a cluster of postgres machines. I have also\nchanged it to a score/rating style system. However, it is still a word ->\noid relationship and is not scaling as well as I had hoped.\n\n> How should it look?\n\nIn terms of interface to SQL, the function call which activates your FTI\nsearch is much neater than the way I do it -- build a query based on the\nnumber of search terms and the boolean operator used.\n\nIn terms of how it is interfaced to the Postgres backend, I think it should be\nan index type which one can apply to any character orientated\ncolumn(s). It would be important that the index be capable of handling\nmultiple columns so that many textual fields in a single row could be\nindexed.\n\nThe index itself is where troubles would be found. People expect a lot\nfrom full text searches. My own implementations allow chronological\nsearching, score based searching and searching on similar words. It would\nbe hard to interface this to CREATE INDEX as well as SELECT. So, if a\nnative full text index was to be built it would have to be able to:\n\na) Index multiple columns\nb) Be configurable: score/frequency based sorting, sorting in terms of a\ncolumn in an index row?\nc) Be interfaced to a user level fti() function\nd) Be ignored by the planner (if we want searches to occur only through\nfti())\ne) honour insert/delete.\n\nSomething else which is an issue is the size of the index. Indices on text\ncolumns are generally very large. In my applications I have managed to\nreduce this through segmenting the indices along the following lines: case\nsensitivity/insensitivity, leading characters. This dramatically reduces\nthe IO load of an index scan -- but it would be quite difficult to build\nthis into a dynamic framework for the backend. For one, a VACUUM or some\nequivalent would need to evaluate the size of a given index and, based on\nother configuration information (is the user allowing index\nsegmentation?) segment the index based on some criteria. The problem then\nis that for large indices, this could take quite some time.\n\nI've probably overlooked a fair number of other things which would need\nto be considered. However, it's safe to say that such a feature native to\nPostgres would be greatly appreciated by many.\n\nGavin\n\n",
"msg_date": "Sun, 3 Jun 2001 12:10:09 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Full text searching, anyone interested?"
},
{
"msg_contents": "Gavin Sherry wrote:\n> \n> Hi guys,\n> \n> On Sat, 2 Jun 2001, mlw wrote:\n> \n> > I frequently rant on this about full text searching.\n> >\n> [snip]\n> > I would love to find a way to get a bitmap like index native to Postgres. I\n> > would, in fact, love and expect to do an amount of it. The problem is to do it\n> > \"right\" will require a lot of work at very low levels of Postgres.\n> >\n> > Is anyone interested in pursuing this?\n> \n> Yes. I think this would be an important feature of PostgreSQL. I have\n> hacked contrib/fulltextindex to bits in order to segment the index so that\n> I can better deploy it on a cluster of postgres machines. I have also\n> changed it to a score/rating style system. However, it is still a word ->\n> oid relationship and is not scaling as well as I had hoped.\n> \n> > How should it look?\n> \n> In terms of interface to SQL, the function call which activates your FTI\n> search is much neater than the way I do it -- build a query based on the\n> number of search terms and the boolean operator used.\n> \n> In terms of how it is interfaced to Postgres backend, I think it should be\n> an index type which one can apply to any character orientated\n> columns(s). It would be important that the index capable of handling\n> multiple columns so that many textual fields in a single row could be\n> indexed.\n> \n> The index itself is where troubles would be found. People expect a lot\n> from full text searches. My own implementations allow chronological\n> searching, score based searching and searching on similar words. It would\n> be hard to interface this to CREATE INDEX as well as select. So, if a\n> native full text index was to be build it would have to be able to:\n> \n> a) Index multiple columns\n> b) Be configurable: score/frequency based sorting, sorting in terms of a\n> column in an index row?\n> c) Be interfaced to a user level fti() function\n> d) Be ignored by the planner (if we want searches to occur only through\n> fti())\n> e) honour insert/delete.\n> \n> Something else which is an issue is the size of the index. Indices on text\n> columns are generally very large. In my applications I have managed to\n> reduce this through segmenting the indices along the following lines: case\n> sensitivity/insensitivty, leading characters. This dramatically reduces\n> the IO load of an index scan -- but it would be quite difficult to build\n> this into a dynamic framework for the backend. For one, a VACUUM or some\n> equivalent would need to evaluate the size of a given index and, based on\n> other configuration information (is the user allowing index\n> segmentation?) segment the index based on some criteria. The problem then\n> is that for large indices, this could take quite some time.\n> \n> I've probably over looked a fair number of other things which would need\n> to be considered. However, it's safe to say that such a feature native to\n> Postgres would be greatly appreciated by many.\n> \n\nActually, I was sort of thinking.....\n\nWe could implement bitmap handling functions based on one-dimensional arrays of\nintegers. That's how my stuff deals with them, and postgres already manages\nthem. \n\ncreate table fubar_words (word text, bitmap int4 []);\n\nStore various info in the bitmap.\n\nThen we can create another table:\n\ncreate table fubar_oids(bit integer, id oid);\n\n\nIt would work much like my system, but would use postgres to manage all the\ndata. It would be less efficient than the stand alone external system, but\nperform much better than the fti package.\n\nHmmm. thinking. \n\n\n\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Sun, 03 Jun 2001 13:22:03 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Full text searching, anyone interested?"
},
{
"msg_contents": "> > > I would love to find a way to get a bitmap like index native to Postgres. I\n[skip]\n\n> We could implement bitmap handling functions based on one dimentional arrays of\n> integers. That's how my stuff deals with them, and postgres already manages\n> them.\n> \n\nlook at contrib/intarray. gist__intbig_ops is a variant of signature\ntree (from each array get bitmap signature).\n\nRegards,\n Teodor\n",
"msg_date": "Sun, 03 Jun 2001 23:03:26 +0400",
"msg_from": "Teodor <teodor@machaon.ru>",
"msg_from_op": false,
"msg_subject": "Re: Re: Full text searching, anyone interested?"
}
] |
[
{
"msg_contents": "I know we're not in the business of copying mySQL,\nbut the REPLACE INTO table (...) values (...) could be\na useful semantic. This is a combination INSERT or\nUPDATE statement. For one thing, it is atomic, and\neasier to work with at the application level. Also\nif the application doesn't care about previous values,\nthen execution has fewer locking issues and race\nconditions.\n\ncomments?\n\nDale Johnson\n\n\n",
"msg_date": "Mon, 04 Jun 2001 03:15:03 GMT",
"msg_from": "\"Dale Johnson\" <djohnson@mi.ab.ca>",
"msg_from_op": true,
"msg_subject": "REPLACE INTO table a la mySQL"
},
{
"msg_contents": "Dale Johnson wrote:\n> \n> I know we're not in the business of copying mySQL,\n> but the REPLACE INTO table (...) values (...) could be\n> a useful semantic. This is a combination INSERT or\n> UPDATE statement. For one thing, it is atomic, and\n> easier to work with at the application level. Also\n> if the application doesn't care about previous values,\n> then execution has fewer locking issues and race\n> conditions.\n> \n> comments?\n> \n> Dale Johnson\n\nI don't know if it is standard SQL, but it will save hundreds of lines of code\nin applications everywhere. I LOVE the idea. I just finished writing a database\nmerge/update program which could have been made much easier to write with this\nsyntax.\n",
"msg_date": "Tue, 05 Jun 2001 18:26:22 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "> > I know we're not in the business of copying mySQL,\n> > but the REPLACE INTO table (...) values (...) could be\n> > a useful semantic. This is a combination INSERT or\n> > UPDATE statement. For one thing, it is atomic, and\n> > easier to work with at the application level. Also\n> > if the application doesn't care about previous values,\n> > then execution has fewer locking issues and race\n> > conditions.\n>\n> I don't know if it is standard SQL, but it will save hundreds of\n> lines of code\n> in applications everywhere. I LOVE the idea. I just finished\n> writing a database\n> merge/update program which could have been made much easier to\n> write with this\n> syntax.\n\nThe reason MySQL probably has it though is because it doesn't support proper\ntransactions.\n\nWhile we're at it, why not support the MySQL alternate INSERT syntax\n(rhetorical):\n\nINSERT INTO table SET field1='value1', field2='value2';\n\n...\n\nChris\n\n",
"msg_date": "Wed, 6 Jun 2001 10:10:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > > I know we're not in the business of copying mySQL,\n> > > but the REPLACE INTO table (...) values (...) could be\n> > > a useful semantic. This is a combination INSERT or\n> > > UPDATE statement. For one thing, it is atomic, and\n> > > easier to work with at the application level. Also\n> > > if the application doesn't care about previous values,\n> > > then execution has fewer locking issues and race\n> > > conditions.\n> >\n> > I don't know if it is standard SQL, but it will save hundreds of\n> > lines of code\n> > in applications everywhere. I LOVE the idea. I just finished\n> > writing a database\n> > merge/update program which could have been made much easier to\n> > write with this\n> > syntax.\n> \n> The reason MySQL probably has it though is because it doesn't support proper\n> transactions.\n> \n> While we're at it, why not support the MySQL alternate INSERT syntax\n> (rehetorical):\n> \n> INSERT INTO table SET field1='value1', field2='value2';\n\nThat is not an issue, but a \"REPLACE\" syntax can take the place of this:\n\nSQL(\"select * from table where ID = fubar\");\n\nif(HAS_VALUES(SQL))\n\tSQL(\"update table set xx=yy, www=zz where ID = fubar\");\nelse\n\tSQL(\"insert into table (...) values (...)\");\n\n\nREPLACE into table set xx=yy, ww = zz where ID = fubar;\n\nA MUCH better solution!\n",
"msg_date": "Tue, 05 Jun 2001 22:26:44 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: REPLACE INTO table a la mySQL"
},
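The SELECT-then-UPDATE-or-INSERT pattern sketched in the message above can be compressed to an UPDATE-then-INSERT pair inside one transaction. The sketch below uses SQLite purely for illustration; the table, column, and function names are invented, and a concurrent setup would still have to cope with the race between the failed UPDATE and the INSERT:

```python
import sqlite3

def upsert(conn, key, value):
    # Try the UPDATE first; if it matched no row, fall back to INSERT.
    # This compresses the application-level SELECT/UPDATE/INSERT
    # pattern from the message above into two statements.
    cur = conn.execute("UPDATE t SET val = ? WHERE id = ?", (value, key))
    if cur.rowcount == 0:
        conn.execute("INSERT INTO t (id, val) VALUES (?, ?)", (key, value))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
upsert(conn, 1, "first")   # no matching row yet: takes the INSERT path
upsert(conn, 1, "second")  # row exists now: takes the UPDATE path
print(conn.execute("SELECT val FROM t WHERE id = 1").fetchone()[0])  # second
```

Note the wrapping transaction only makes the pair atomic with respect to other writers under sufficient isolation; under weak isolation two sessions can still both take the INSERT path, which is exactly the concurrency problem raised later in the thread.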
{
"msg_contents": "Dale Johnson wrote:\n> I know we're not in the business of copying mySQL,\n> but the REPLACE INTO table (...) values (...) could be\n> a useful semantic. This is a combination INSERT or\n> UPDATE statement. For one thing, it is atomic, and\n> easier to work with at the application level. Also\n> if the application doesn't care about previous values,\n> then execution has fewer locking issues and race\n> conditions.\n>\n> comments?\n\n First it's not standard SQL, so chances aren't that good.\n Second, how do you think the system should behave in the\n following case:\n\n * Table A has one trigger BEFORE INSERT doing some checks\n plus inserting a row into table newA and updating a row in\n table balanceA. It also has triggers BEFORE UPDATE and\n BEFORE DELETE that update balanceA.\n\n * Now we do your REPLACE INTO\n\n The problem is that in a concurrent multiuser environment you\n cannot know if that row exists until you actually do the\n insert (unless you lock the entire table and check first).\n Since there's a BEFORE trigger which potentially could\n suppress the INSERT, you can't do the insert before firing\n it. Now it has been run, did its inserts and updates, and the\n statement must be converted into an UPDATE because the row\n exists - how do you undo the trigger work?\n\n I know, mySQL doesn't have triggers, referential integrity\n and all that damned complicated stuff. That's why it can have\n such a powerful non-standard command like REPLACE INTO.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 6 Jun 2001 11:00:11 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "mlw wrote:\n> [...]\n> REPLACE into table set xx=yy, ww = zz where ID = fubar;\n>\n> A MUCH better solution!\n\n Please solve the trigger problem at least theoretical before\n claiming that mySQL is that MUCH better. And please don't\n solve it by ripping out trigger support :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 6 Jun 2001 11:06:39 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "Jan Wieck wrote:\n\n> Dale Johnson wrote:\n> > I know we're not in the business of copying mySQL,\n> > but the REPLACE INTO table (...) values (...) could be\n> > a useful semantic. This is a combination INSERT or\n> > UPDATE statement. For one thing, it is atomic, and\n> > easier to work with at the application level. Also\n> > if the application doesn't care about previous values,\n> > then execution has fewer locking issues and race\n> > conditions.\n> >\n> > comments?\n>\n> First it's not standard SQL, so chances aren't that good.\n> Second, how do you think the system should behave in the\n> following case:\n>\n> * Table A has one trigger BEFORE INSERT doing some checks\n> plus inserting a row into table newA and updating a row in\n> table balanceA. It also has triggers BEFORE UPDATE and\n> BEFORE DELETE that update balanceA.\n>\n> * Now we do your REPLACE INTO\n>\n> The problem is that in a concurrent multiuser environment you\n> cannot know if that row exists until you actually do the\n> insert (except you lock the entire table and check for).\n> Since there's a BEFORE trigger which potentially could\n> suppress the INSERT, you can't do the insert before fireing\n> it. Now it has been run, did it's inserts and updates and the\n> statement must be converted into an UPDATE because the row\n> exists - how do you undo the trigger work?\n>\n> I know, mySQL doesn't have triggers, referential integrity\n> and all that damned complicated stuff. That's why it can have\n> such a powerful non-standard command like REPLACE INTO.\n>\n> Jan\n\nPerhaps it is as easy as saying that this feature is a non-standard\nextension to SQL, thus a non-standard trigger mechanism is used.\n\nThe trigger will be on the statement replace. The trigger function will\ncarry with it the tuple, and the previous one if one exists.\n\ncreate trigger my_trigger before update or insert or delete or replace\n\n\n",
"msg_date": "Wed, 06 Jun 2001 16:57:35 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "\n\"Jan Wieck\" <JanWieck@Yahoo.com> wrote in message\nnews:200106061506.f56F6dV01843@jupiter.us.greatbridge.com...\n> mlw wrote:\n> > [...]\n> > REPLACE into table set xx=yy, ww = zz where ID = fubar;\n> >\n> > A MUCH better solution!\n>\n> Please solve the trigger problem at least theoretical before\n> claiming that mySQL is that MUCH better. And please don't\n> solve it by ripping out trigger support :-)\n>\nfor INSERT OR REPLACE into table ...\nif the record was not there, fire the insert trigger\nelse\n delete the row (fire delete trigger)\n insert the new row (fire the insert trigger)\nfi\n\nsemantically no other way, I think\n\nDale.\n\n\n",
"msg_date": "Fri, 08 Jun 2001 09:19:49 GMT",
"msg_from": "\"Dale Johnson\" <djohnson@mi.ab.ca>",
"msg_from_op": true,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
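The firing order Dale proposes in his pseudocode above (insert trigger alone for a fresh key; delete trigger then insert trigger when the key already exists) can be modeled in miniature. This is an illustrative sketch of the proposed semantics only, not PostgreSQL's trigger machinery, and all names are invented:

```python
# Illustrative model of the proposed REPLACE trigger firing order.

def replace_into(table, triggers, key, row):
    """If the key is absent, just insert (firing the insert trigger);
    otherwise fire the delete trigger, drop the old row, then insert
    the new one (firing the insert trigger).  Returns the list of
    triggers fired, in order."""
    fired = []
    if key in table:
        triggers["delete"](table[key])
        fired.append("delete")
        del table[key]
    table[key] = row
    triggers["insert"](row)
    fired.append("insert")
    return fired

log = []
triggers = {"insert": lambda r: log.append(("insert", r)),
            "delete": lambda r: log.append(("delete", r))}
t = {}
print(replace_into(t, triggers, 1, "a"))  # ['insert']
print(replace_into(t, triggers, 1, "b"))  # ['delete', 'insert']
```

The model also shows what the rest of the thread argues over: every existing delete and insert trigger fires with its usual meaning, so no new trigger event type is needed, but a BEFORE trigger that changes the key cannot be represented in this simple dict-based picture.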
{
"msg_contents": "Dale Johnson wrote:\n\n> \"Jan Wieck\" <JanWieck@Yahoo.com> wrote in message\n> news:200106061506.f56F6dV01843@jupiter.us.greatbridge.com...\n> > mlw wrote:\n> > > [...]\n> > > REPLACE into table set xx=yy, ww = zz where ID = fubar;\n> > >\n> > > A MUCH better solution!\n> >\n> > Please solve the trigger problem at least theoretical before\n> > claiming that mySQL is that MUCH better. And please don't\n> > solve it by ripping out trigger support :-)\n> >\n> for INSERT OR REPLACE into table ...\n> if the record was not there, fire the insert trigger\n> else\n> delete the row (fire delete trigger)\n> insert the new row (fire the insert trigger)\n> fi\n>\n> semantically no other way, I think\n\nI'm not sure I agree. There are explicit triggers for update, insert, and\ndelete, therefore why not also have a trigger for replace? It is one more\ncase. Rather than try to figure out how to map replace into two distinct\nbehaviors of insert or update based on some conditional logic, why not just\nhave a replace trigger?\n\n\n\n",
"msg_date": "Mon, 11 Jun 2001 11:52:28 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "mlw wrote:\n> Dale Johnson wrote:\n>\n> > \"Jan Wieck\" <JanWieck@Yahoo.com> wrote in message\n> > news:200106061506.f56F6dV01843@jupiter.us.greatbridge.com...\n> > > mlw wrote:\n> > > > [...]\n> > > > REPLACE into table set xx=yy, ww = zz where ID = fubar;\n> > > >\n> > > > A MUCH better solution!\n> > >\n> > > Please solve the trigger problem at least theoretical before\n> > > claiming that mySQL is that MUCH better. And please don't\n> > > solve it by ripping out trigger support :-)\n> > >\n> > for INSERT OR REPLACE into table ...\n> > if the record was not there, fire the insert trigger\n> > else\n> > delete the row (fire delete trigger)\n> > insert the new row (fire the insert trigger)\n> > fi\n> >\n> > semantically no other way, I think\n>\n> I'm not sure I agree. There are explicit triggers for update, insert, and\n> delete, therefore why not also have a trigger for replace? It is one more\n> case. Rather than try to figure out how to map replace into two distinct\n> behaviors of insert or update based on some conditional logic, why not just\n> have a replace trigger?\n\n Adding another trigger event type will break every existing\n DB schema that relies on custom triggers to ensure logical\n data integrity. Thus it is unacceptable as a solution to\n support a non-standard feature - period.\n\n The question \"does this row exist\" can only be answered by\n looking at the primary key. Now BEFORE triggers are allowed\n to alter the key attributes, so the final primary key isn't\n known before they are executed.\n\n Thus the DELETE then INSERT semantic might be the only way.\n Pretty heavy restriction, making the entire REPLACE INTO\n somewhat useless IMHO.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 11 Jun 2001 16:44:58 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> mlw wrote:\n> > Dale Johnson wrote:\n> >\n> > > \"Jan Wieck\" <JanWieck@Yahoo.com> wrote in message\n> > > news:200106061506.f56F6dV01843@jupiter.us.greatbridge.com...\n> > > > mlw wrote:\n> > > > > [...]\n> > > > > REPLACE into table set xx=yy, ww = zz where ID = fubar;\n> > > > >\n> > > > > A MUCH better solution!\n> > > >\n> > > > Please solve the trigger problem at least theoretical before\n> > > > claiming that mySQL is that MUCH better. And please don't\n> > > > solve it by ripping out trigger support :-)\n> > > >\n> > > for INSERT OR REPLACE into table ...\n> > > if the record was not there, fire the insert trigger\n> > > else\n> > > delete the row (fire delete trigger)\n> > > insert the new row (fire the insert trigger)\n> > > fi\n> > >\n> > > semantically no other way, I think\n> >\n> > I'm not sure I agree. There are explicit triggers for update, insert, and\n> > delete, therefore why not also have a trigger for replace? It is one more\n> > case. Rather than try to figure out how to map replace into two distinct\n> > behaviors of insert or update based on some conditional logic, why not just\n> > have a replace trigger?\n> \n> Adding another trigger event type will break every existing\n> DB schema that relies on custom triggers to ensure logical\n> data integrity. Thus it is unacceptable as a solution to\n> support a non-standard feature - period.\n> \n> The question \"does this row exist\" can only be answered by\n> looking at the primary key. Now BEFORE triggers are allowed\n> to alter the key attributes, so the final primary key isn't\n> known before they are executed.\n> \n> Thus the DELETE then INSERT semantic might be the only way.\n> Pretty heavy restriction, making the entire REPLACE INTO\n> somewhat useless IMHO.\n\nThe only issue I have with your conclusion about DB schema is that REPLACE is\nnot part of standard SQL, so we do not need to be too concerned. Just give them a\nREPLACE trigger and be done with it. 
If that isn't good enough, in the FAQ, say\nthat the standard way is insert or update.\n",
"msg_date": "Mon, 11 Jun 2001 18:42:41 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "On Mon, 11 Jun 2001, mlw wrote:\n\n> > Adding another trigger event type will break every existing\n> > DB schema that relies on custom triggers to ensure logical\n> > data integrity. Thus it is unacceptable as solution to\n> > support a non-standard feature - period.\n> > \n> > The question \"does this row exist\" can only be answered by\n> > looking at the primary key. Now BEFORE triggers are allowed\n> > to alter the key attributes, so the final primary key isn't\n> > known before they are executed.\n> > \n> > Thus the DELETE then INSERT semantic might be the only way.\n> > Pretty havy restriction, making the entire REPLACE INTO\n> > somewhat useless IMHO.\n> \n> The only issue I have with your conclusion about DB schema is that\n> REPLACE is not part of standard SQL, so we do not need be too\n> concerned. Just give them a REPLACE trigger and be done with it. If\n> that isn't good enough, in the FAQ, say that the standard way is\n> insert or update.\nI am not sure I like this: it is possible that someone's security is based\non triggers, and adding replace as a trigger will let them get around\nit...Possibly this could be controlled by serverwide option\n'enable_replace_into' or something like that for people with such setup..?\n\n-alex \n\n",
"msg_date": "Mon, 11 Jun 2001 19:11:58 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "mlw wrote:\n> \n> Dale Johnson wrote:\n> \n> > \"Jan Wieck\" <JanWieck@Yahoo.com> wrote in message\n> > news:200106061506.f56F6dV01843@jupiter.us.greatbridge.com...\n> > > mlw wrote:\n> > > > [...]\n> > > > REPLACE into table set xx=yy, ww = zz where ID = fubar;\n> > > >\n> > > > A MUCH better solution!\n> > >\n> > > Please solve the trigger problem at least theoretical before\n> > > claiming that mySQL is that MUCH better. And please don't\n> > > solve it by ripping out trigger support :-)\n\nI was recently told about a similar feature coming to Oracle (or perhaps\nalready in v9.x)\n\nHas anyone any knowledge of it ?\n\n--------------------\nHannu\n",
"msg_date": "Tue, 12 Jun 2001 22:56:28 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "Alex Pilosov wrote:\n> \n> On Mon, 11 Jun 2001, mlw wrote:\n> \n> > > Adding another trigger event type will break every existing\n> > > DB schema that relies on custom triggers to ensure logical\n> > > data integrity. Thus it is unacceptable as solution to\n> > > support a non-standard feature - period.\n> > >\n> > > The question \"does this row exist\" can only be answered by\n> > > looking at the primary key. Now BEFORE triggers are allowed\n> > > to alter the key attributes, so the final primary key isn't\n> > > known before they are executed.\n> > >\n> > > Thus the DELETE then INSERT semantic might be the only way.\n> > > Pretty havy restriction, making the entire REPLACE INTO\n> > > somewhat useless IMHO.\n> >\n> > The only issue I have with your conclusion about DB schema is that\n> > REPLACE is not part of standard SQL, so we do not need be too\n> > concerned. Just give them a REPLACE trigger and be done with it. If\n> > that isn't good enough, in the FAQ, say that the standard way is\n> > insert or update.\n> I am not sure I like this: it is possible that someone's security is based\n> on triggers, and adding replace as a trigger will let them get around\n> it...\n\nBTW, does current LOAD INTO trigger INSERT triggers ?\n\n>Possibly this could be controlled by serverwide option\n> 'enable_replace_into' or something like that for people with such setup..?\n> \n> -alex\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n",
"msg_date": "Tue, 12 Jun 2001 23:34:29 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "Hannu Krosing wrote:\n>\n> BTW, does current LOAD INTO trigger INSERT triggers ?\n\n If you mean COPY, yes.\n\n BTW2, we still allow TRUNCATE on tables that have DELETE\n triggers. Since it's a way to violate constraints it should\n IMHO not be allowed, or at least restricted to DBA.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 13 Jun 2001 09:12:05 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "> Hannu Krosing wrote:\n> >\n> > BTW, does current LOAD INTO trigger INSERT triggers ?\n> \n> If you mean COPY, yes.\n> \n> BTW2, we still allow TRUNCATE on tables that have DELETE\n> triggers. Since it's a way to violate constraints it should\n> IMHO not be allowed, or at least restricted to DBA.\n\nYou want a TODO item added?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 13 Jun 2001 11:07:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
},
{
"msg_contents": "\n\"Jan Wieck\" <JanWieck@Yahoo.com> wrote in message\nnews:200106112044.f5BKiwa03120@jupiter.us.greatbridge.com...\n> mlw wrote:\n> > Dale Johnson wrote:\n> >\n> > > \"Jan Wieck\" <JanWieck@Yahoo.com> wrote in message\n> > > news:200106061506.f56F6dV01843@jupiter.us.greatbridge.com...\n> > > > mlw wrote:\n> > > > > [...]\n> > > > > REPLACE into table set xx=yy, ww = zz where ID = fubar;\n> > > > >\n> > > > > A MUCH better solution!\n> > > >\n> > > > Please solve the trigger problem at least theoretical before\n> > > > claiming that mySQL is that MUCH better. And please don't\n> > > > solve it by ripping out trigger support :-)\n> > > >\n> > > for INSERT OR REPLACE into table ...\n> > > if the record was not there, fire the insert trigger\n> > > else\n> > > delete the row (fire delete trigger)\n> > > insert the new row (fire the insert trigger)\n> > > fi\n> > >\n> > > semantically no other way, I think\n> >\n> > I'm not sure I agree. There are explicit triggers for update, insert,\nand\n> > delete, therefor why not also have a trigger for replace? It is one more\n> > case. Rather than try to figure out how to map replace into two distinct\n> > behaviors of insert or update based on some conditional logic, why not\njust\n> > have a replace trigger?\n>\n> Adding another trigger event type will break every existing\n> DB schema that relies on custom triggers to ensure logical\n> data integrity. Thus it is unacceptable as solution to\n> support a non-standard feature - period.\n>\n> The question \"does this row exist\" can only be answered by\n> looking at the primary key. Now BEFORE triggers are allowed\n> to alter the key attributes, so the final primary key isn't\n> known before they are executed.\n>\n> Thus the DELETE then INSERT semantic might be the only way.\n> Pretty havy restriction, making the entire REPLACE INTO\n> somewhat useless IMHO.\n>\n\nI think that application people would probably prefer the delete trigger,\ninsert trigger. 
It makes more sense, because I would interpret replace\nas \"get rid of the old if it exists\" and \"put in a new item\". If people\nwanted to make sure code is run on delete, they would have to put it into both a\ndelete trigger and a replace trigger, which would be two places for them.\n\nFrankly, I'm not sure why this is being seen as a weak approach.\nMy intended semantic was atomic delete (ignoring error) and insert.\n\nDale.\n\n\n\n",
"msg_date": "Fri, 15 Jun 2001 09:58:18 GMT",
"msg_from": "\"Dale Johnson\" <djohnson@mi.ab.ca>",
"msg_from_op": true,
"msg_subject": "Re: Re: REPLACE INTO table a la mySQL"
}
] |
[
{
"msg_contents": "PostgreSQL is going multi-lingual! The infrastructure for message\ninternationalization is now in place, and if you want to see your favorite\nlanguage supported in the next release, this would be a good time to\ngather up and volunteer for translation.\n\nAt this time, most of psql and much of the backend server have been\nprepared for message translation. Over the coming weeks I hope to finish\nboth of these and prepare libpq, so that we have a presentable end-to-end\nsolution available. The rest will follow as time permits.\n\nIf you think you want to work on message translation, see this URL for an\nintroduction:\n\nhttp://www.de.postgresql.org/devel-corner/docs/postgres/nls.html\n\nAt this time there exists a fairly well progressed German translation for\npsql and placeholder-type German translation for the backend, which can\nboth serve as examples.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 4 Jun 2001 23:22:29 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "FYI: status of native language support"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> language supported in the next release, this would be a good time to\n> gather up and volunteer for translation.\n\nI can help with Italian translation if no one else is volunteering (or\ncoordinating a team)\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Tue, 05 Jun 2001 15:38:10 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: FYI: status of native language support"
},
{
"msg_contents": "I can help translating it to Spanish, just tell me :-)\n\nDiego Naya\nOSEDA\nSistemas\ndiegonaya@oseda.com.ar \n----- Original Message ----- \nFrom: \"Alessio Bragadini\" <alessio@albourne.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Tuesday, June 05, 2001 9:38 AM\nSubject: [HACKERS] Re: FYI: status of native language support\n\n\n> Peter Eisentraut wrote:\n> \n> > language supported in the next release, this would be a good time to\n> > gather up and volunteer for translation.\n> \n> I can help with Italian translation if no one else is volunteering (or\n> coordinating a team)\n> \n> -- \n> Alessio F. Bragadini alessio@albourne.com\n> APL Financial Services http://village.albourne.com\n> Nicosia, Cyprus phone: +357-2-755750\n> \n> \"It is more complicated than you think\"\n> -- The Eighth Networking Truth from RFC 1925\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Tue, 5 Jun 2001 10:37:34 -0300",
"msg_from": "\"Diego Naya\" <mpddanl@ciudad.com.ar>",
"msg_from_op": false,
"msg_subject": "Re: Re: FYI: status of native language support"
}
] |
[
{
"msg_contents": "A question from Joe Mitchell led me to investigate some access-checking\nbehavior that seems kinda broken. Currently, when aclinsert3() creates\na new entry in an ACL list, it effectively initializes the entry with\nthe current PUBLIC access rights, rather than with zero rights. Thus:\n\nregression=# create user u1;\nCREATE USER\nregression=# create table t1 (f1 int);\nCREATE\nregression=# grant select on t1 to public;\nCHANGE\nregression=# grant update on t1 to u1;\nCHANGE\nregression=# \\z t1\n Access permissions for database \"regression\"\n Relation | Access permissions\n----------+-----------------------------------\n t1 | {\"=r\",\"postgres=arwdRxt\",\"u1=rw\"}\n(1 row)\n\nNotice it says \"u1=rw\", not just \"u1=w\" which is what one might expect.\n\nThe reason why it does this, apparently, is that when aclcheck() finds a\nmatch on userid, it stops with that ACL entry and doesn't look at any\ngroup or world entries. So, if I now do\n\nregression=# revoke select on t1 from u1;\nCHANGE\nregression=# \\z t1\nAccess permissions for database \"regression\"\n Relation | Access permissions\n----------+----------------------------------\n t1 | {\"=r\",\"postgres=arwdRxt\",\"u1=w\"}\n(1 row)\n\nI now have a situation where u1 can't read t1, even though the rest of\nthe world can:\n\nregression=> select * from t1;\nERROR: t1: Permission denied.\n\nThis is inconsistent because the same does not hold true for privileges\ngranted via groups. aclcheck will succeed if *any* group you are in\nhas the desired privilege, *or* if PUBLIC does. 
Thus:\n\nregression=# create group g1 with user u1;\nCREATE GROUP\nregression=# create table t2 (f1 int);\nCREATE\nregression=# grant select on t2 to public;\nCHANGE\nregression=# grant update on t2 to group g1;\nCHANGE\nregression=# \\z t2\n Access permissions for database \"regression\"\n Relation | Access permissions\n----------+-----------------------------------------\n t2 | {\"=r\",\"postgres=arwdRxt\",\"group g1=rw\"}\n(1 row)\n\n(At this point u1 is able to read t2)\n\nregression=# revoke select on t2 from group g1;\nCHANGE\nregression=# \\z t2\n Access permissions for database \"regression\"\n Relation | Access permissions\n----------+----------------------------------------\n t2 | {\"=r\",\"postgres=arwdRxt\",\"group g1=w\"}\n(1 row)\n\n(At this point u1 is still able to read t2)\n\nAnother problem is that if you do\n\tgrant select to public;\n\tgrant update to u1;\n\trevoke select from public;\nyou will find that u1 still has select rights, which is undoubtedly\nnot what you wanted.\n\nI believe that a more consistent approach would be to say that a user's\nprivileges are the union of what is granted directly to himself, to any\ngroup he is currently a member of, and to PUBLIC. So if aclcheck\ndoesn't see the desired privilege granted in the user entry (if found),\nit has to continue on looking at groups and then world, not just fail.\nAnd aclinsert3 should initialize new entries to zero access rights, not\ncopy PUBLIC.\n\nThe only downside of this is that we'd lose the \"feature\" of being able\nto revoke from a particular user a right that is available via PUBLIC to\neveryone else. I'm not convinced that that behavior has any real use,\nand certainly keeping it doesn't seem important compared to making these\nother behaviors more reasonable. That \"feature\" doesn't work reliably\nanyway, since ACL entries are dropped as soon as they go to zero rights.\n\nComments, objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Jun 2001 18:21:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Curious (mis)behavior of access rights"
},
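Tom's proposal, that a user's rights should be the union of what is granted to the user directly, to any of the user's groups, and to PUBLIC, can be sketched as a toy check. The ACL entry format and all names here are invented for illustration; this is not PostgreSQL's actual aclcheck() code:

```python
# Toy union-of-grants check: a privilege is held if it appears in the
# user's own entry, in an entry for any group the user belongs to, or
# in the PUBLIC entry -- never stopping at the first matching entry.

def acl_check(acl, user, groups, priv):
    granted = set()
    for kind, name, privs in acl:
        if (kind == "user" and name == user) \
           or (kind == "group" and name in groups) \
           or kind == "public":
            granted |= set(privs)  # take the union of all applicable grants
    return priv in granted

# The u1/t1 example from the message: after GRANT SELECT TO PUBLIC,
# GRANT UPDATE TO u1, REVOKE SELECT FROM u1, the ACL is =r plus u1=w.
acl = [("public", None, "r"), ("user", "u1", "w")]
print(acl_check(acl, "u1", [], "r"))  # True: read still comes via PUBLIC
```

Under this rule u1 keeps the PUBLIC read right even though the u1-specific entry lacks it, which is exactly the behavior change proposed: the per-user "revoke below PUBLIC" feature disappears in exchange for consistency with how group grants already work.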
{
"msg_contents": "----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n>\n> The only downside of this is that we'd lose the \"feature\" of being able\n> to revoke from a particular user a right that is available via PUBLIC to\n> everyone else.\n\nCould we add additional privileges that explicitly restrict a user?\nPerhaps negative permissions like -x -r etc... This would override group\nand public permissions and could be set via revoke. What does the SQL Spec\nsay the behaviour should be when group and user permissions are in conflict?\n\n\n",
"msg_date": "Mon, 4 Jun 2001 18:16:21 -0500",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: Curious (mis)behavior of access rights"
},
{
"msg_contents": "\"Matthew T. O'Connor\" <matthew@zeut.net> writes:\n>> The only downside of this is that we'd lose the \"feature\" of being able\n>> to revoke from a particular user a right that is available via PUBLIC to\n>> everyone else.\n\n> Could we add additional privileges that explicitly restrict a user?\n> Perhaps negative permissions like -x -r etc... This would override group\n> and public permissions and could be set via revoke. What does the SQL Spec\n> say the behaviour should be when group and user permissions are in conflict?\n\nAFAICS the SQL spec's notion of REVOKE is the same as ours: it removes\na previously granted privilege bit. There is no concept of negative\nprivilege, and I can't say that I want to add one ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Jun 2001 19:58:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Curious (mis)behavior of access rights "
}
] |
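The union semantics proposed in the thread above can be contrasted with the old first-match behavior in a small model. This is a hypothetical Python sketch, not PostgreSQL's actual aclchk.c code: `grant`, `revoke`, and the two check functions are invented names, and the entry-copying in `grant` mimics the aclinsert3 behavior described in the first message.

```python
# Toy ACL model: acl maps a principal ("public", "user:u1", "group:g1")
# to a set of privilege letters such as "r" (select) and "w" (update).

def grant(acl, who, rights, copy_public=True):
    if who not in acl:
        # old aclinsert3 behavior: a new entry starts as a copy of PUBLIC;
        # copy_public=False models the proposed zero-rights initialization
        acl[who] = set(acl.get("public", set())) if copy_public else set()
    acl[who] |= set(rights)

def revoke(acl, who, rights):
    acl[who] = acl.get(who, set()) - set(rights)

def check_first_match(acl, user, groups, right):
    """Old semantics (simplified): the most specific entry present
    decides alone, even if it lacks the privilege."""
    for who in ["user:" + user] + ["group:" + g for g in groups] + ["public"]:
        if who in acl:
            return right in acl[who]
    return False

def check_union(acl, user, groups, right):
    """Proposed semantics: union of user, group, and PUBLIC grants."""
    effective = set()
    for who in ["user:" + user] + ["group:" + g for g in groups] + ["public"]:
        effective |= acl.get(who, set())
    return right in effective
```

Replaying the grant-select-to-public / grant-update-to-u1 / revoke-select-from-public sequence shows the surprise: under the old rules u1's entry silently copied PUBLIC's select bit and keeps it after the revoke, while zero-initialized entries plus the union check give the intuitive answer.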
[
{
"msg_contents": "Hi guys,\n\nIt's relatively straightforward to allow check constraints to be inherited -\nbut is it really possible to ever do the same with primary, unique or even\nforeign constraints?\n\nie. Say a table has a primary key and I inherit from this table. Since the\nprimary key is an index on the parent table, I could just create another\nindex on the child table, on the same column.\n\nHowever - because we are dealing with two separate indices, it should still\nbe possible to insert duplicate values into the parent table and the child\ntable shouldn't it? This means that when a query is run over the parent\ntable that includes results from the child table then you will get duplicate\nresults in a supposedly primary index.\n\nSimilar arguments seem to apply to unique and foreign constraints. If you\ncould use aggregate functions in check constraints - you'd have another\nproblem. And if asserts were ever implemented - same thing...\n\nAm I misunderstanding how the mechanism works, or is this a big, not easily\nsolved, problem?\n\nChris\n\n",
"msg_date": "Tue, 5 Jun 2001 09:42:38 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Question about inheritance"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Am I misunderstanding how the mechanism works, or is this a big, not easily\n> solved, problem?\n\nThe latter. Check the list archives for previous debates about this.\nIt's not real clear whether an inherited primary key should be expected\nto be unique across the whole inheritance tree, or only unique per-table\n(IIRC, plausible examples have been advanced for each case). If we want\nuniqueness across multiple tables, it'll take considerable work to\ncreate an index mechanism that'd enforce it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Jun 2001 22:07:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about inheritance "
},
{
"msg_contents": "On Tue, 5 Jun 2001, Christopher Kings-Lynne wrote:\n\n> Hi guys,\n> \n> It's relatively straightforward to allow check constraints to be inherited -\n> but is it really possible to ever do the same with primary, unique or even\n> foreign constraints?\n> \n> ie. Say a table has a primary key and I inherit from this table. Since the\n> primary key is an index on the parent table, I could just create another\n> index on the child table, on the same column.\n> \n> However - because we are dealing with two separate indices, it should still\n> be possible to insert duplicate values into the parent table and the child\n> table shouldn't it? This means that when a query is run over the parent\n> table that includes results from the child table then you will get duplicate\n> results in a supposedly primary index.\n> \n> Similar arguments seem to apply to unique and foreign constraints. If you\n> could use aggregate functions in check constraints - you'd have another\n> problem. And if asserts were ever implemented - same thing...\n> \n> Am I misunderstanding how the mechanism works, or is this a big, not easily\n> solved, problem?\n\nIt's a big deal. Actually check constraints have a similar problem if you\nallow inherited constraints to be dropped. \"Why does 'select * from\nbase;' give me rows where value<10 since there's a check value>=10 \non the table?\"\n\nAs Tom said, it's still questionable which semantics for the unique\nconstraint is the more meaningful. If we ever want to allow foreign key\nconstraints on inheritance trees, we need *some* way to guarantee\nuniqueness across the tree even if that isn't through the unique\nconstraint.\n\n",
"msg_date": "Mon, 4 Jun 2001 20:46:34 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about inheritance"
}
] |
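The failure mode Christopher describes, and the tree-wide check the replies allude to, can be shown with a toy model. This is purely illustrative Python, not PostgreSQL internals; `Table`, `insert_tree_unique`, and the other names are invented for the sketch.

```python
# Per-table unique indexes vs. uniqueness across an inheritance tree.

class Table:
    def __init__(self):
        self.rows = []
        self.index = set()          # this table's own unique index

    def insert(self, key):
        if key in self.index:       # per-table check only
            return False
        self.index.add(key)
        self.rows.append(key)
        return True

def select_from_base(parent, children):
    # "SELECT * FROM base" also returns rows from child tables
    rows = list(parent.rows)
    for child in children:
        rows.extend(child.rows)
    return rows

def insert_tree_unique(table, tree, key):
    """Hierarchy-wide enforcement: consult every index in the tree."""
    if any(key in t.index for t in tree):
        return False
    return table.insert(key)
```

With separate indexes, inserting key 1 into both parent and child succeeds, and a query over the parent then returns the "primary" key twice; checking every index in the hierarchy (or, as the follow-up message suggests, one index spanning the whole hierarchy) closes the hole.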
[
{
"msg_contents": "On hub, in /home/projects/pgsql/ftp/pub/dev I see\n\n*.tar.gz.md5 postgresql-opt-snapshot.tar.gz\ndoc postgresql-opt-snapshot.tar.gz.md5\npostgresql-base-snapshot.tar.gz postgresql-snapshot.tar.gz\npostgresql-base-snapshot.tar.gz.md5 postgresql-snapshot.tar.gz.md5\npostgresql-docs-snapshot.tar.gz postgresql-test-snapshot.tar.gz\npostgresql-docs-snapshot.tar.gz.md5 postgresql-test-snapshot.tar.gz.md5\n\nwhich agrees with the view at http://www.ca.postgresql.org/ftpsite/dev/.\n\nHowever, it seems that the mirrors have a lot more stuff:\nftp://postgresql.readysetnet.com/pub/postgresql/dev/ shows dozens\nof files back to 7.1beta6, and so do the other several I checked in\na random sample. Is the update mechanism failing to cause old files\nto be removed from the mirrors?\n\nAlso, some of the mirrors claimed to be up-to-date by \nhttp://www.postgresql.org/index.html aren't. For instance,\ndownload.sourceforge.net doesn't have 7.1.1 nor 7.1.2.\nI thought that the up-to-date check was automated?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Jun 2001 21:56:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Mirrors not tracking main ftp site?"
},
{
"msg_contents": "On Mon, 4 Jun 2001, Tom Lane wrote:\n\n> However, it seems that the mirrors have a lot more stuff:\n> ftp://postgresql.readysetnet.com/pub/postgresql/dev/ shows dozens\n> of files back to 7.1beta6, and so do the other several I checked in\n> a random sample. Is the update mechanism failing to cause old files\n> to be removed from the mirrors?\n\nI'm doing:\n\n/usr/local/bin/rsync -avz --delete hub.org::postgresql-ftp\n/mnt/ftpd/pub/postgresql\n\nevery 4 hours and still have these files too. Not really sure where the\nfiles are coming from, but when I deleted a file from /dev it didn't\ncome back on a sync.\n\nI will look into the rsync info a bit more, but...\n\n- Brandon\n\n----------------------------------------------------------------------------\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Mon, 4 Jun 2001 22:16:34 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: Mirrors not tracking main ftp site?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> However, it seems that the mirrors have a lot more stuff:\n> ftp://postgresql.readysetnet.com/pub/postgresql/dev/ shows dozens\n> of files back to 7.1beta6, and so do the other several I checked in\n> a random sample. Is the update mechanism failing to cause old files\n> to be removed from the mirrors?\n\nFound the problem. Since rsync gets a perms denied from .hidden, it\nrefuses to delete files.\n\nroot@seraph:/root# ./rsync-postgres-ftp\nreceiving file list ... opendir(.hidden): Permission denied\ndone\nIO error encountered - skipping file deletion\nwrote 105 bytes read 20762 bytes 2782.27 bytes/sec\ntotal size is 521221478 speedup is 24978.27\n\nWhen I changed the script to:\n\n#/usr/local/bin/rsync -avz --delete hub.org::postgresql-ftp\n/mnt/ftpd/pub/postgresql\n/usr/local/bin/rsync -avz --ignore-errors --delete\nhub.org::postgresql-ftp /mnt/ftpd/pub/postgresql\n\nIt worked. People need to either use the --ignore-errors or have the\n.hidden folder on the server removed.\n\n- - Brandon\n\n\n- ----------------------------------------------------------------------------\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n-----BEGIN PGP SIGNATURE-----\nVersion: PGPfreeware 5.0i for non-commercial use\nCharset: noconv\n\niQA/AwUBOxxDFPYgmKoG+YbuEQLeiACeIhRJQ0HTZQCJc+aqHzqSfTods7IAnjEO\nm9vtW2WRh3PMPXdlWeEBzTzY\n=u6ep\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Mon, 4 Jun 2001 22:24:51 -0400 (EDT)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: Mirrors not tracking main ftp site?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> which agrees with the view at http://www.ca.postgresql.org/ftpsite/dev/.\n\nThanks for the pointer to the ftp site's http url ;)\n\nNow that I was able to verify (after not getting in to ftp:// for days)\nthat there are no 7.1.2 RPMs, I would like to inquire if there are any \nplans to make RPMs.\n\n----------------\nHannu\n",
"msg_date": "Tue, 05 Jun 2001 08:27:57 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Mirrors not tracking main ftp site?"
},
{
"msg_contents": "bpalmer <bpalmer@crimelabs.net> writes:\n> Found the problem. Since rsync gets a perms denied from .hidden, it\n> refuses to delete files.\n\nAh-hah. And that directory seems to have appeared on 13 Apr, which is\nright about the time that the oldest un-deleted files on the mirrors are\nfrom:\n\n> ls -ld ~pgsql/ftp/pub/.hidden\nd--x--x--x 2 root pgsql 512 Apr 13 14:58 /home/projects/pgsql/ftp/pub/.hidden\n\nMarc, what is that thing? Can we get rid of it?\n\n> It worked. People need to either use the --ignore-errors or have the\n> .hidden folder on the server removed.\n\nI don't much care for the first alternative ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 00:29:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Mirrors not tracking main ftp site? "
},
{
"msg_contents": "On Mon, 4 Jun 2001, Tom Lane wrote:\n\n> On hub, in /home/projects/pgsql/ftp/pub/dev I see\n>\n> *.tar.gz.md5 postgresql-opt-snapshot.tar.gz\n> doc postgresql-opt-snapshot.tar.gz.md5\n> postgresql-base-snapshot.tar.gz postgresql-snapshot.tar.gz\n> postgresql-base-snapshot.tar.gz.md5 postgresql-snapshot.tar.gz.md5\n> postgresql-docs-snapshot.tar.gz postgresql-test-snapshot.tar.gz\n> postgresql-docs-snapshot.tar.gz.md5 postgresql-test-snapshot.tar.gz.md5\n>\n> which agrees with the view at http://www.ca.postgresql.org/ftpsite/dev/.\n>\n> However, it seems that the mirrors have a lot more stuff:\n> ftp://postgresql.readysetnet.com/pub/postgresql/dev/ shows dozens\n> of files back to 7.1beta6, and so do the other several I checked in\n> a random sample. Is the update mechanism failing to cause old files\n> to be removed from the mirrors?\n\nHere's the syntax we tell them to use:\n\n rsync -avz --delete hub.org::[remote]/ [destination directory]\n\nIf that's not what they're using I can't go into their cronjobs and\nfix it.\n\n>\n> Also, some of the mirrors claimed to be up-to-date by\n> http://www.postgresql.org/index.html aren't. 
Fr instance,\n> download.sourceforge.net doesn't have 7.1.1 nor 7.1.2.\n> I thought that the up-to-date check was automated?\n\nIt is and here's the directory from sourceforge:\n\n227 Entering Passive Mode (64,28,67,101,18,128).\n150 Opening ASCII mode data connection for file list\n-rw-r--r-- 1 root root 8117016 May 24 16:37 postgresql-7.1.2.tar.gz\n-rw-r--r-- 1 root root 65 May 24 16:38 postgresql-7.1.2.tar.gz.md5\n-rw-r--r-- 1 root root 3240364 May 24 16:38 postgresql-base-7.1.2.tar.gz\n-rw-r--r-- 1 root root 70 May 24 16:38 postgresql-base-7.1.2.tar.gz.md5\n-rw-r--r-- 1 root root 2072096 May 24 16:38 postgresql-docs-7.1.2.tar.gz\n-rw-r--r-- 1 root root 70 May 24 16:38 postgresql-docs-7.1.2.tar.gz.md5\n-rw-r--r-- 1 root root 1803742 May 24 16:38 postgresql-opt-7.1.2.tar.gz\n-rw-r--r-- 1 root root 69 May 24 16:38 postgresql-opt-7.1.2.tar.gz.md5\n-rw-r--r-- 1 root root 1002166 May 24 16:38 postgresql-test-7.1.2.tar.gz\n-rw-r--r-- 1 root root 70 May 24 16:38 postgresql-test-7.1.2.tar.gz.md5\n226-Transfer complete.\n226 Quotas off\n\nWhat is it you find missing about 7.1.2? What were you actually looking\nat?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 5 Jun 2001 06:16:35 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Mirrors not tracking main ftp site?"
},
{
"msg_contents": "\nokay, just removed the .hidden directory from the ftp server, which should\ncorrect that ... I had setup that .hidden directory to be excluded though,\nnot sure why it was bothering things :(\n\nOn Mon, 4 Jun 2001, bpalmer wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> > However, it seems that the mirrors have a lot more stuff:\n> > ftp://postgresql.readysetnet.com/pub/postgresql/dev/ shows dozens\n> > of files back to 7.1beta6, and so do the other several I checked in\n> > a random sample. Is the update mechanism failing to cause old files\n> > to be removed from the mirrors?\n>\n> Found the problem. Since rsync gets a perms denied from .hidden, it\n> refuses to delete files.\n>\n> root@seraph:/root# ./rsync-postgres-ftp\n> receiving file list ... opendir(.hidden): Permission denied\n> done\n> IO error encountered - skipping file deletion\n> wrote 105 bytes read 20762 bytes 2782.27 bytes/sec\n> total size is 521221478 speedup is 24978.27\n>\n> When I changed the script to:\n>\n> #/usr/local/bin/rsync -avz --delete hub.org::postgresql-ftp\n> /mnt/ftpd/pub/postgresql\n> /usr/local/bin/rsync -avz --ignore-errors --delete\n> hub.org::postgresql-ftp /mnt/ftpd/pub/postgresql\n>\n> It worked. People need to either use the --ignore-errors or have the\n> .hidden folder on the server removed.\n>\n> - - Brandon\n>\n>\n> - ----------------------------------------------------------------------------\n> b. 
palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: PGPfreeware 5.0i for non-commercial use\n> Charset: noconv\n>\n> iQA/AwUBOxxDFPYgmKoG+YbuEQLeiACeIhRJQ0HTZQCJc+aqHzqSfTods7IAnjEO\n> m9vtW2WRh3PMPXdlWeEBzTzY\n> =u6ep\n> -----END PGP SIGNATURE-----\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 5 Jun 2001 08:23:43 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [CORE] Re: Mirrors not tracking main ftp site?"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Mon, 4 Jun 2001, Tom Lane wrote:\n>> Also, some of the mirrors claimed to be up-to-date by\n>> http://www.postgresql.org/index.html aren't. Fr instance,\n>> download.sourceforge.net doesn't have 7.1.1 nor 7.1.2.\n\n> What is it you find missing about 7.1.2? What were you actually looking\n> at?\n\nI went to ftp://download.sourceforge.net/pub/mirrors/postgresql/\n(the link given by our homepage) and didn't see the v7.1.2 symlink,\nnor did the source subdirectory have a v7.1.2 subdirectory.\n\nAs of this morning, though, both are there. I suppose they synced up\novernight.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 09:50:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Mirrors not tracking main ftp site? "
},
{
"msg_contents": "On Tue, 5 Jun 2001, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > On Mon, 4 Jun 2001, Tom Lane wrote:\n> >> Also, some of the mirrors claimed to be up-to-date by\n> >> http://www.postgresql.org/index.html aren't. Fr instance,\n> >> download.sourceforge.net doesn't have 7.1.1 nor 7.1.2.\n>\n> > What is it you find missing about 7.1.2? What were you actually looking\n> > at?\n>\n> I went to ftp://download.sourceforge.net/pub/mirrors/postgresql/\n> (the link given by our homepage) and didn't see the v7.1.2 symlink,\n> nor did the source subdirectory have a v7.1.2 subdirectory.\n>\n> As of this morning, though, both are there. I suppose they synced up\n> overnight.\n\nDon't know what could have happened to it, I'm fairly certain I downloaded\nit from them less than a week ago when I did some upgrading here.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 5 Jun 2001 09:55:49 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Mirrors not tracking main ftp site? "
}
] |
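The rsync behavior diagnosed in this thread — after an I/O error while reading the source listing, deletion is skipped so that unreadable files are not mistaken for deleted ones — can be modeled in a few lines. This is an illustrative sketch of that safety rule, not rsync's actual code; the function and parameter names are invented.

```python
def sync(source_files, listing_io_error, dest_files, ignore_errors=False):
    """Mirror source to dest: copy everything listed, then delete extras
    only if the source listing was read cleanly (rsync's
    'IO error encountered - skipping file deletion' rule)."""
    dest = set(dest_files) | set(source_files)      # transfer phase
    if listing_io_error and not ignore_errors:
        return dest                                 # stale files remain
    return dest & set(source_files)                 # --delete phase
```

With the unreadable .hidden directory in place, every sync ended in the skip-deletion branch, which is why months-old beta tarballs survived on the mirrors; either --ignore-errors or removing the directory restores the deletion pass.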
[
{
"msg_contents": "\n\n\n>It's relatively straightforward to allow check constraints to be inherited -\n>but is it really possible to ever do the same with primary, unique or even\n>foreign constraints?\n\nYou would either have to check each index in the hierarchy or else have\na single index across the whole hierarchy and check that. Obviously the\nlatter would be generally more useful.\n\nAs with all things inheritance, it is usually the right thing, and a good\ndefault that things be inherited. So ideally, indexes should work across\nwhole hierarchies as well as primary, unique and foreign constraints.\nIt could be argued that not inheriting is of very limited usefulness.\n\n\n\n",
"msg_date": "Tue, 5 Jun 2001 13:08:58 +1000",
"msg_from": "chris.bitmead@health.gov.au",
"msg_from_op": true,
"msg_subject": "Re: Question about inheritance"
}
] |
[
{
"msg_contents": "Why does pgindent sometimes insert whitespace into the return type\npart of a function definition? Here's an example from the last\npgindent run:\n\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/optimizer/plan/createplan.c,v\nretrieving revision 1.103\nretrieving revision 1.104\ndiff -c -r1.103 -r1.104\n*** pgsql/src/backend/optimizer/plan/createplan.c 2001/01/24 19:42:58 1.103\n--- pgsql/src/backend/optimizer/plan/createplan.c 2001/03/22 03:59:36 1.104\n***************\n*** 1493,1499 ****\n return make_sort(sort_tlist, lefttree, numsortkeys);\n }\n \n! Material *\n make_material(List *tlist, Plan *lefttree)\n {\n Material *node = makeNode(Material);\n--- 1495,1501 ----\n return make_sort(sort_tlist, lefttree, numsortkeys);\n }\n \n! Material *\n make_material(List *tlist, Plan *lefttree)\n {\n Material *node = makeNode(Material);\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 01:18:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Another pgindent gripe"
},
{
"msg_contents": "\nOK, fixed. Not sure why indent likes to add the tab, but I had some\ncode to change that:\n\n# move trailing * in function return type\n sed 's;^\\([A-Za-z_][^ ]*\\)[ ][ ]*\\*$;\\1 *;' |\n ^^^^\nTurns out the marked area was missing a space, and was only tab. Making\nit tab and space in the indicated brackets fixed the problem. My guess\nis that the failure was only happening when the function return type was\nexactly eight characters, like \"Material\".\n\n\n> Why does pgindent sometimes insert whitespace into the return type\n> part of a function definition? Here's an example from the last\n> pgindent run:\n> \n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/optimizer/plan/createplan.c,v\n> retrieving revision 1.103\n> retrieving revision 1.104\n> diff -c -r1.103 -r1.104\n> *** pgsql/src/backend/optimizer/plan/createplan.c 2001/01/24 19:42:58 1.103\n> --- pgsql/src/backend/optimizer/plan/createplan.c 2001/03/22 03:59:36 1.104\n> ***************\n> *** 1493,1499 ****\n> return make_sort(sort_tlist, lefttree, numsortkeys);\n> }\n> \n> ! Material *\n> make_material(List *tlist, Plan *lefttree)\n> {\n> Material *node = makeNode(Material);\n> --- 1495,1501 ----\n> return make_sort(sort_tlist, lefttree, numsortkeys);\n> }\n> \n> ! Material *\n> make_material(List *tlist, Plan *lefttree)\n> {\n> Material *node = makeNode(Material);\n> \n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 6 Jun 2001 00:51:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another pgindent gripe"
}
] |
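The sed rule discussed above can be rendered in Python to make the fix visible: the character class before the `*` originally contained only a tab, so a return type followed by a lone tab (exactly what indent emits for an eight-character name like `Material`) slipped through. The sketch below writes the whitespace class explicitly as `[ \t]`, since the literal tab inside the sed brackets is invisible in the email; it is an illustration, not pgindent's actual code.

```python
import re

# Normalize "Type<whitespace>*" at the end of a declaration line
# to "Type *", the equivalent of the corrected sed substitution.
pattern = re.compile(r'^([A-Za-z_]\S*)[ \t]+\*$')

def fix_return_type(line):
    return pattern.sub(r'\1 *', line)
```

Lines that do not end in whitespace followed by a bare `*` are left untouched, so only function-return-type lines are rewritten.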
[
{
"msg_contents": " > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n > > Am I misunderstanding how the mechanism works, or is this a big, not\n easily\n > > solved, problem?\n >\n > The latter. Check the list archives for previous debates about this.\n > It's not real clear whether an inherited primary key should be expected\n > to be unique across the whole inheritance tree, or only unique per-table\n > (IIRC, plausible examples have been advanced for each case). If we want\n > uniqueness across multiple tables, it'll take considerable work to\n > create an index mechanism that'd enforce it.\n >\n IMHO the current behaviour of PostgreSQL with inherited PK, FK, UNIQUE is\nsimply\n a bug, not only from an object-oriented but even from an object-relational point of\nview. Now\n I can violate the parent PK by inserting a duplicate key in the child!\n\n Inherited tables should honour all constraints from the parent. If I change\n some constraint (seems only FK, but not PK or UNIQUE) I should be able to\ndo\n it in a more restrictive manner. For example, two base tables are connected via\n FK. I can change such an FK in children from base1->base2 to child1->child2 (or\n child3) but not to child1->not_inherited_from_base2. CHECK, DEFAULT, NOT\n NULL are more free to change, aren't they?\n\n IMHO the last message in doc/TODO.details/inheritance from Oliver Elphick is a\n good direction for implementation, with an exception for more restrictive child FK\n constraints (p.3 of message).\n\n As for me, I was pushed to roll back to a scheme with no inheritance at all in\n my project for now. So I'm very interested in an implementation of proper\n inheritance and I wanted to ask a similar question in one of the lists in the\nnear\n future.\n\n Regards,\n Dmitry\n\n\n\n",
"msg_date": "Tue, 5 Jun 2001 14:17:33 +0400",
"msg_from": "\"Dmitry G. Mastrukov\" <dmitry@taurussoft.org>",
"msg_from_op": true,
"msg_subject": "Re: Question about inheritance "
},
{
"msg_contents": "\nI have added this thread to TODO.detail/inheritance.\n\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > Am I misunderstanding how the mechanism works, or is this a big, not\n> easily\n> > > solved, problem?\n> >\n> > The latter. Check the list archives for previous debates about this.\n> > It's not real clear whether an inherited primary key should be expected\n> > to be unique across the whole inheritance tree, or only unique per-table\n> > (IIRC, plausible examples have been advanced for each case). If we want\n> > uniqueness across multiple tables, it'll take considerable work to\n> > create an index mechanism that'd enforce it.\n> >\n> IMHO current behaviour of PostgreSQL with inherited PK, FK, UNIQUE is\n> simply\n> bug not only from object-oriented but even object-related point of view.\n> Now\n> I can violate parent PK by inserting duplicate key in child!\n> \n> Inherited tables should honours all constraints from parent. If I change\n> some constraint (seems only FK, but not PK or UNIQUE) I should be able to\n> do\n> it in more restrictive manner. For example, two base table is connected via\n> FK. I can change such FK in childs from base1->base2 to child1->child2 (or\n> child3) but not to child1->not_inherited_from_base2. CHECK, DEFAULT, NOT\n> NULL are more free to changes, isn't it?\n> \n> IMHO last message in doc/TODO.details/inheritance from Oliver Elphick is a\n> good direction for implementing with exception on more rectrictive child FK\n> constraint (p.3 of message).\n> \n> As for me, I was pushed to rollback to scheme with no inheritance at all in\n> my project for now. 
So I'm very interesting in implementing of right\n> inheritance and I wanted to ask similar question in one of the lists in\n> near\n> future.\n> \n> Regards,\n> Dmitry\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 9 Jun 2001 23:48:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about inheritance"
}
] |
[
{
"msg_contents": "Greetings,\n\nI need to implement a full write audit trail (every write access needs to be \nlogged as a complete SQL statement with timestamp, user and host) in our \ndatabase.\n\nWhich is the most efficient way to do this on the server side in Postgres? I \ntried to find something relevant in the documentation, but I could not find \nanything.\n\nHorst\n",
"msg_date": "Tue, 5 Jun 2001 21:05:59 +1000",
"msg_from": "Horst Herb <hherb@malleenet.net.au>",
"msg_from_op": true,
"msg_subject": "full write log"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm not a postgres hacker, but I think that you must be the most\nappropriate people to give me a pointer about this question.... sorry for\nany possible mistake.\n\nNow I'm trying to use postgresql plus pgbench as a\nfirst test to stress the interconnection system in a parallel machine. I\nknow that tpc-b is just a toy (not too realistic... but before doing\nsomething more complex like tpc-c I want to see the postgres behavior).\n\nOk...well I'm running these benchmarks on different SMP machines (SGI with 4\nto 8 processors) and the results are odd. The best performance is achieved\nwith just one backend (1 client). When I try to run more clients the tps\nfalls quickly.\n\nIn all cases I see that when I increase the number of clients the total CPU\nusage falls. With one client I can see 100% usage (after a warm-up to get\nall data from disk - I'm running without fsync and with a large shared\nbuffer). My systems have a lot of memory, so this is normal. But when I try\nwith more clients, each CPU's usage falls, from 40% for 2 clients to 10% for 8\nclients. I assume the access to the shared memory through critical regions\n(lock-unlock) must be one reason... but this is too much. I've heard that\nlocks in postgres are at page level instead of tuple level. Am I wrong?\n\nAny suggestions about this?\n\nThanks in advance for your support.\n\n--vpuente\n\n",
"msg_date": "Tue, 5 Jun 2001 14:39:55 +0200",
"msg_from": "\"Valentin Puente\" <vpuente@atc.unican.es>",
"msg_from_op": true,
"msg_subject": "Multiprocessor performance"
},
{
"msg_contents": "\"Valentin Puente\" <vpuente@atc.unican.es> writes:\n> Ok...well I'm running this benchmarks in different SMP machines (SGI with 4\n> to 8 processors and the results are odd). The best performance is achieved\n> with just one backend (1 client). When I try to run more clients the tps\n> falls quickly.\n\nWhat scale factor (-s parameter for pgbench init) are you using for the\nbenchmark?\n\nAt scale factor 1, there's only one \"branch\" row, so all the\ntransactions have to update the same row and naturally will spend most\nof their time waiting to do so.\n\nYou want scale factor >> # of concurrent clients to avoid interlock\neffects.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 10:04:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multiprocessor performance "
},
{
"msg_contents": "Valentin Puente wrote:\n> Hi all,\n>\n> I'm not a postgres hacker, but I' think that you must be the most\n> appropriate person to give me a pointer about this question.... sorry for\n> any possible mistake.\n>\n> Now I'm trying to use postgresql plus the pgbench like a\n> first test to stress the interconnection system in a parallel machine. I\n> know that tpc-b is just a toy (no too much real... but before to do\n> something more complex like tpc-c y want to see the postgres behavior).\n>\n> Ok...well I'm running this benchmarks in different SMP machines (SGI with 4\n> to 8 processors and the results are odd). The best performance is achieved\n> with just one backend (1 client). When I try to run more clients the tps\n> falls quickly.\n>\n> In all cases I see that when I increase the number of clients the total CPU\n> usage falls. With one client I can see a 100% usage (after a warm-up to get\n> all data from disk - I'm running without fsync and with a large shared\n> buffer).My systems have a lot of memory then this is normal. But when I try\n> with more clients each CPU usage falls between 40% for 2 clients to 10% to 8\n> clients. I assume the access to the shared memory through critical regions\n> (lock-unlock) must be one reason... but this is too much. I've heard that\n> locks in postgress are at page level instead tuple level. I'm wrong?.\n>\n> Some suggestion about this?.\n\n What was the scaling factor on pgbench initialization? if you\n used the 1-default, your bottleneck is the single row in the\n branches table, which everyone wants to lock for update. Try\n\n pgbench -i -s <10 or higher> <dbname>\n\n to give it a kick.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 5 Jun 2001 11:37:23 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiprocessor performance"
}
] |
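The scale-factor advice in the replies can be illustrated with a toy simulation (a hypothetical model, not pgbench itself): each TPC-B-style transaction updates one random `branches` row and holds that row lock for the whole transaction, so the number of distinct branches hit bounds how many transactions can actually proceed in parallel.

```python
import random

def effective_concurrency(clients, scale, trials=2000, seed=1):
    """Average number of distinct branch rows touched when `clients`
    concurrent transactions each update one random branch out of
    `scale`; an upper bound on achievable parallelism."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += len({rng.randrange(scale) for _ in range(clients)})
    return total / trials
```

At scale factor 1 the bound is always 1 — eight clients queue behind a single row, which matches the falling per-CPU usage reported above — while with scale factor >> number of clients the transactions rarely collide.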
[
{
"msg_contents": "Tom,\n\nI have a problem with slow query execution (postgresql 7.1.2):\n\nThere are 2 tables - idx, msg_prt:\nbug=# \\dt\n List of relations\n Name | Type | Owner\n---------+-------+--------\n idx | table | megera\n msg_prt | table | megera\n(2 rows)\n\nbug=# \\d idx\n Table \"idx\"\n Attribute | Type | Modifier\n-----------+---------+----------\n tid | integer |\n lid | integer |\n did | integer |\nIndex: idxidx\n\nbug=# \\d msg_prt\n Table \"msg_prt\"\n Attribute | Type | Modifier\n-----------+---------+----------\n tid | integer |\nIndex: mprt_tid\n\n\nAlso there are 2 indexes - idxidx, mprt_tid\n\nbug=# \\d idxidx\n Index \"idxidx\"\n Attribute | Type\n-----------+---------\n lid | integer\n did | integer\n tid | integer\nunique btree\n\nbug=# \\d mprt_tid\n Index \"mprt_tid\"\n Attribute | Type\n-----------+---------\n tid | integer\nunique btree\n\n\nQuery is:\nselect msg_prt.tid as mid from msg_prt\nwhere exists\n (select idx.tid from idx where msg_prt.tid=idx.tid\n and idx.did=1 and idx.lid in (1207,59587) )\n\nPlan for this query looks very ineffective and query is very slow:\n\nselect msg_prt.tid as mid from msg_prt\n where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n and idx.did=1 and idx.lid in (1207,59587) )\nNOTICE: QUERY PLAN:\n\nSeq Scan on msg_prt (cost=0.00..119090807.13 rows=69505 width=4)\n SubPlan\n -> Index Scan using idxidx, idxidx on idx (cost=0.00..1713.40 rows=1 width=4)\n\ntotal: 6.80 sec; number: 1; for one: 6.796 sec;\n\nStatistics on tables:\nidx - 103651 rows\nmsg_prt - 69505 rows\n\nThere are only 16 rows in 'idx' table satisfied subselect condition.\nI did vacuum analyze.\n\nAdding another index 'create index tididx on idx (tid);' helps:\nselect msg_prt.tid as mid from msg_prt\n where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n and idx.did=1 and idx.lid in (1207,59587) )\nNOTICE: QUERY PLAN:\n\nSeq Scan on msg_prt (cost=0.00..1134474.94 rows=69505 width=4)\n SubPlan\n -> Index Scan 
using tididx on idx (cost=0.00..16.31 rows=1 width=4)\n\ntotal: 1.71 sec; number: 1; for one: 1.711 sec;\n\nbut the plan still looks inefficient.\n\nThe best plan I've got eliminating IN predicate:\nselect msg_prt.tid as mid from msg_prt\n where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n and idx.did=1 and idx.lid = 1207 and idx.lid=59587 )\nNOTICE: QUERY PLAN:\n\nSeq Scan on msg_prt (cost=0.00..167368.47 rows=69505 width=4)\n SubPlan\n -> Index Scan using idxidx on idx (cost=0.00..2.39 rows=1 width=4)\n\ntotal: 0.54 sec; number: 1; for one: 0.541 sec;\n\nUnfortunately I can't use this approach in the general case.\n\nIs this a known problem?\n\ndata+schema is available from\nhttp://www.sai.msu.su/~megera/postgres/data/bug.dump.gz\nIt's about 500Kb !\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 5 Jun 2001 17:07:47 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Strange query plan"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> The best plan I've got eliminating IN predicate:\n> select msg_prt.tid as mid from msg_prt\n> where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n> and idx.did=1 and idx.lid = 1207 and idx.lid=59587 )\n\nSurely that returns zero rows?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 10:21:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange query plan "
},
{
"msg_contents": "On Tue, 5 Jun 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > The best plan I've got eliminating IN predicate:\n> > select msg_prt.tid as mid from msg_prt\n> > where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n> > and idx.did=1 and idx.lid = 1207 and idx.lid=59587 )\n>\n> Surely that returns zero rows?\n>\n\nOops, sorry :-)\nshould be\nselect msg_prt.tid as mid from msg_prt\n where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n and idx.did=1 and ( idx.lid = 1207 or idx.lid=59587 ));\n\nbut this is not a big win.\n\nAnyway, what's about original query ?\n\n\n\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 5 Jun 2001 17:33:04 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Strange query plan "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> should be\n> select msg_prt.tid as mid from msg_prt\n> where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n> and idx.did=1 and ( idx.lid = 1207 or idx.lid=59587 ));\n> but this is not a big win.\n\nShouldn't be any win at all: the IN expression-list notation will get\ntranslated to exactly that form.\n\n> Anyway, what's about original query ?\n\nIN/EXISTS subqueries suck. This has been true for a long time and is\ngoing to be true for a while longer, unless someone else fixes it before\nI have a chance to look at it. See if you can't rewrite your query as\na plain join.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 10:42:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange query plan "
},
{
"msg_contents": "On Tue, 5 Jun 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > should be\n> > select msg_prt.tid as mid from msg_prt\n> > where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n> > and idx.did=1 and ( idx.lid = 1207 or idx.lid=59587 ));\n> > but this is not a big win.\n>\n> Shouldn't be any win at all: the IN expression-list notation will get\n> translated to exactly that form.\n\nSure\n\n>\n> > Anyway, what's about original query ?\n>\n> IN/EXISTS subqueries suck.  This has been true for a long time and is\n> going to be true for a while longer, unless someone else fixes it before\n> I have a chance to look at it.  See if you can't rewrite your query as\n> a plain join.\n>\n\nThat's why we've moved to GiST and feel fine :-) That query was used in\nour old project, which is now switching to GiST.\n\n\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 5 Jun 2001 18:02:00 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Strange query plan "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> select msg_prt.tid as mid from msg_prt\n> where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n> and idx.did=1 and idx.lid in (1207,59587) )\n> NOTICE: QUERY PLAN:\n\n> Seq Scan on msg_prt (cost=0.00..119090807.13 rows=69505 width=4)\n> SubPlan\n> -> Index Scan using idxidx, idxidx on idx (cost=0.00..1713.40 rows=1 width=4)\n\nActually, this example does reveal an unnecessary inefficiency: the\nplanner is only using the \"idx.lid in (1207,59587)\" clause for the\nindexscan, ignoring the fact that the did and tid clauses match the\nadditional columns of your three-column index. The attached patch\nshould improve matters.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/optimizer/path/indxpath.c.orig\tSun May 20 16:28:18 2001\n--- src/backend/optimizer/path/indxpath.c\tTue Jun 5 12:38:21 2001\n***************\n*** 397,403 ****\n \t\t\t\t\t\t\t\t\t\tclause, false);\n }\n \n! /*\n * Given an OR subclause that has previously been determined to match\n * the specified index, extract a list of specific opclauses that can be\n * used as indexquals.\n--- 397,403 ----\n \t\t\t\t\t\t\t\t\t\tclause, false);\n }\n \n! /*----------\n * Given an OR subclause that has previously been determined to match\n * the specified index, extract a list of specific opclauses that can be\n * used as indexquals.\n***************\n*** 406,415 ****\n * given opclause.\tHowever, if the OR subclause is an AND, we have to\n * scan it to find the opclause(s) that match the index. (There should\n * be at least one, if match_or_subclause_to_indexkey succeeded, but there\n! * could be more.)\tAlso, we apply expand_indexqual_conditions() to convert\n! 
* any special matching opclauses to indexable operators.\n *\n * The passed-in clause is not changed.\n */\n List *\n extract_or_indexqual_conditions(RelOptInfo *rel,\n--- 406,430 ----\n * given opclause.\tHowever, if the OR subclause is an AND, we have to\n * scan it to find the opclause(s) that match the index. (There should\n * be at least one, if match_or_subclause_to_indexkey succeeded, but there\n! * could be more.)\n! *\n! * Also, we can look at other restriction clauses of the rel to discover\n! * additional candidate indexquals: for example, consider\n! *\t\t\t... where (a = 11 or a = 12) and b = 42;\n! * If we are dealing with an index on (a,b) then we can include the clause\n! * b = 42 in the indexqual list generated for each of the OR subclauses.\n! * Essentially, we are making an index-specific transformation from CNF to\n! * DNF. (NOTE: when we do this, we end up with a slightly inefficient plan\n! * because create_indexscan_plan is not very bright about figuring out which\n! * restriction clauses are implied by the generated indexqual condition.\n! * Currently we'll end up rechecking both the OR clause and the transferred\n! * restriction clause as qpquals. FIXME someday.)\n! *\n! * Also, we apply expand_indexqual_conditions() to convert any special\n! * matching opclauses to indexable operators.\n *\n * The passed-in clause is not changed.\n+ *----------\n */\n List *\n extract_or_indexqual_conditions(RelOptInfo *rel,\n***************\n*** 417,470 ****\n \t\t\t\t\t\t\t\tExpr *orsubclause)\n {\n \tList\t *quals = NIL;\n \n! \tif (and_clause((Node *) orsubclause))\n \t{\n \n! \t\t/*\n! \t\t * Extract relevant sub-subclauses in indexkey order. This is\n! \t\t * just like group_clauses_by_indexkey() except that the input and\n! \t\t * output are lists of bare clauses, not of RestrictInfo nodes.\n! \t\t */\n! \t\tint\t\t *indexkeys = index->indexkeys;\n! \t\tOid\t\t *classes = index->classlist;\n \n! \t\tdo\n \t\t{\n! 
\t\t\tint\t\t\tcurIndxKey = indexkeys[0];\n! \t\t\tOid\t\t\tcurClass = classes[0];\n! \t\t\tList\t *clausegroup = NIL;\n! \t\t\tList\t *item;\n \n! \t\t\tforeach(item, orsubclause->args)\n \t\t\t{\n \t\t\t\tif (match_clause_to_indexkey(rel, index,\n \t\t\t\t\t\t\t\t\t\t\t curIndxKey, curClass,\n! \t\t\t\t\t\t\t\t\t\t\t lfirst(item), false))\n! \t\t\t\t\tclausegroup = lappend(clausegroup, lfirst(item));\n \t\t\t}\n \n! \t\t\t/*\n! \t\t\t * If no clauses match this key, we're done; we don't want to\n! \t\t\t * look at keys to its right.\n! \t\t\t */\n! \t\t\tif (clausegroup == NIL)\n! \t\t\t\tbreak;\n! \n! \t\t\tquals = nconc(quals, clausegroup);\n! \n! \t\t\tindexkeys++;\n! \t\t\tclasses++;\n! \t\t} while (!DoneMatchingIndexKeys(indexkeys, index));\n! \n! \t\tif (quals == NIL)\n! \t\t\telog(ERROR, \"extract_or_indexqual_conditions: no matching clause\");\n! \t}\n! \telse\n! \t{\n! \t\t/* we assume the caller passed a valid indexable qual */\n! \t\tquals = makeList1(orsubclause);\n! \t}\n \n \treturn expand_indexqual_conditions(quals);\n }\n--- 432,503 ----\n \t\t\t\t\t\t\t\tExpr *orsubclause)\n {\n \tList\t *quals = NIL;\n+ \tint\t\t *indexkeys = index->indexkeys;\n+ \tOid\t\t *classes = index->classlist;\n \n! \t/*\n! \t * Extract relevant indexclauses in indexkey order. This is essentially\n! \t * just like group_clauses_by_indexkey() except that the input and\n! \t * output are lists of bare clauses, not of RestrictInfo nodes.\n! \t */\n! \tdo\n \t{\n+ \t\tint\t\t\tcurIndxKey = indexkeys[0];\n+ \t\tOid\t\t\tcurClass = classes[0];\n+ \t\tList\t *clausegroup = NIL;\n+ \t\tList\t *item;\n \n! \t\tif (and_clause((Node *) orsubclause))\n! \t\t{\n! \t\t\tforeach(item, orsubclause->args)\n! \t\t\t{\n! \t\t\t\tExpr *subsubclause = (Expr *) lfirst(item);\n \n! \t\t\t\tif (match_clause_to_indexkey(rel, index,\n! \t\t\t\t\t\t\t\t\t\t\t curIndxKey, curClass,\n! \t\t\t\t\t\t\t\t\t\t\t subsubclause, false))\n! \t\t\t\t\tclausegroup = lappend(clausegroup, subsubclause);\n! 
\t\t\t}\n! \t\t}\n! \t\telse if (match_clause_to_indexkey(rel, index,\n! \t\t\t\t\t\t\t\t\t\t curIndxKey, curClass,\n! \t\t\t\t\t\t\t\t\t\t orsubclause, false))\n \t\t{\n! \t\t\tclausegroup = makeList1(orsubclause);\n! \t\t}\n \n! \t\t/*\n! \t\t * If we found no clauses for this indexkey in the OR subclause\n! \t\t * itself, try looking in the rel's top-level restriction list.\n! \t\t */\n! \t\tif (clausegroup == NIL)\n! \t\t{\n! \t\t\tforeach(item, rel->baserestrictinfo)\n \t\t\t{\n+ \t\t\t\tRestrictInfo *rinfo = (RestrictInfo *) lfirst(item);\n+ \n \t\t\t\tif (match_clause_to_indexkey(rel, index,\n \t\t\t\t\t\t\t\t\t\t\t curIndxKey, curClass,\n! \t\t\t\t\t\t\t\t\t\t\t rinfo->clause, false))\n! \t\t\t\t\tclausegroup = lappend(clausegroup, rinfo->clause);\n \t\t\t}\n+ \t\t}\n \n! \t\t/*\n! \t\t * If still no clauses match this key, we're done; we don't want to\n! \t\t * look at keys to its right.\n! \t\t */\n! \t\tif (clausegroup == NIL)\n! \t\t\tbreak;\n! \n! \t\tquals = nconc(quals, clausegroup);\n! \n! \t\tindexkeys++;\n! \t\tclasses++;\n! \t} while (!DoneMatchingIndexKeys(indexkeys, index));\n! \n! \tif (quals == NIL)\n! \t\telog(ERROR, \"extract_or_indexqual_conditions: no matching clause\");\n \n \treturn expand_indexqual_conditions(quals);\n }\n",
"msg_date": "Tue, 05 Jun 2001 12:46:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange query plan "
},
{
"msg_contents": "On Tue, 5 Jun 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > select msg_prt.tid as mid from msg_prt\n> > where exists (select idx.tid from idx where msg_prt.tid=idx.tid\n> > and idx.did=1 and idx.lid in (1207,59587) )\n> > NOTICE:  QUERY PLAN:\n>\n> > Seq Scan on msg_prt  (cost=0.00..119090807.13 rows=69505 width=4)\n> >   SubPlan\n> >     ->  Index Scan using idxidx, idxidx on idx  (cost=0.00..1713.40 rows=1 width=4)\n>\n> Actually, this example does reveal an unnecessary inefficiency: the\n> planner is only using the \"idx.lid in (1207,59587)\" clause for the\n> indexscan, ignoring the fact that the did and tid clauses match the\n> additional columns of your three-column index.  The attached patch\n> should improve matters.\n>\n> \t\t\tregards, tom lane\n\nCool. Looks better:\nselect msg_prt.tid as mid from msg_prt where exists (select idx.tid from idx where msg_prt.tid=idx.tid and idx.did=1 and idx.lid in ( 1207, 59587) )\nNOTICE:  QUERY PLAN:\n\nSeq Scan on msg_prt  (cost=0.00..333700.88 rows=69505 width=4)\n  SubPlan\n    ->  Index Scan using idxidx, idxidx on idx  (cost=0.00..4.79 rows=1 width=4)\n\ntotal: 3.15 sec; number: 1; for one: 3.153 sec;\n\ninteresting that dropping index 'idxidx' and creating a simple\ncreate index tididx on idx (tid);\nbehaves better, while the plan looks worse. 
Notice that the index scan on tididx\nhas a better cost estimate (16).\n\nselect msg_prt.tid as mid from msg_prt where exists (select idx.tid from idx where msg_prt.tid=idx.tid and idx.did=1 and idx.lid in ( 1207, 59587) )\nNOTICE:  QUERY PLAN:\n\nSeq Scan on msg_prt  (cost=0.00..1134474.94 rows=69505 width=4)\n  SubPlan\n    ->  Index Scan using tididx on idx  (cost=0.00..16.31 rows=1 width=4)\n\ntotal: 1.70 sec; number: 1; for one: 1.703 sec;\n\nInteresting that earlier, when I had 2 indexes - idxidx and tididx -\nthe optimizer chose tididx, while now (after patching) the optimizer\nalways chooses idxidx.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 5 Jun 2001 22:02:57 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: Strange query plan "
}
]
[
{
"msg_contents": "\nI'm trying to import data from a sybase bcp (tab separated dump) and am\nencountering a really odd datetime type:\n\n Mar 27 1994 12:00:00:000AM\n\nI've been looking in the books but haven't found anything yet and see\nnothing in any of the PostgreSQL docs. Anyone have any idea how I can\nbring this data in without having to write something to read from the\nsybase table and write to the postgres table? I'd like to use copy to\nkeep things simple.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 5 Jun 2001 13:26:38 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": true,
"msg_subject": "importing from sybase"
}
]
[
{
"msg_contents": "In PQexec() and also in parseInput() (both fe-exec.c) there is a provision\nfor, if more than one result set is returned, to concatenate the error\nmessages (while only returning the last result set). My question is how a\nbackend can return more than one error message per query string? The\ndescription of the protocol indicates that an ErrorResponse will either\ncause a connection close or the end of a query cycle.\n\nI am currently looking into extending the protocol so that more fields can\nbe in an ErrorResponse (e.g., error codes). If this were to happen then\nwe'd need a smarter way of handling more than one error message per cycle.\nHowever, I'd rather avoid that case in the first place.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 5 Jun 2001 21:50:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Can the backend return more than one error message per PQexec?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> In PQexec() and also in parseInput() (both fe-exec.c) there is a provision\n> for, if more than one result set is returned, to concatenate the error\n> messages (while only returning the last result set). My question is how a\n> backend can return more than one error message per query string?\n\nThat concatenation hack was added to deal with an actual case where\ninformation was getting dropped, but I am not sure that it was something\nthat would arise in the normal protocol. IIRC it was something like\n\n1. backend sends error in response to bogus user query;\n\n2. backend encounters fatal problem during error cleanup (or gets\n shutdown signal from postmaster), and sends another error message\n to indicate this before it closes up shop.\n\nI think there may also be cases where we need to stuff both\nbackend-generated messages and libpq-generated messages into the\nerror result. That doesn't directly affect the protocol however.\n\nSince there will always be asynchronous conditions to deal with, it'd\nbe pretty foolish to design a protocol that assumes that exactly one\n'E' message will arrive during a PQexec cycle.\n\n> I am currently looking into extending the protocol so that more fields can\n> be in an ErrorResponse (e.g., error codes). If this were to happen then\n> we'd need a smarter way of handling more than one error message per cycle.\n\nOnly if you want to overload ErrorResponse so that successive 'E'\nmessages mean different things. I do not think that would be a good\ndesign. It'd be better to allow ErrorResponse to carry multiple fields.\nThis'd imply a protocol version bump, but so what? Changing the\nsemantics of ErrorResponse probably ought to require that anyway.\n\n(I have some other ideas that would require a protocol version bump too,\nlike fixing the broken COPY and FastPath parts of the protocol...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 17:54:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can the backend return more than one error message per PQexec? "
},
{
"msg_contents": "Tom Lane writes:\n\n> Since there will always be asynchronous conditions to deal with, it'd\n> be pretty foolish to design a protocol that assumes that exactly one\n> 'E' message will arrive during a PQexec cycle.\n\nReasonable.\n\n> > I am currently looking into extending the protocol so that more fields can\n> > be in an ErrorResponse (e.g., error codes).  If this were to happen then\n> > we'd need a smarter way of handling more than one error message per cycle.\n>\n> Only if you want to overload ErrorResponse so that successive 'E'\n> messages mean different things.  I do not think that would be a good\n> design.  It'd be better to allow ErrorResponse to carry multiple fields.\n\nThat's the idea.  But I can hardly concatenate the error codes, can I?  It\nlooks as though we need an API where all the messages (errors + notices)\nfrom each query cycle are collected and can be cycled through after\ncompletion.\n\n> This'd imply a protocol version bump, but so what?  Changing the\n> semantics of ErrorResponse probably ought to require that anyway.\n\nI think I could have done with a minor bump, but if you have some plans,\ntoo, so much the easier.\n\n-- \nPeter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 6 Jun 2001 17:00:54 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Can the backend return more than one error message per PQexec?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> It'd be better to allow ErrorResponse to carry multiple fields.\n\n> That's the idea. But I can hardly concatenate the error codes, can I? I\n> looks as though we need an API where all the messages (errors + notices)\n> from each query cycle are collected and can be cycled through after\n> completion.\n\nOne way to do this that wouldn't involve breaking the protocol is\nto assign significance to linebreaks in an 'E' message's payload.\nI think someone proposed this already:\n\n\tERROR: blah blah\n\tCODE: 12345\n\tLOCATION: some/file.c line NNN\n\nie, lines starting with agreed-on keywords would be taken as conveying\nspecific fields. An arrangement like this could still work with plain\nconcatenation of multiple errors. Also, it would work tolerably well\nwhen fed to an old application that knows nothing of the convention and\njust prints out the error string. I'm leery of defining a whole new API\nthat must be used before one gets to see any of the added error\ninformation --- that would mean that a lot of people never will see it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Jun 2001 11:18:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can the backend return more than one error message per PQexec? "
},
{
"msg_contents": "Tom Lane writes:\n\n> One way to do this that wouldn't involve breaking the protocol is\n> to assign significance to linebreaks in an 'E' message's payload.\n\nSome fields may contain line breaks; for example, error messages\ndefinitely do now. We could pick a non-printing character (e.g., \\001)\nas the separator, but it might get ugly with multibyte. Of course table\nnames and such things can contain these characters as well, but I guess\nthere wouldn't be so much resistance in changing that. At worst we could\nmake up an escape mechanism.\n\n> Also, it would work tolerably well when fed to an old application that\n> knows nothing of the convention and just prints out the error string.\n> I'm leery of defining a whole new API that must be used before one\n> gets to see any of the added error information --- that would mean\n> that a lot of people never will see it.\n\nOkay, so PQerrorMessage will print the whole glob, whereas there would be\nan accessor method that would filter out the fields. That would also work\ntransparently with the notice processor, which I was concerned about until\nnow. I like it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 6 Jun 2001 17:47:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Can the backend return more than one error message per PQexec?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> One way to do this that wouldn't involve breaking the protocol is\n>> to assign significance to linebreaks in an 'E' message's payload.\n\n> Some fields may contain line breaks; for example, error messages\n> definitely do now.\n\nYes. I believe most of them are set up to indent the continuation\nlines, so there wouldn't be much of a problem interpreting the format.\nIn any case, we could say that only a line beginning with a known\nkeyword starts a new field.\n\n> Okay, so PQerrorMessage will print the whole glob, whereas there would be\n> an accessor method that would filter out the fields. That would also work\n> transparently with the notice processor, which I was concerned about until\n> now. I like it.\n\nWorks for me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Jun 2001 11:54:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can the backend return more than one error message per PQexec? "
},
{
"msg_contents": "Tom Lane writes:\n\n> > Some fields may contain line breaks; for example, error messages\n> > definitely do now.\n>\n> Yes.  I believe most of them are set up to indent the continuation\n> lines, so there wouldn't be much of a problem interpreting the format.\n\nThis is not reliable.\n\n(Actually, it's not desirable either.  Think about GUI or web applications\nor syslog:  These formatting attempts are meaningless there.  It'd be\nbetter to make error messages mostly one-liners and worry about formatting\nin the front-ends.  This is not a permanent workaround for the protocol\nproblems, though.)\n\n> In any case, we could say that only a line beginning with a known\n> keyword starts a new field.\n\nThat would probably require a protocol minor version bump anytime a\nkeyword is added.  A maintenance pain keeping both sides in sync.\n\nAlso, I imagine that with the nature of data that parse tree dumps and\nother such debugging info (vacuum verbose?) throw out, it's possible to\nhave misinterpretations -- or even malicious attacks -- with this scheme.\n\n[I wrote:]\n> > Okay, so PQerrorMessage will print the whole glob, whereas there would be\n> > an accessor method that would filter out the fields.\n\nI think there will be a problem with programs that parse the error messages\nfor lack of a better idea.  Also, newlines are non-printing sometimes (see\nabove), so this would produce a glob of garbage.\n\nI think an additional API is necessary.  If you want extra information you\nneed to ask for it.  In fact, most of the additional information will be\nintended to be processed by a program; the error message text is the only\nthing humans need to see by default.\n\n-- \nPeter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 7 Jun 2001 18:07:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Can the backend return more than one error message per PQexec?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n> Some fields may contain line breaks; for example, error messages\n> definitely do now.\n>> \n>> Yes. I believe most of them are set up to indent the continuation\n>> lines, so there wouldn't be much of a problem interpreting the format.\n\n> This is not reliable.\n\nIt could be made reliable if we wanted it to be. AFAIR, in all the\nplaces that currently emit error messages formatted like\n\n\tERROR: something\n\t\tsomething else\n\nthe \"something else\" is not really part of the error message anyway.\nIt is explanatory material, a suggested workaround, or something like\nthat. IMHO that ought to be treated as a secondary field of the error\nnow that we're going to have secondary fields. Something like\n\n\tERROR: Unable to identify an operator '!' for types 'int4' and 'int4'\n\tREMARK: You will have to retype this query using an explicit cast\n\n> (Actually, it's not desirable either. Think about GUI or web applications\n> or syslog: These formatting attempts are meaningless there. It'd be\n> better to make error message mostly one-liners and worry about formatting\n> in the front-ends.\n\nI agree, but see above.\n\n> Also, I imagine that with the nature of data that parse tree dumps and\n> other such debugging info (vacuum verbose?) throw out, it's possible to\n> have misinterpretations -- or even malicious attacks -- with these scheme.\n\nI was not anticipating imposing any such structure on NOTICE messages.\n\n> [I wrote:]\n> Okay, so PQerrorMessage will print the whole glob, whereas there would be\n> an accessor method that would filter out the fields.\n\nI still think that is the right approach. The accessor methods would\nbecome the preferred way over time, but we shouldn't hide available\ninformation just because someone is using an old client.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Jun 2001 11:01:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can the backend return more than one error message per PQexec? "
}
]
[
{
"msg_contents": "I know that BLOBs are on the TODO list, but I had an idea.\n\nI think the storage of a BLOB outside of the table is an elegant \nsolution and keeps table sizes down without the bloat of the stored \nobject.  Granted, if you are searching with a regular expression or \nusing like or ilike clauses, you're likely to be a little slower but it \nshouldn't be by much.  More than likely, you won't be searching for \npatterns in the BLOB but rather the fields in the table associated with \nthe BLOB.\n\nWouldn't it be wonderful if you used the methods you had already \nimplemented and instead created a behavior similar to the following.\n\non an insert\n    take the data that was to be the blob...\n    create your externally \"to be referenced\" file\n    save the data to the file\n    store the reference to that file\n\non an update\n    take the data that was to be the blob...\n    create your externally \"to be referenced\" file\n    save the data to the file\n    store the reference to that file\n    delete the old referenced file\n\non a delete\n    delete the reference to your file\n    delete the external file\n\nI was thinking that the BLOB column type might be a trigger for a macro \nthat could handle the lo_import, lo_export juggling...\n\nI know it seems overly simplified, but having fought with MySQL and then \ntrying to wrestle with postgresql and importing/exporting BLOBs, it \nseemed there might be a little more room for discussion, although I \ndoubt this may have added anything to it...\n\nI'd love to see something done with BLOB support during 7.2.x *hint* :)\n\nBesides, if someone could give me some pointers as to where I might be \nable to start, I might try to contribute something myself.\n\nThomas\n\n\n",
"msg_date": "Tue, 05 Jun 2001 16:31:05 -0500",
"msg_from": "Thomas Swan <tswan@olemiss.edu>",
"msg_from_op": true,
"msg_subject": "BLOBs"
},
{
"msg_contents": "Thomas Swan <tswan@olemiss.edu> writes:\n> I know that BLOBs are on the TODO list, but I had an idea.\n\nI think you just rediscovered TOAST.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 18:58:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BLOBs "
},
{
"msg_contents": "> Thomas Swan <tswan@olemiss.edu> writes:\n> > I know that BLOBs are on the TODO list, but I had an idea.\n> \n> I think you just rediscovered TOAST.\n\nWe have TOAST and people want to keep large objects for performance.  I\nthink we could use an API that allows TOAST binary access and large\nobject access using the same API, and hopefully an improved one.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Jun 2001 23:15:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BLOBs"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n>>Thomas Swan <tswan@olemiss.edu> writes:\n>>\n>>>I know that BLOBs are on the TODO list, but I had an idea.\n>>>\n>>I think you just rediscovered TOAST.\n>>\n>\n>We have TOAST and people want to keep large objects for performance.  I\n>think we could us an API that allows TOAST binary access and large\n>object access using the same API, and hopefully an improved one.\n>\nI think I missed what I was trying to say in my original statement.   I \nthink there's a way to use the existing API with performance benefits \nleft intact.\n\nTake for example the table :\ncreate table foo {\n    foo_id serial,\n    foo_name varchar(32),\n    foo_object BLOB,\n);\n\nOn the insert statement \"insert into foo (foo_name,foo_object) values \n('My Object','{some escaped arbitrary string of binary data}');\", flush \nthe {some escaped arbitrary string of binary data} to disk as a \ntemporary file.  Then do the lo_import operation transparent to the user.\n\nOn a select, do the same thing (transparently) and return the data back \nto user.\n\nPersonally, I like LO's being stored separately from the actual table.\n",
"msg_date": "Tue, 12 Jun 2001 17:19:46 -0500",
"msg_from": "Thomas Swan <tswan@olemiss.edu>",
"msg_from_op": true,
"msg_subject": "Re: BLOBs"
},
{
"msg_contents": "Thomas Swan <tswan@olemiss.edu> writes:\n> I think I missed what I was trying to say in my original statement. I \n> think there's a way to use the existing API with performance benefits \n> left intact.\n\n> Take for example the table :\n> create table foo {\n> foo_id serial,\n> foo_name varchar(32),\n> foo_object BLOB,\n> );\n\n> On the insert statement \"insert into foo (foo_name,foo_object) values \n> ('My Object','{some escaped arbitrary string of binary data}');\", flush \n> the {some escaped arbitrary string of binary data} to disk as a \n> temporary file. Then do the lo_import operation transparent to the user.\n\n> On a select, do the same thing (transparently) and return the data back \n> to user.\n\n> Personally, I like LO's being stored separately from the actual table.\n\nI still think you've rediscovered TOAST. How is this better than (or\neven significantly different from) foo_object being a toastable bytea\ncolumn?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Jun 2001 19:02:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BLOBs "
}
] |
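The distinction Tom is drawing in this thread — a toastable `bytea` column versus a separately stored large object — can be sketched in SQL. This is an illustrative comparison, not code from the thread; the `foo_lo` table and the `/tmp/myobject.bin` path are hypothetical, and the octal escapes follow the era's `bytea` input syntax:

```sql
-- TOAST approach: a toastable bytea column.  Large values are moved
-- out-of-line automatically; the data travels through an ordinary
-- INSERT/SELECT as an escaped string, with no extra API.
CREATE TABLE foo (
    foo_id     serial,
    foo_name   varchar(32),
    foo_object bytea
);
INSERT INTO foo (foo_name, foo_object)
    VALUES ('My Object', 'binary\\000data\\001here');

-- Large-object approach: the table stores only an OID; the bytes live
-- in separate LO storage and are reached through the lo_* functions.
CREATE TABLE foo_lo (
    foo_id     serial,
    foo_name   varchar(32),
    foo_object oid
);
INSERT INTO foo_lo (foo_name, foo_object)
    VALUES ('My Object', lo_import('/tmp/myobject.bin'));
```

Tom's point is that the first form already gives the transparent behavior Thomas describes, which is why he says the proposal rediscovers TOAST.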
[
{
"msg_contents": "Ruke:\ncheck out http://www.greatbridge.org/genpage?replication_top\nfor a project on PostGres replication and related info\n\n Mauricio\n\n\n>From: \"Ruke Wang\" <ruke@servgate.com>\n>To: <pgsql-hackers@postgresql.org>\n>Subject: [HACKERS] database synchronization\n>Date: Tue, 29 May 2001 11:36:13 -0700\n>\n>Hi there,\n>\n>I see that pgsql replication is on TODO list. I wonder whether there is \n>related sites about this issue or some developed resources.\n>\n>Thanks.\n>\n>Ruke Wang\n>Software Engineer\n>Servgate Technologies, Inc.\n>(408)324-5717\n\n_________________________________________________________________\nGet your FREE download of MSN Explorer at http://explorer.msn.com\n\n",
"msg_date": "Tue, 05 Jun 2001 17:00:02 -0500",
"msg_from": "\"Mauricio Breternitz\" <mbjsql@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: database synchronization"
}
] |
[
{
"msg_contents": "Hi All,\n\nThis is my first post, so I hope I'm in the right area and doing it correctly.\n\nWe are having MAJOR & URGENT problems with Postresql occaisonly corrupting tables on insert. I had a quick look through your archive and couldn't find anything. It seems to happen mostly on large inserts (lots of data into one text field). This results in corrupting the table and hanging the psql console whenever I try to INSERT, UPDATE, DELETE, SELECT, etc. Doing an \"EXPLAIN SELECT * FROM table\" shows that I have around 100 - 1000 extra rows. The problem is often fixed by running VACUUM against the table, however VACUUM often hangs leaving the table locked until I delete the lock file. \n\nIts only a basic INSERT statement into a basic table.\n\nThanks in advance.\n\nBruce Irvine.\nSportingPulse.\n",
"msg_date": "Wed, 6 Jun 2001 09:47:10 +1000",
"msg_from": "\"Bruce Irvine\" <b.irvine@sportingpulse.com>",
"msg_from_op": true,
"msg_subject": "URGENT PROBLEM"
},
{
"msg_contents": "How 'bout posting what version of pgsql you're running, and we'll start \nback at square one :)\n\n-d\n\nBruce Irvine wrote:\n\n> Hi All,\n>\n> \n>\n> This is my first post, so I hope I'm in the right area and doing it \n> correctly.\n>\n> \n>\n> We are having MAJOR & URGENT problems with Postresql occaisonly \n> corrupting tables on insert. I had a quick look through your archive \n> and couldn't find anything. It seems to happen mostly on large inserts \n> (lots of data into one text field). This results in corrupting the \n> table and hanging the psql console whenever I try to INSERT, UPDATE, \n> DELETE, SELECT, etc. Doing an \"EXPLAIN SELECT * FROM table\" shows that \n> I have around 100 - 1000 extra rows. The problem is often fixed by \n> running VACUUM against the table, however VACUUM often hangs leaving \n> the table locked until I delete the lock file. \n>\n> \n>\n> Its only a basic INSERT statement into a basic table.\n>\n> \n>\n> Thanks in advance.\n>\n> \n>\n> Bruce Irvine.\n>\n> SportingPulse.\n>\n\n\n",
"msg_date": "Tue, 05 Jun 2001 17:08:08 -0700",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: URGENT PROBLEM"
},
{
"msg_contents": "\"Bruce Irvine\" <b.irvine@sportingpulse.com> writes:\n> We are having MAJOR & URGENT problems with Postresql occaisonly corrupting\n> tables on insert.\n\nCan't help you with that much information.\n\nWhat Postgres version is this? (If your answer is not \"7.0.3\" or\n\"7.1.2\", I'm going to tell you to upgrade before anything else.)\n\nWhat is the table schema? What kind of corruption do you see exactly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 20:09:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: URGENT PROBLEM "
}
] |
[
{
"msg_contents": "Currently, if the client application dies (== closes the connection),\nthe backend will observe this and exit when it next returns to the\nouter loop and tries to read a new command. However, we might detect\nthe loss of connection much sooner; for example, if we are doing a\nSELECT that outputs large amounts of data, we will see failures from\nsend().\n\nWe have deliberately avoided trying to abort as soon as the connection\ndrops, for fear that that might cause unexpected problems. However,\nit's moderately annoying to see the postmaster log fill with\n\"pq_flush: send() failed\" messages when something like this happens.\n\nIt occurs to me that a fairly safe way to abort after loss of connection\nwould be for pq_flush or pq_recvbuf to set QueryCancel when they detect\na communications problem. This would not immediately abort the query in\nprogress, but would ensure a cancel at the next safe time in the\nper-tuple loop. You wouldn't get very much more output before that\nhappened, typically.\n\nThoughts? Is there anything about this that might be unsafe? Should\nQueryCancel be set after *any* failure of recv() or send(), or only\nif certain errno codes are detected (and if so, which ones)?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2001 20:01:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Idea: quicker abort after loss of client connection"
},
{
"msg_contents": "On Tue, Jun 05, 2001 at 08:01:02PM -0400, Tom Lane wrote:\n> \n> Thoughts? Is there anything about this that might be unsafe? Should\n> QueryCancel be set after *any* failure of recv() or send(), or only\n> if certain errno codes are detected (and if so, which ones)?\n\nStevens identifies some errno codes that are not significant;\nin particular, EINTR, EAGAIN, and EWOULDBLOCK. Of these, maybe\nonly the first occurs on a blocking socket.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 6 Jun 2001 12:43:22 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Idea: quicker abort after loss of client connection"
},
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> On Tue, Jun 05, 2001 at 08:01:02PM -0400, Tom Lane wrote:\n>> Thoughts? Is there anything about this that might be unsafe? Should\n>> QueryCancel be set after *any* failure of recv() or send(), or only\n>> if certain errno codes are detected (and if so, which ones)?\n\n> Stevens identifies some errno codes that are not significant;\n> in particular, EINTR, EAGAIN, and EWOULDBLOCK. Of these, maybe\n> only the first occurs on a blocking socket.\n\nWe already loop for EINTR. I'm just wondering what to do after we've\ngiven up retrying.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Jun 2001 17:26:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea: quicker abort after loss of client connection "
},
{
"msg_contents": "> It occurs to me that a fairly safe way to abort after loss of connection\n> would be for pq_flush or pq_recvbuf to set QueryCancel when they detect\n> a communications problem. This would not immediately abort the query in\n> progress, but would ensure a cancel at the next safe time in the\n> per-tuple loop. You wouldn't get very much more output before that\n> happened, typically.\n> \n> Thoughts? Is there anything about this that might be unsafe? Should\n> QueryCancel be set after *any* failure of recv() or send(), or only\n> if certain errno codes are detected (and if so, which ones)?\n\nSeems like a good idea to set Cancel.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 10 Jun 2001 23:20:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea: quicker abort after loss of client connection"
}
] |
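Tom's proposal — keep retrying transient errno values, but turn a hard send() failure into a deferred cancel rather than an immediate abort — can be sketched in C. This is a self-contained mock, not the actual pq_flush code: `query_cancel`, `flush_buffer`, and the fake transports are invented names for illustration.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the backend's QueryCancel flag (hypothetical name). */
static volatile bool query_cancel = false;

/* A send()-like callback so the sketch runs without a real socket. */
typedef long (*send_fn)(const char *buf, size_t len);

/* Flush a buffer.  EINTR is retried, as the backend already does; any
 * other failure sets the cancel flag so the query stops at the next
 * safe point in the per-tuple loop instead of aborting mid-operation. */
static int flush_buffer(send_fn xmit, const char *buf, size_t len)
{
    while (len > 0) {
        long n = xmit(buf, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;            /* transient interruption: retry */
            query_cancel = true;     /* hard failure: soft-cancel query */
            return -1;
        }
        buf += n;
        len -= (size_t) n;
    }
    return 0;
}

/* Fake transports: one interrupted twice before succeeding, one dead. */
static int eintr_left = 2;
static long flaky_send(const char *buf, size_t len)
{
    (void) buf;
    if (eintr_left-- > 0) {
        errno = EINTR;               /* interrupted, caller should retry */
        return -1;
    }
    return (long) len;               /* whole buffer sent */
}

static long dead_send(const char *buf, size_t len)
{
    (void) buf;
    (void) len;
    errno = ECONNRESET;              /* peer vanished */
    return -1;
}
```

Per Nathan's note, on a blocking socket EINTR is normally the only retryable case; on a non-blocking socket, EAGAIN/EWOULDBLOCK would belong in the retry branch as well.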
[
{
"msg_contents": "Hello...\n\nWhy does Postgresql order the uppercase letters first?\n\nI have e.g. a table with one column, and in this column there are the following values:\n\nrow1\n----\nADC\naa\nABC\n\nWith this select-syntax \n\nselect * from table order by row1\n\nI get this output\n\nABC\nADC\naa\n\nbut I want this output:\n\naa\nABC\nADC\n\nWhat am I doing wrong?\n\n-- \nRegards: Severin Olloz\n",
"msg_date": "Wed, 6 Jun 2001 02:55:50 +0200",
"msg_from": "Severin Olloz <S.Olloz@soid.ch>",
"msg_from_op": true,
"msg_subject": "ORDER BY Problem..."
},
{
"msg_contents": "Severin Olloz wrote:\n\n> Why does Postgresql order the uppercase letters first?\n\nDo you have any LOCALE configuration in place?\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Wed, 06 Jun 2001 11:47:44 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY Problem..."
},
{
"msg_contents": "> Hello...\n> \n> Why does Postgresql order the uppercase letters first?\n> \n> I have e.g. a table with one row an in this row there are follow\n> values:\n> \n> row1\n> ----\n> ADC\n> aa\n> ABC\n> \n> With this select-syntax \n> \n> select * from table order by row1\n> \n> I become this output\n> \n> ABC\n> ADC\n> aa\n> \n> but I want this ouptut:\n> \n> aa\n> ABC\n> ADC\n> \n> What do I wrong?\n\nThis will not solve your problem, but a way around this is to sort on upper\n(row1):\n\n# select * from test order by col1;\n col1\n------\n ABCD\n AD\n Abc\n(3 rows)\n\n# select * from test order by upper(col1);\n col1\n------\n Abc\n ABCD\n AD\n(3 rows)\n\n\n",
"msg_date": "Wed, 6 Jun 2001 14:29:25 +0200 (CEST)",
"msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY Problem..."
},
{
"msg_contents": "As far as I know, this is the standard (ASCII-ordered) way of sorting text.\nFor example, MySQL does the same thing.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Severin Olloz\n> Sent: Wednesday, 6 June 2001 8:56 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] ORDER BY Problem...\n>\n>\n> Hello...\n>\n> Why does Postgresql order the uppercase letters first?\n>\n> I have e.g. a table with one row an in this row there are follow values:\n>\n> row1\n> ----\n> ADC\n> aa\n> ABC\n>\n> With this select-syntax\n>\n> select * from table order by row1\n>\n> I become this output\n>\n> ABC\n> ADC\n> aa\n>\n> but I want this ouptut:\n>\n> aa\n> ABC\n> ADC\n>\n> What do I wrong?\n>\n> --\n> Gruss: Severin Olloz\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Thu, 7 Jun 2001 10:10:12 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: ORDER BY Problem..."
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n\n> As far as I know, this is the standard (ASCII-ordered) way of sorting text.\n\nNo, it's the \"we don't know anything about text, but we can compare\ntheir numeric values\" approach. \n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "06 Jun 2001 22:37:57 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY Problem..."
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> As far as I know, this is the standard (ASCII-ordered) way of sorting text.\n> For example, MySQL does the same thing.\n> \n\nActually it seems that Severin is using the ASCII locale instead of\nen_US or \nsome other case-insensitive one.\n\n> >\n> > but I want this ouptut:\n> >\n> > aa\n> > ABC\n> > ADC\n> >\n> > What do I wrong?\n> >\n\n-----------\nHannu\n",
"msg_date": "Thu, 07 Jun 2001 08:09:18 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY Problem..."
}
] |