[ { "msg_contents": "Thomas Lockhart wrote:\n> ...\n> > Seems that isn't helping enough to reduce the number of people who are\n> > surprised by our behavior. I don't think anyone would be surprised by\n> > statement time.\n> \n> I think that there is no compelling reason for changing the current \n> behavior. There is no *single* convention used by all other databases, \n> and *if* the standard specifies this as \"statement time\" then afaict no \n> database implements that exactly.\n\nI was attempting to get closer to the standards and to other databases,\nand to make it perhaps more intuitive.\n\n> Transaction time is the only relatively deterministic time, and other \n> times are (or could be) available using other function calls. So what \n> problem are we trying to solve?\n> \n> There is no evidence that a different convention would change the number \n> of folks who do not understand what convention was chosen.\n> \n> Arguing to change the current implementation without offering to include \n> the functionality to handle all of the scenarios seems to be premature. \n> And arguing that a change would be clearer to some folks is not \n> compelling; \"transaction start\" is at least as easily understood as any \n> other definition we could make.\n\nYes, clearly, we will need to have all three time values available to\nusers. With three people now suggesting we don't change, I will just\nadd to TODO:\n\n\tAdd now(\"transaction|statement|clock\") functionality\n\nIs that good?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 21:58:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" },
{ "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n\n> Yes, clearly, we will need to have all three time values available to\n> users. With three people now suggesting we don't change, I will just\n> add to TODO:\n>\n> Add now(\"transaction|statement|clock\") functionality\n>\n> Is that good?\n\nCURRENT_TIMESTAMP etc. are converted to now()::TIMESTAMP, at least in 7.2,\nright?\nSo when there are all three options available, it would be easy to change\nthe behaviour of CURRENT_DATE/TIME/TIMESTAMP, right?\n\nSET .. or GUC would be options, no?\n\nBest Regards,\nMichael Paesold\n\n", "msg_date": "Fri, 4 Oct 2002 18:14:04 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: CURRENT_TIMESTAMP" },
{ "msg_contents": "Michael Paesold wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> \n> > Yes, clearly, we will need to have all three time values available to\n> > users. With three people now suggesting we don't change, I will just\n> > add to TODO:\n> >\n> > Add now(\"transaction|statement|clock\") functionality\n> >\n> > Is that good?\n> \n> CURRENT_TIMESTAMP etc. are converted to now()::TIMESTAMP, at least in 7.2,\n> right?\n> So when there are all three options available, it would be easy to change\n> the behaviour of CURRENT_DATE/TIME/TIMESTAMP, right?\n> \n> SET .. or GUC would be options, no?\n\nWell, we are going to make all of them available in 7.4, but I don't see\na reason to have CURRENT_TIMESTAMP changing based on GUC. If you want a\nspecific one, use now(\"string\").\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 13:48:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CURRENT_TIMESTAMP" } ]
[ { "msg_contents": "Did anybody think about threaded sorting so far?\nAssume an SMP machine. In the case of building an index or in the case \nof sorting a lot of data there is just one backend working. Therefore \njust one CPU is used.\nWhat about starting a thread for every temporary file being created? \nThis way CREATE INDEX could use many CPUs.\nMaybe this is worth thinking about because it will speed up huge \ndatabases and enterprise level computing.\n\n Best regards,\n\n Hans-Jürgen Schönig\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 04 Oct 2002 09:46:45 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": true, "msg_subject": "Threaded Sorting" },
{ "msg_contents": "On 4 Oct 2002 at 9:46, Hans-Jürgen Schönig wrote:\n\n> Did anybody think about threaded sorting so far?\n> Assume an SMP machine. In the case of building an index or in the case \n> of sorting a lot of data there is just one backend working. Therefore \n> just one CPU is used.\n> What about starting a thread for every temporary file being created? \n> This way CREATE INDEX could use many CPUs.\n> Maybe this is worth thinking about because it will speed up huge \n> databases and enterprise level computing.\n\nI have a better plan. I have a thread architecture ready which acts as generic \nthread templates. Even the function pointers in the thread can be altered on \nthe fly.\n\nI suggest we use some such architecture for threading. It can be used in any \nmodule without hardcoding things. Like say in sorting we assign exclusive \njobs/data ranges to threads then there would be minimum locking and one thread \ncould merge the results.. Something like that.\n\nAll it takes is to change entry functions to accept one more parameter that \nindicates the range of values to act upon. In the non-threaded version, it's not \nthere because the function acts on the entire data set.\n\nFurthermore, with this model threading support can be turned off easily. In the \nnon-threaded model, a wrapper function can call the entry point in series with \nnecessary arguments. So postgresql does not have to deal with not-so-good-\nenough thread implementations. Keeping with the tradition of conservative \ndefaults we can set default threads to off..\n\nThe code is in C++ but it's hardly a couple of pages. I can convert it to C and \npost it if required..\n\nLet me know..\n\nBye\n Shridhar\n\n--\nParkinson's Fourth Law:\tThe number of people in any working group tends to \nincrease\tregardless of the amount of work to be done.\n\n", "msg_date": "Fri, 04 Oct 2002 13:24:47 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at> writes:\n> Did anybody think about threaded sorting so far?\n> Assume an SMP machine. In the case of building an index or in the case \n> of sorting a lot of data there is just one backend working. Therefore \n> just one CPU is used.\n> What about starting a thread for every temporary file being created? \n> This way CREATE INDEX could use many CPUs.\n\nIn my experience, once you have enough data to force a temp file to be\nused, the sort algorithm is I/O bound anyway. Throwing more CPUs at it\nwon't help much.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 09:45:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting " },
{ "msg_contents": "I wouldn't hold your breath for any form of threading. Since PostgreSQL\nis process based, you might consider having a pool of sort processes\nwhich address this but I doubt you'll get anywhere talking about threads\nhere.\n\nGreg\n\n\nOn Fri, 2002-10-04 at 02:46, Hans-Jürgen Schönig wrote:\n> Did anybody think about threaded sorting so far?\n> Assume an SMP machine. In the case of building an index or in the case \n> of sorting a lot of data there is just one backend working. Therefore \n> just one CPU is used.\n> What about starting a thread for every temporary file being created? \n> This way CREATE INDEX could use many CPUs.\n> Maybe this is worth thinking about because it will speed up huge \n> databases and enterprise level computing.\n> \n> Best regards,\n> \n> Hans-Jürgen Schönig\n> \n> -- \n> *Cybertec Geschwinde u Schoenig*\n> Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria\n> Tel: +43/1/913 68 09; +43/664/233 90 75\n> www.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n> <http://cluster.postgresql.at>, www.cybertec.at \n> <http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html", "msg_date": "04 Oct 2002 09:22:00 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Greg Copeland wrote:\n\n>I wouldn't hold your breath for any form of threading. Since PostgreSQL\n>is process based, you might consider having a pool of sort processes\n>which address this but I doubt you'll get anywhere talking about threads\n>here.\n>\n>Greg\n>\n> \n>\n\nI came across the problem yesterday. We thought about SMP and did some \ntests on huge tables. The postmaster was running full speed to get the \nstuff sorted - even on an IDE system.\nI asked my friends who are doing a lot of work with Oracle on huge SMP \nmachines. I was told that Oracle has a mechanism which can run efficient \nsorts on SMP machines. It seems to speed up sorting a lot.\n\nIf we could reduce the time needed to build up an index by 25% it would \nbe a wonderful thing. Just think of a scenario:\n1 thread: 24 hours\nmany threads: 18 hours\n\nWe could gain 6 hours which is a LOT.\nWe have many people running PostgreSQL on systems having wonderful IO \nsystems - in this case IO is not the bottleneck anymore.\n\nI had a brief look at the code used for sorting. It is very well \ndocumented so maybe it is worth thinking about a parallel algorithm.\n\nWhen talking about threads: A pool of processes for sorting? Maybe this \ncould be useful but I doubt if it is the best solution to avoid overhead.\nSomewhere in the TODO it says that there will be experiments with a \nthreaded backend. This makes me think that threads are not a big no no.\n\n Hans\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 04 Oct 2002 16:40:37 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "On Fri, 2002-10-04 at 09:40, Hans-Jürgen Schönig wrote:\n> \n> I had a brief look at the code used for sorting. It is very well \n> documented so maybe it is worth thinking about a parallel algorithm.\n> \n> When talking about threads: A pool of processes for sorting? Maybe this \n> could be useful but I doubt if it is the best solution to avoid overhead.\n> Somewhere in the TODO it says that there will be experiments with a \n> threaded backend. This makes me think that threads are not a big no no.\n> \n> Hans\n\nThat was a fork IIRC. Threading is not used in baseline PostgreSQL nor\nare there any such plans that I'm aware of. People from time to time ask\nabout threads for this or that and are always told what I'm telling\nyou. The use of threads leads to portability issues not to mention\nPostgreSQL is entirely built around the process model.\n\nTom is right to dismiss the notion of adding additional CPUs to\nsomething that is already I/O bound, however, the concept itself should\nnot be dismissed. Applying multiple CPUs to a sort operation is well\naccepted and understood technology.\n\nAt this point, perhaps Tom or one of the other core developers having\ninsight in this area would be willing to address how readily such a\nmechanism could be put in place.\n\nAlso, don't be so fast to dismiss what the process model can do. There\nis no reason to believe that having a process pool would not be able to\nperform wonderful things if implemented properly. Basically, the notion\nwould be that the backend processing the query would solicit assistance\nfrom the sort pool if one or more processes were available. At that\npoint, several methods could be employed to divide the work. Some form\nof threshold would also have to be created to prevent the pool from\nbeing used when a single backend is capable of addressing the need. \nBasically the idea is, you only have the pool assist with large tuple\ncounts and then, only when resources are available from within the\npool. By doing this, you avoid additional overhead for small sort\nefforts and gain when it matters the most.\n\n\nRegards,\n\n\tGreg", "msg_date": "04 Oct 2002 10:22:28 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Threads are not the best solution when it comes to portability. I \nprefer a process model as well.\nMy concern was that a process model might be a bit too slow for that but \nif we had processes in memory this would be a wonderful thing.\nUsing it for small amounts of data is pretty useless - I totally agree \nbut when it comes to huge amounts of data it can be useful.\n\nIt is a mechanism for huge installations with a lot of data.\n\n Hans\n\n\n\n>That was a fork IIRC. Threading is not used in baseline PostgreSQL nor\n>are there any such plans that I'm aware of. People from time to time ask\n>about threads for this or that and are always told what I'm telling\n>you. The use of threads leads to portability issues not to mention\n>PostgreSQL is entirely built around the process model.\n>\n>Tom is right to dismiss the notion of adding additional CPUs to\n>something that is already I/O bound, however, the concept itself should\n>not be dismissed. Applying multiple CPUs to a sort operation is well\n>accepted and understood technology.\n>\n>At this point, perhaps Tom or one of the other core developers having\n>insight in this area would be willing to address how readily such a\n>mechanism could be put in place.\n>\n>Also, don't be so fast to dismiss what the process model can do. There\n>is no reason to believe that having a process pool would not be able to\n>perform wonderful things if implemented properly. Basically, the notion\n>would be that the backend processing the query would solicit assistance\n>from the sort pool if one or more processes were available. At that\n>point, several methods could be employed to divide the work. Some form\n>of threshold would also have to be created to prevent the pool from\n>being used when a single backend is capable of addressing the need. \n>Basically the idea is, you only have the pool assist with large tuple\n>counts and then, only when resources are available from within the\n>pool. By doing this, you avoid additional overhead for small sort\n>efforts and gain when it matters the most.\n>\n>\n>Regards,\n>\n>\tGreg\n>\n> \n>\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 04 Oct 2002 17:37:16 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "On Fri, 2002-10-04 at 10:37, Hans-Jürgen Schönig wrote:\n> My concern was that a process model might be a bit too slow for that but \n> if we had processes in memory this would be a wonderful thing.\n\nYes, that's the point of having a pool. The idea is that not only do you\navoid process creation and destruction, which is notoriously expensive on\nmany platforms; they would sit idle until signaled to begin working on\ntheir assigned sort operation. Ideally, these would be configurable\noptions which would include items such as pool size (maximum number of\nprocesses in the pool), max concurrency level (maximum number of processes\nfrom the pool which can contribute to a single backend) and tuple count\nthreshold (the threshold which triggers solicitation for assistance from the\nsort pool).\n\n> Using it for small amounts of data is pretty useless - I totally agree \n> but when it comes to huge amounts of data it can be useful.\n> \n> It is a mechanism for huge installations with a lot of data.\n> \n> Hans\n\nAgreed. Thus the importance of being able to specify some type of\nmeaningful threshold.\n\nAny of the core developers wanna chime in here on this concept?\n\nGreg", "msg_date": "04 Oct 2002 10:44:48 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Hans-Jürgen Schönig wrote:\n> Did anybody think about threaded sorting so far?\n> Assume an SMP machine. In the case of building an index or in the case \n> of sorting a lot of data there is just one backend working. Therefore \n> just one CPU is used.\n> What about starting a thread for every temporary file being created? \n> This way CREATE INDEX could use many CPUs.\n> Maybe this is worth thinking about because it will speed up huge \n> databases and enterprise level computing.\n\nWe haven't thought about it yet because there are too many buggy thread\nimplementations. We are probably just now getting to a point where we\ncan consider it. However, lots of databases have moved to threads for\nall sorts of things and ended up with a royal mess of code. Threads\ncan only improve things in a few areas of the backend so it would be\nnice if we could limit the exposure to threads to those areas; sorting\ncould certainly be one of them, but frankly, I think disk I/O is our\nlimiting factor there. I would be interested to see some tests that\nshowed otherwise.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 11:56:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Threads are bad - I know ...\nI like the idea of a pool of processes instead of threads - from my \npoint of view this would be useful.\n\nI am planning to run some tests (GEQO, AIX, sorts) as soon as I have \ntime to do so (still too much work ahead before :( ...).\nIf I had time I'd love to do something for the PostgreSQL community :(.\n\nAs far as sorting is concerned: It would be fine if it was possible to \ndefine an alternative location for temporary sort files using SET.\nIf you had multiple disks this would help in the case of concurrent \nsorts because this way people could insert and index many tables at once \nwithout having to access just one storage system.\nThis would be an easy way out of the IO limitation ... - at least for \nsome problems.\n\n Hans\n\n\n\nBruce Momjian wrote:\n\n>\n>We haven't thought about it yet because there are too many buggy thread\n>implementations. We are probably just now getting to a point where we\n>can consider it. However, lots of databases have moved to threads for\n>all sorts of things and ended up with a royal mess of code. Threads\n>can only improve things in a few areas of the backend so it would be\n>nice if we could limit the exposure to threads to those areas; sorting\n>could certainly be one of them, but frankly, I think disk I/O is our\n>limiting factor there. I would be interested to see some tests that\n>showed otherwise.\n> \n>\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 04 Oct 2002 18:25:14 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "On Fri, 4 Oct 2002, Bruce Momjian wrote:\n\n> Hans-Jürgen Schönig wrote:\n> > Did anybody think about threaded sorting so far?\n> > Assume an SMP machine. In the case of building an index or in the case \n> > of sorting a lot of data there is just one backend working. Therefore \n> > just one CPU is used.\n> > What about starting a thread for every temporary file being created? \n> > This way CREATE INDEX could use many CPUs.\n> > Maybe this is worth thinking about because it will speed up huge \n> > databases and enterprise level computing.\n> \n> We haven't thought about it yet because there are too many buggy thread\n> implementations. We are probably just now getting to a point where we\n> can consider it. However, lots of databases have moved to threads for\n> all sorts of things and ended up with a royal mess of code. Threads\n> can only improve things in a few areas of the backend so it would be\n> nice if we could limit the exposure to threads to those areas; sorting\n> could certainly be one of them, but frankly, I think disk I/O is our\n> limiting factor there. I would be interested to see some tests that\n> showed otherwise.\n\nWouldn't the type of disk subsystem really make a big difference here?\n\nWith a couple of U160 cards and a dozen 15krpm hard drives, I would \nimagine I/O would no longer be as much of an issue as a single drive \nsystem would be.\n\nIt seems like sometimes we consider these issues more from the one or two \nSCSI drives perspective instead of the big box o' drives perspective.\n\n", "msg_date": "Fri, 4 Oct 2002 10:35:38 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Hans-Jürgen Schönig wrote:\n> Threads are bad - I know ...\n> I like the idea of a pool of processes instead of threads - from my \n> point of view this would be useful.\n> \n> I am planning to run some tests (GEQO, AIX, sorts) as soon as I have \n> time to do so (still too much work ahead before :( ...).\n> If I had time I'd love to do something for the PostgreSQL community :(.\n> \n> As far as sorting is concerned: It would be fine if it was possible to \n> define an alternative location for temporary sort files using SET.\n> If you had multiple disks this would help in the case of concurrent \n> sorts because this way people could insert and index many tables at once \n> without having to access just one storage system.\n> This would be an easy way out of the IO limitation ... - at least for \n> some problems.\n\nBingo! Want to increase sorting performance, give it more I/O\nbandwidth, and it will take 1/100th of the time to do threading.\n\nIngres had a nice feature where you could specify sort directories and\nit would cycle through those directories while it did the tape sort.\n\nAdded to TODO:\n\n\t* Allow sorting to use multiple work directories\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 13:26:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "scott.marlowe wrote:\n> > We haven't thought about it yet because there are too many buggy thread\n> > implementations. We are probably just now getting to a point where we\n> > can consider it. However, lots of databases have moved to threads for\n> > all sorts of things and ended up with a royal mess of code. Threads\n> > can only improve things in a few areas of the backend so it would be\n> > nice if we could limit the exposure to threads to those areas; sorting\n> > could certainly be one of them, but frankly, I think disk I/O is our\n> > limiting factor there. I would be interested to see some tests that\n> > showed otherwise.\n> \n> Wouldn't the type of disk subsystem really make a big difference here?\n> \n> With a couple of U160 cards and a dozen 15krpm hard drives, I would \n> imagine I/O would no longer be as much of an issue as a single drive \n> system would be.\n> \n> It seems like sometimes we consider these issues more from the one or two \n> SCSI drives perspective instead of the big box o' drives perspective.\n\nYes, it is mostly for non-RAID drives, but also, sometimes single drives\ncan be faster. When you have a drive array, it isn't as easy to hit\neach drive and keep it running sequentially. Of course, I don't have\nany hard numbers on that. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 13:28:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Bruce Momjian wrote:\n> \n> scott.marlowe wrote:\n<snip>\n> > It seems like sometimes we consider these issues more from the one or two\n> > SCSI drives perspective instead of the big box o' drives perspective.\n> \n> Yes, it is mostly for non-RAID drives, but also, sometimes single drives\n> can be faster. When you have a drive array, it isn't as easy to hit\n> each drive and keep it running sequentially. Of course, I don't have\n> any hard numbers on that. ;-)\n\nArrghh... please remember that \"big bunch of drives\" != \"all in one\narray\".\n\nIt's common to have a bunch of drives and allocate different ones for\ndifferent tasks appropriately, whether in array sets, individually,\nmirrored, etc.\n\n100% totally feasible to have a separate 15k SCSI drive or two just\npurely for doing sorts if it would assist in throughput.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 05 Oct 2002 03:35:19 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Bingo! Want to increase sorting performance, give it more I/O\n> bandwidth, and it will take 1/100th of the time to do threading.\n\n> Added to TODO:\n> \t* Allow sorting to use multiple work directories\n\nYeah, I like that. Actually it should apply to all temp files not only\nsorting.\n\nA crude hack would be to allow there to be multiple pg_temp_NNN/\nsubdirectories (read symlinks) in a database, and then the code would\nautomatically switch among these.\n\nProbably a cleaner idea would be to somehow integrate this with\ntablespace management --- if you could mark some tablespaces as intended\nfor temp stuff, the system could round-robin among those as it creates\ntemp files and/or temp tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 14:54:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting " },
{ "msg_contents": "On Fri, 2002-10-04 at 12:26, Bruce Momjian wrote:\n> Added to TODO:\n> \n> \t* Allow sorting to use multiple work directories\n\nWhy wouldn't that fall under the table space effort???\n\nGreg", "msg_date": "04 Oct 2002 14:00:53 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Bingo! Want to increase sorting performance, give it more I/O\n> > bandwidth, and it will take 1/100th of the time to do threading.\n> \n> > Added to TODO:\n> > \t* Allow sorting to use multiple work directories\n> \n> Yeah, I like that. Actually it should apply to all temp files not only\n> sorting.\n> \n> A crude hack would be to allow there to be multiple pg_temp_NNN/\n> subdirectories (read symlinks) in a database, and then the code would\n> automatically switch among these.\n\nTODO updated:\n\n\t* Allow sorting/temp files to use multiple work directories \n\nTom, what temp files do we use that aren't for sorting; I forgot.\n \n> Probably a cleaner idea would be to somehow integrate this with\n> tablespace management --- if you could mark some tablespaces as intended\n> for temp stuff, the system could round-robin among those as it creates\n> temp files and/or temp tables.\n\nYes, tablespaces would be the place for this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 15:02:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Greg Copeland wrote:\n-- Start of PGP signed section.\n> On Fri, 2002-10-04 at 12:26, Bruce Momjian wrote:\n> > Added to TODO:\n> > \n> > \t* Allow sorting to use multiple work directories\n> \n> Why wouldn't that fall under the table space effort???\n\nYes, but we make it a separate item so we are sure it is implemented\nas part of tablespaces.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 15:02:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "I see. I just always assumed that it would be done as part of the table\nspace effort as it's such a de facto feature.\n\nI am curious as to why no one has commented on the other rather obvious\nperformance enhancement which was brought up in this thread. Allowing\nfor parallel sorting seems rather obvious and is a common enhancement\nyet seems to have been completely dismissed as people seem to be fixated\non I/O. Go figure. \n\nGreg\n\n\n\nOn Fri, 2002-10-04 at 14:02, Bruce Momjian wrote:\n> Greg Copeland wrote:\n> -- Start of PGP signed section.\n> > On Fri, 2002-10-04 at 12:26, Bruce Momjian wrote:\n> > > Added to TODO:\n> > > \n> > > \t* Allow sorting to use multiple work directories\n> > \n> > Why wouldn't that fall under the table space effort???\n> \n> Yes, but we make it a separate item so we are sure it is implemented\n> as part of tablespaces.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org", "msg_date": "04 Oct 2002 14:12:11 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Bingo = great :).\nThe I/O problem seems to be solved :).\n\nA table space concept would be top of the hitlist :).\n\nThe symlink version is not very comfortable and I think it would be a \nreal hack.\nAlso: If we had a clean table space concept it would be a real advantage.\nIn the first place it would be enough to define a directory (alter \ntablespace, changing sizes etc. could be a lot of work).\n\nWhat could CREATE TABLESPACE look like?\nPersonally I like the Oracle syntax.\n\nIs it already time to work on the parser for CREATE/ALTER/DROP TABLESPACE?\n\n Hans\n\n\n\nTom Lane wrote:\n\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n>\n>>Bingo! Want to increase sorting performance, give it more I/O\n>>bandwidth, and it will take 1/100th of the time to do threading.\n>> \n>>\n>\n> \n>\n>>Added to TODO:\n>>\t* Allow sorting to use multiple work directories\n>> \n>>\n>\n>Yeah, I like that. Actually it should apply to all temp files not only\n>sorting.\n>\n>A crude hack would be to allow there to be multiple pg_temp_NNN/\n>subdirectories (read symlinks) in a database, and then the code would\n>automatically switch among these.\n>\n>Probably a cleaner idea would be to somehow integrate this with\n>tablespace management --- if you could mark some tablespaces as intended\n>for temp stuff, the system could round-robin among those as it creates\n>temp files and/or temp tables.\n>\n>\t\t\tregards, tom lane\n> \n>\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 04 Oct 2002 21:13:28 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Greg Copeland wrote:\n-- Start of PGP signed section.\n> I see. I just always assumed that it would be done as part of the table\n> space effort as it's such a de facto feature.\n> \n> I am curious as to why no one has commented on the other rather obvious\n> performance enhancement which was brought up in this thread. Allowing\n> for parallel sorting seems rather obvious and is a common enhancement\n> yet seems to have been completely dismissed as people seem to be fixated\n> on I/O. Go figure. \n\nWe think we are fixated on I/O because we think that is where the delay\nis. Is there a reason we shouldn't think that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 15:15:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" },
{ "msg_contents": "Well, that's why I was soliciting developer input as to exactly what\ngoes on with sorts. From what I seem to be hearing, all sorts result in\ntemp files being created and/or used. If that's the case then yes, I\ncan understand the fixation. Of course that opens the door for it being\na horrible implementation. If that's not the case, then parallel sorts\nstill seem like a rather obvious route to look into.\n\nGreg\n\n\nOn Fri, 2002-10-04 at 14:15, Bruce Momjian wrote:\n> Greg Copeland wrote:\n> -- Start of PGP signed section.\n> > I see. I just always assumed that it would be done as part of the table\n> > space effort as it's such a de facto feature.\n> > \n> > I am curious as to why no one has commented on the other rather obvious\n> > performance enhancement which was brought up in this thread. Allowing\n> > for parallel sorting seems rather obvious and is a common enhancement\n> > yet seems to have been completely dismissed as people seem to be fixated\n> > on I/O. Go figure. \n> \n> We think we are fixated on I/O because we think that is where the delay\n> is. 
Is there a reason we shouldn't think that?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073", "msg_date": "04 Oct 2002 14:20:15 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, what temp files do we use that aren't for sorting; I forgot.\n\nMATERIALIZE plan nodes are the only thing I can think of offhand that\nuses a straight temp file. But ISTM that if this makes sense for\nour internal temp files, it makes sense for user-created temp tables\nas well.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 15:22:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting " }, { "msg_contents": "Greg Copeland wrote:\n-- Start of PGP signed section.\n> Well, that's why I was soliciting developer input as to exactly what\n> goes on with sorts. From what I seem to be hearing, all sorts result in\n> temp files being created and/or used. If that's the case then yes, I\n> can understand the fixation. Of course that opens the door for it being\n> a horrible implementation. If that's not the case, then parallel sorts\n> still seem like a rather obvious route to look into.\n\nWe use tape sorts, ala Knuth, meaning we sort in memory as much as\npossible, but when there is more data than fits in memory, rather than\nswapping, we write to temp files then merge the temp files (aka tapes).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 15:31:53 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom, what temp files do we use that aren't for sorting; I forgot.\n> \n> MATERIALIZE plan nodes are the only thing I can think of offhand that\n> uses a straight temp file. But ISTM that if this makes sense for\n> our internal temp files, it makes sense for user-created temp tables\n> as well.\n\nYes, I was thinking that, but of course, those are real tables, rather\nthan just files. Not sure how clean it will be to mix those in the same\ndirectory. We haven't in the past. Is it a good idea?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 15:33:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" }, { "msg_contents": "On Fri, 2002-10-04 at 14:31, Bruce Momjian wrote:\n> We use tape sorts, ala Knuth, meaning we sort in memory as much as\n> possible, but when there is more data than fits in memory, rather than\n> swapping, we write to temp files then merge the temp files (aka tapes).\n\nRight, which is what I originally assumed. On lower end systems, that\nworks great. Once you allow that people may actually have high-end\nsystems with multiple CPUs and lots of memory, wouldn't it be nice to\nallow for huge improvements on large sorts? Basically, you have two\nends of the spectrum. One, where you don't have enough memory and\nbecome I/O bound. The other is where you have enough memory but are CPU\nbound; where potentially you have extra CPUs to spare. 
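The tape-sort scheme described just above — sort as much as fits in memory, spill each sorted run to a temp file (a "tape"), then merge the runs — can be sketched as follows. This is a minimal illustration of the algorithm only, not PostgreSQL's actual tuplesort code; for simplicity the memory limit is expressed as a row count.

```python
# Minimal external merge sort ("tape sort") sketch: in-memory runs are
# spilled to temp files, then combined with a k-way merge.
import heapq
import tempfile

def _spill(run):
    """Write one sorted run to a temp file and return its path."""
    f = tempfile.NamedTemporaryFile("w", suffix=".tape", delete=False)
    f.write("\n".join(str(v) for v in run) + "\n")
    f.close()
    return f.name

def external_sort(values, mem_limit=4):
    runs, buf = [], []
    for v in values:
        buf.append(v)
        if len(buf) >= mem_limit:          # "memory" full: spill a sorted run
            runs.append(_spill(sorted(buf)))
            buf = []
    if buf:
        runs.append(_spill(sorted(buf)))
    # merge the sorted tapes back into one ordered stream
    tapes = [(int(line) for line in open(path)) for path in runs]
    return list(heapq.merge(*tapes))
```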
Seems to me they\nare not mutually exclusive.\n\nUnless I've missed something, the ideal case is to never use tapes for\nsorting. Which is saying, you're trying to optimize an already less an\nideal situation (which is of course good). I'm trying to discuss making\nit a near ideal use of available resources. I can understand why\naddressing the seemingly more common I/O bound case would receive\npriority, however, I'm at a loss as to why the other would be completely\nignored. Seems to me, implementing both would even work in a\ncomplimentary fashion on the low-end cases and yield more breathing room\nfor the high-end cases.\n\nWhat am I missing for the other case to be completely ignored?\n\nGreg", "msg_date": "04 Oct 2002 14:44:40 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> ... But ISTM that if this makes sense for\n>> our internal temp files, it makes sense for user-created temp tables\n>> as well.\n\n> Yes, I was thinking that, but of course, those are real tables, rather\n> than just files. Not sure how clean it will be to mix those in the same\n> directory. We haven't in the past. Is it a good idea?\n\nSure we have --- up till recently, pg_temp files just lived in the\ndatabase directory. I think it was you that added the pg_temp\nsubdirectory, and the reason you did it was to let people symlink the\ntemp files to someplace else. But that's just a zeroth-order\napproximation to providing a tablespace facility for these things.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 15:51:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting " }, { "msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> ... 
I can understand why\n> addressing the seemingly more common I/O bound case would receive\n> priority, however, I'm at a loss as to why the other would be completely\n> ignored.\n\nBruce already explained that we avoid threads because of portability and\nrobustness considerations.\n\nThe notion of a sort process pool seems possibly attractive. I'm\nunconvinced that it's going to be a win though because of the cost of\nshoving data across address-space boundaries. Another issue is that\nthe sort comparison function can be anything, including user-defined\ncode that does database accesses or other interesting stuff. This\nwould mean that the sort auxiliary process would have to adopt the\ndatabase user identity of the originating process, and quite possibly\nabsorb a whole lot of other context information before it could\ncorrectly/safely execute the comparison function. That pushes the\noverhead up a lot more.\n\n(The need to allow arbitrary operations in the comparison function would\nput a pretty substantial crimp on a thread-based approach, too, even if\nwe were willing to ignore the portability issue.)\n\nStill, if you want to try it out, feel free ... this is an open-source\nproject, and if you can't convince other people that an idea is worth\nimplementing, that doesn't mean you can't implement it yourself and\nprove 'em wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 16:07:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting " }, { "msg_contents": "On Fri, 2002-10-04 at 15:07, Tom Lane wrote:\n> the sort comparison function can be anything, including user-defined\n> code that does database accesses or other interesting stuff. 
This\n\nThis is something that I'd not considered.\n\n> would mean that the sort auxiliary process would have to adopt the\n> database user identity of the originating process, and quite possibly\n> absorb a whole lot of other context information before it could\n> correctly/safely execute the comparison function. That pushes the\n> overhead up a lot more.\n\nSignificantly! Agreed.\n\n> \n> Still, if you want to try it out, feel free ... this is an open-source\n> project, and if you can't convince other people that an idea is worth\n> implementing, that doesn't mean you can't implement it yourself and\n> prove 'em wrong.\n\nNo Tom, my issue wasn't if I could or could not convince someone but\nrather that something has been put on the table requesting additional\nfeedback on it's feasibility but had been completely ignored. Fact is,\nI knew I didn't know enough about the implementation details to even\nattempt to convince anyone of anything. I simply wanted to explore the\nidea or rather the feasibility of the idea. In theory, it's a great\nidea. In practice, I had no idea, thus my desire to seek additional\ninput. As such, it seems a practical implementation may prove\ndifficult. I now understand. Thank you for taking the take to respond\nin a manner that satisfies my curiosity. That's all I was looking for. \n:)\n\nBest Regards,\n\n\tGreg", "msg_date": "04 Oct 2002 15:43:37 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> ... But ISTM that if this makes sense for\n> >> our internal temp files, it makes sense for user-created temp tables\n> >> as well.\n> \n> > Yes, I was thinking that, but of course, those are real tables, rather\n> > than just files. Not sure how clean it will be to mix those in the same\n> > directory. We haven't in the past. 
Is it a good idea?\n> \n> Sure we have --- up till recently, pg_temp files just lived in the\n> database directory. I think it was you that added the pg_temp\n> subdirectory, and the reason you did it was to let people symlink the\n> temp files to someplace else. But that's just a zeroth-order\n> approximation to providing a tablespace facility for these things.\n\nOK, TODO updated:\n\n\t* Allow sorting, temp files, temp tables to use multiple work\n\tdirectories \n\nFYI, I originally created that directory so a postmaster startup could\nclear that dir.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 23:53:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" }, { "msg_contents": "tom lane writes:\n>The notion of a sort process pool seems possibly attractive. I'm\n>unconvinced that it's going to be a win though because of the cost of\n>shoving data across address-space boundaries. \n\nWhat about splitting out parsing, optimization and plan generation from\nexecution and having a separate pool of exececutor processes.\n\nAs an optimizer finished with a query plan it would initiate execution\nby grabbing an executor from a pool and passing it the plan.\n\nThis would allow the potential for passing partial plans to multiple\nexecutors so a given query might be split up into three or four pieces\nand then executed in parallel with the results passed through a\nshared memory area owned by each executor process.\n\nThis would allow for potential optimization of sorts without threads or \nincurring the overhead problems you mentioned for a separate sorter\nprocess. 
The optimizer could do things like split a scan into 3 or 4\npieces before sending it off for execution and then join the pieces\nback together.\n\nIt could also make complex queries much faster if there are idling CPUs\nif the optimizer was updated to take advantage of this.\n\nIf we are going to split things apart, then this should be done at a\nnatural communication boundary right? The code has this logical split\nright now anyway so the change would be more natural.\n\nOTOH, there are much bigger fish to fry at the moment, I suspect.\n\n- Curtis\n", "msg_date": "Sat, 5 Oct 2002 10:39:37 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> What about splitting out parsing, optimization and plan generation from\n> execution and having a separate pool of exececutor processes.\n\n> As an optimizer finished with a query plan it would initiate execution\n> by grabbing an executor from a pool and passing it the plan.\n\nSo different executors would potentially handle the queries from a\nsingle transaction? 
How will you deal with pushing transaction-local\nstate from one to the other?\n\nEven if you restrict it to switching at transaction boundaries, you\nstill have session-local state (at minimum user ID and SET settings)\nto worry about.\n\nBeing able to apply multiple CPUs to a single query is attractive,\nbut I've not yet seen schemes for it that don't look like the extra\nCPU power would be chewed up in overhead :-(.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 20:48:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threaded Sorting " }, { "msg_contents": "tom lane wrote:\n> \"Curtis Faith\" <curtis@galtair.com> writes:\n> > What about splitting out parsing, optimization and plan generation from\n> > execution and having a separate pool of exececutor processes.\n> \n> > As an optimizer finished with a query plan it would initiate execution\n> > by grabbing an executor from a pool and passing it the plan.\n> \n> So different executors would potentially handle the queries from a\n> single transaction? How will you deal with pushing transaction-local\n> state from one to the other?\n> \n> Even if you restrict it to switching at transaction boundaries, you\n> still have session-local state (at minimum user ID and SET settings)\n> to worry about.\n\nHmmm, what transaction boundaries did you mean? Since we are talking\nabout single statement parallization, there must be some specific\ninternal semantics that you believe need isolation. It seems like\nwe'd be able to get most of the benefit and restrict the parallization\nin a way that would preserve this isolation but I'm curious what\nyou were specifically referring to?\n\nThe current transaction/user state seems to be stored in process\nglobal space. This could be changed to be a sointer to a struct \nstored in a back-end specific shared memory area which would be\naccessed by the executor process at execution start. 
The backend\nwould destroy and recreate the shared memory and restart execution\nin the case where an executor process dies much like the postmaster\ndoes with backends now.\n\nTo the extent the executor process might make changes to the state,\nwhich I'd try to avoid if possible (don't know if it is), the\nexecutors could obtain locks, otherwise if the executions were \nconstrained to isolated elements (changes to different indexes for\nexample) it seems like it would be possible using an architecture\nwhere you have:\n\nMain Executor: Responsible for updating global meta data from\neach sub-executor and assembling the results of multiple executions.\nIn the case of multiple executor sorts, the main executor would\nperform a merge sort on the results of it and it's subordinates\npre-sorted sub-sets of the relation.\n\nSubordinate Executor: Executes sub-plans and returns results or\nmeta-data update information into front-end shared memory directly.\n\nTo make this optimal, the index code would have to be changed to\nsupport the idea of partial scans. In the case of btrees it would\nbe pretty easy using the root page to figure out what index values\ndelineated different 1/2's, 1/3's, 1/4's etc. 
of the index space.\n\nI'm not sure what you'd have to do to support this for table scans as\nI don't know the PostgreSQL tuple storage mechanism, yet.\n\nThis does not seem like too much architectural complexity or\nperformance overhead (even for the single executor case) for a big\ngain for complex query performance.\n\n> Being able to apply multiple CPUs to a single query is attractive,\n> but I've not yet seen schemes for it that don't look like the extra\n> CPU power would be chewed up in overhead :-(.\n\nDo you remember specifc overhead problems/issues?\n\n- Curtis\n", "msg_date": "Sun, 6 Oct 2002 05:17:59 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Parallel Executors [was RE: Threaded Sorting]" }, { "msg_contents": "On 4 Oct 2002 at 21:13, Hans-J�rgen Sch�nig wrote:\n\n> Bingo = great :).\n> The I/O problem seems to be solved :).\n> \n> A table space concept would be top of the histlist :).\n> \n> The symlink version is not very comfortable and I think it would be a \n> real hack.\n> Also: If we had a clean table space concept it would be real advantage.\n> In the first place it would be enough to define a directory (alter \n> tablespace, changing sizes etc. could be a lot of work).\n> \n> How could CREATE TABLESPACE look like?\n> Personally I like the Oracle Syntax.\n\nWell. I (hopefully) understand need to get table spaces. But I absolutely hate \nit the way oracle does it.. I am repeating all the points I posted before. \nThere was no follow up. I hope I get some on this.\n\n1) It tries to be a volume assuming OS handles volumes inefficiently. Same \nmentality as handling all disk I/O by in it self. May be worth when oracle did \nit but is it worth now?\n\n2) It allows joining multiple volumes for performance reason. If you want to \njoin multiple volume for performance, let RAID handle it. Is it job of RDBMS?\n\n3) It puts multiple objets together. Why? 
I never fully understood having a \nopeque file sitting on drive v/s neatly laid directory structure. I would \nalways prefer the directory structure.\n\nCan anybody please tell me in detail.(Not just a pointing towards TODO items)\n\n1) What a table space supposed to offer?\n\n2) What a directory structure does not offer that table space does?\n\n3) How do they compare for advantages/disadvantages..\n\nOracle familiarity is out. That's not even close to being good merit IMO. If \npostgresql moves to oracle way of doing things, .. well, I won't be as much \nhapy as I am now..\n\nThanks for your patience..\n\n\nBye\n Shridhar\n\n--\nNewton's Little-Known Seventh Law:\tA bird in the hand is safer than one \noverhead.\n\n", "msg_date": "Mon, 07 Oct 2002 18:55:38 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Table spaces again [was Re: Threaded Sorting]" }, { "msg_contents": ">\n>\n>Can anybody please tell me in detail.(Not just a pointing towards TODO items)\n>\n>1) What a table space supposed to offer?\n>\n\nThey allow you to define a maximum amount of storage for a certain set \nof data.\nThey help you to define the location of data.\nThey help you to define how much data can be used by which ressource.\n\n>2) What a directory structure does not offer that table space does?\n>\n\nYou need to the command line in order to manage quotas - you might not \nwant that.\nQuotas are handled differently on ever platform (if available).\nWith tablespaces you can assign 30mb to use a, 120mb to user b etc. ...\nTable spaces are a nice abstraction layer to the file system.\n\n>\n>3) How do they compare for advantages/disadvantages..\n>\n>Oracle familiarity is out. That's not even close to being good merit IMO. If \n>postgresql moves to oracle way of doing things, .. well, I won't be as much \n>hapy as I am now..\n>\n>Thanks for your patience..\n>\n\nhow would you handle table spaces? 
just propose it to the hackers' list ...\nwe should definitely discuss that ...\na bad implementation of table spaces would be painful ...\n\n\n Hans\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n\n", "msg_date": "Mon, 07 Oct 2002 15:52:54 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting]" }, { "msg_contents": "On 7 Oct 2002 at 15:52, Hans-J�rgen Sch�nig wrote:\n\n> >Can anybody please tell me in detail.(Not just a pointing towards TODO items)\n> >1) What a table space supposed to offer?\n> They allow you to define a maximum amount of storage for a certain set \n> of data.\n\nUse quota\n\n> They help you to define the location of data.\n\nMount/symlink whereever you want assuming database supports one directory per \nobject metaphor. This is finer control than tablespaces as tablespaces often \nhosts many objects from possibly many databases. Once a tablespace is created, \nit's difficult to keep track of what all goes on a table space. You look at \ndirectory structure and you get a clear picture..\n\n> They help you to define how much data can be used by which ressource.\n\nWhich resource? I am confused. Disk space is only resource we are talking \nabout.\n\n> >2) What a directory structure does not offer that table space does?\n> You need to the command line in order to manage quotas - you might not \n> want that.\n\nMount a directory on a partition. If the data exceeds on that partition, there \nwould be disk error. Like tablespace getting overflown. 
I have seen both the \nscenarios in action..\n\n> Quotas are handled differently on ever platform (if available).\n\nYeah. But that's sysadmins responsibility not DBA's.\n\n> With tablespaces you can assign 30mb to use a, 120mb to user b etc. ...\n> Table spaces are a nice abstraction layer to the file system.\n\nHmm.. And how does that fit in database metaphor? What practical use is that? I \ncan't imagine as I am a developer and not a DBA.\n\n> how would you handle table spaces? just propose it to the hackers' list ...\n> we should definitely discuss that ...\n> a bad implementation of table spaces would be painful ...\n\nI suggest a directory per object structure. A database gets it's own directory, \nan index gets it's own dir. etc. \n\nIn addition to that, postgresql should offer capability to suspend a \ndatabase/table so that they can be moved without restarting database daemon. \n\nThis is to acknowledge the fact that database storage is relocatable. In \ncurrent implementation, database does not offer any assistance in relocating \nthe database structure on disk..\n\nBesides postgresql runs a cluster of databases as opposed to single database \nrun by likes of oracle. So relocating a single database should not affect \nothers.\n\nAdditionally postgresql should handle out of disk space errors gracefully \nnoting that out of space for an object may not mean out of space for all \nobjects. Obviously tablespaces can do better here.\n\nLet's say for each object that gets storage, e.g. database, table, index and \ntransaction log, we maintain a flag in metadata to indicate whether it's \nrelocated or not. t can offer a command to set this flag. 
Typically the command \nsequence would look like this\n\nsql> take <object> offline;\n---\nrelocate/symlink/remount the appropriate dir.\n--\nsql> take <object> online mark relocated;\n\nPostgresql should continue working if a relocated object experiences disk space \nfull.\n\nOf course, if postgresql could accept arguments at object creation time for \nalternate directories to symlink to, that would be better than sliced bread.\n\nI believe giving each database it's own transaction log would be a great \nadvantage of this scheme.\n\nThese are some thoughts how it would work. I believe this course of action \nwould make the transition easier, offer a much better granular control over \nobject allocation on disk and ultimately would prove useful to users.\n\nI might have lost couple of points as I took long time composing it. But having \nall the possible objections dealt with should be the starting point. \n\nThanks once again..\n\nBye\n Shridhar\n\n--\nCheit's Lament:\tIf you help a friend in need, he is sure to remember you--\tthe \nnext time he's in need.\n\n", "msg_date": "Mon, 07 Oct 2002 20:02:46 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting]" }, { "msg_contents": "\n>>>2) What a directory structure does not offer that table space does?\n>>> \n>>>\n>>You need to the command line in order to manage quotas - you might not \n>>want that.\n>> \n>>\n>\n>Mount a directory on a partition. If the data exceeds on that partition, there \n>would be disk error. Like tablespace getting overflown. I have seen both the \n>scenarios in action..\n> \n>\n\nOf course it can be done somehow. However, with tablespaces it is more \ndb-like and you need not be familiar with the operating system itself.\nJust think of a company having several different operating systems \n(suns, linux, bsd, ...).\nwhat do you think could be done in this case? 
my answer would be an \nabstraction layer called table spaces ...\n\n> \n>\n>>Quotas are handled differently on ever platform (if available).\n>> \n>>\n>\n>Yeah. But that's sysadmins responsibility not DBA's.\n>\n\nMaybe many people ARE the sysadmins of their PostgreSQL box ...\nWhen developing a database with an open mind people should try to see a \nproblem from more than just one perspective.\nWhy should anybody just forget about sysdbas???\n\n\n>>With tablespaces you can assign 30mb to use a, 120mb to user b etc. ...\n>>Table spaces are a nice abstraction layer to the file system.\n>> \n>>\n>\n>Hmm.. And how does that fit in database metaphor? What practical use is that? I \n>can't imagine as I am a developer and not a DBA.\n>\n> \n>\n\nOne of our customers did some minor hosting projects with PostgreSQL. \nThat's what he wanted to have because it is a practical issue.\na. you don't want to have more than one instance per machine.\nb. you want to assign a certain amount of space to a certain user \nwithout using quotas. just think of administration tools - tablespaces \nare as simple as a select.\n\nper directory is a first step - a good step and a good idea but \ntablespaces are a useful invention. 
just think of hosting companies, \nhybrid environments, etc ...\ntablespaces or not a devil and sysdbas may be developers ...\n\n\n Hans\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n\n", "msg_date": "Mon, 07 Oct 2002 16:49:02 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting]" }, { "msg_contents": "On 7 Oct 2002 at 16:49, Hans-J�rgen Sch�nig wrote:\n\n> >Mount a directory on a partition. If the data exceeds on that partition, there \n> >would be disk error. Like tablespace getting overflown. I have seen both the \n> >scenarios in action..\n> Of course it can be done somehow. However, with tablespaces it is more \n> db-like and you need not be familiar with the operating system itself.\n> Just think of a company having several different operating systems \n> (suns, linux, bsd, ...).\n> what do you think could be done in this case? my answer would be an \n> abstraction layer called table spaces ...\n\nOK. Point noted. Suspended till next point.\n\n> >>Quotas are handled differently on ever platform (if available).\n> >Yeah. But that's sysadmins responsibility not DBA's.\n> Maybe many people ARE the sysadmins of their PostgreSQL box ...\n> When developing a database with an open mind people should try to see a \n> problem from more than just one perspective.\n> Why should anybody just forget about sysdbas???\n\nIf DBA is sysadmin, does it make a difference if he learnes about mount/ln or \ntable spaces. Yes it does. 
Table spaces are limited to databases but mount/ln \nis useful for any general purpose sysadmin work.\n\nThat answers the last point as well, I guess..\n\n> >>With tablespaces you can assign 30mb to use a, 120mb to user b etc. ...\n> >>Table spaces are a nice abstraction layer to the file system.\n> >Hmm.. And how does that fit in database metaphor? What practical use is that? I \n> >can't imagine as I am a developer and not a DBA.\n> One of our customers did some minor hosting projects with PostgreSQL. \n> That's what he wanted to have because it is a practical issue.\n> a. you don't want to have more than one instance per machine.\n> b. you want to assign a certain amount of space to a certain user \n> without using quotas. just think of administration tools - tablespaces \n> are as simple as a select.\n\nAgreed. Perfect point and I didn't thought of it.\n\nBut it can be done in directory structure as well. Of course it's quite a \ndeviation from what one thinks as plain old directory structure. But if this is \none point where table spaces win, let's borrow that. There is lot of baggage in \ntable spaces that can be left out..\n\nBesides AFAIU, tablespaces implements quota using data files which are pre-\nallocated. Pre-claiming space/resource is the evil of everything likes of \noracle do and runs in exact opposite direction of postgresql philosophy.\n\nIf postgresql has to implement quotas on object, it should do without \npreclaiming space.\n\nBesides if postgresql offers quota on per object basis in directory/object \nscheme, I am sure that's far more granular than tablespaces. Choice is good..\n \n> per directory is a first step - a good step and a good idea but \n> tablespaces are a useful invention. just think of hosting companies, \n> hybrid environments, etc ...\n> tablespaces or not a devil and sysdbas may be developers ...\n\nIt's not about devil. It's about revaluating need once again. 
Especially at the \nlevel of tablespace concept in itself.\n\nBye\n Shridhar\n\n--\nOblivion together does not frighten me, beloved.\t\t-- Thalassa (in Anne \nMulhall's body), \"Return to Tomorrow\",\t\t stardate 4770.3.\n\n", "msg_date": "Mon, 07 Oct 2002 20:31:16 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting]" }, { "msg_contents": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at> writes:\n> how would you handle table spaces?\n\nThe plan that's been discussed simply defines a tablespace as being a\ndirectory somewhere; physical storage of individual tables would remain\nbasically the same, one or more files under the containing directory.\n\nThe point of this being, of course, that the DBA could create the\ntablespace directories on different partitions or volumes in order to\nprovide the behavior he wants.\n\nIn my mind this would be primarily a cleaner, more flexible\nreimplementation of the existing \"database location\" feature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 11:14:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting] " }, { "msg_contents": ">\n>\n>>>>Quotas are handled differently on every platform (if available).\n>>>> \n>>>>\n>>>Yeah. But that's sysadmins responsibility not DBA's.\n>>> \n>>>\n>>Maybe many people ARE the sysadmins of their PostgreSQL box ...\n>>When developing a database with an open mind people should try to see a \n>>problem from more than just one perspective.\n>>Why should anybody just forget about sysdbas???\n>> \n>>\n>\n>If the DBA is the sysadmin, does it make a difference if he learns about mount/ln or \n>table spaces. Yes it does.
Table spaces are limited to databases but mount/ln \n>is useful for any general purpose sysadmin work.\n>\n>That answers the last point as well, I guess..\n> \n>\n\nI agree but still, think of hybrid systems ..\n\n>\n>Agreed. Perfect point and I didn't think of it.\n>\n>But it can be done in directory structure as well. Of course it's quite a \n>deviation from what one thinks as plain old directory structure. But if this is \n>one point where table spaces win, let's borrow that. There is a lot of baggage in \n>table spaces that can be left out..\n>\n>Besides AFAIU, tablespaces implement quota using data files which are pre-\n>allocated. Pre-claiming space/resource is the evil of everything the likes of \n>oracle do and runs in exact opposite direction of postgresql philosophy.\n>\n>If postgresql has to implement quotas on object, it should do without \n>preclaiming space.\n>\n>Besides if postgresql offers quota on per object basis in directory/object \n>scheme, I am sure that's far more granular than tablespaces. Choice is good..\n> \n>\nI didn't think of pre allocation - this is pretty much like Oracle would \ndo it.\nI was thinking of having a maximum size or something like that.\nOverhead such as EXTENDS and things like that don't seem too useful for \nme - that's what a filesystem can be used for.\n\nI agree with Tom: If a tablespace was a directory it would be pretty \nsimple and pretty useful. If people could define a maximum size it would \nbe more than perfect.\nAll I think is necessary is:\n - having data in different, user defined, locations\n - having the chance to define a maximum size for that tablespace.\n\nSuggestion:\nCREATE TABLESPACE: Create a \"directory\" with a certain size (optional) - \nnothing special here.\nALTER TABLESPACE: resize table space. resizing is possible if the amount \nof data in the tablespace < new size of tablespace\nDROP TABLESPACE: remove table space.
the question in this case is - \nwhat about the objects in the tablespace?\n objects can not always be deleted (just think of inheritance and \nparent tables)\n\n>It's not about the devil. It's about re-evaluating the need once again. Especially at the \n>level of tablespace concept in itself.\n>\n> \n>\n\nThat's why people should discuss it and think about it :).\nPeople want a good implementation or no implementation :).\nThis is Open Source - it is designed to be discussed :).\n\n Hans\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n\n", "msg_date": "Mon, 07 Oct 2002 17:23:03 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting]" }, { "msg_contents": "Is this NOT what I have been after for many months now. I dropped the tablespace/location idea before 7.2 because there\ndidn't seem to be any interest. Please see my past emails for the SQL commands and on disk directory layout I have\nproposed. I have a working 7.2 system with tablespaces/locations (whatever you want to call them, I like locations\nbecause tablespaces are an Oracle thing).
I would like to get this code ported into 7.4.\n\nJim\n\n\n> =?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at> writes:\n> > how would you handle table spaces?\n> \n> The plan that's been discussed simply defines a tablespace as being a\n> directory somewhere; physical storage of individual tables would remain\n> basically the same, one or more files under the containing directory.\n> \n> The point of this being, of course, that the DBA could create the\n> tablespace directories on different partitions or volumes in order to\n> provide the behavior he wants.\n> \n> In my mind this would be primarily a cleaner, more flexible\n> reimplementation of the existing \"database location\" feature.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n\n\n", "msg_date": "Mon, 7 Oct 2002 11:29:35 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@contactbda.com>", "msg_from_op": false, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting] " }, { "msg_contents": "Jim Buttafuoco wrote:\n\n>Is this NOT what I have been after for many months now. I dropped the tablespace/location idea before 7.2 because that\n>didn't seem to be any interest. Please see my past email's for the SQL commands and on disk directory layout I have\n>proposed. I have a working 7.2 system with tablespaces/locations (what ever you want to call them, I like locations\n>because tablespace are an Oracle thing).
I would like to get this code ported into 7.4.\n>\n>Jim\n>\n>\n> \n>\n>>=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at> writes:\n>> \n>>\n>>>how would you handle table spaces?\n>>> \n>>>\n>>The plan that's been discussed simply defines a tablespace as being a\n>>directory somewhere; physical storage of individual tables would remain\n>>basically the same, one or more files under the containing directory.\n>>\n>>The point of this being, of course, that the DBA could create the\n>>tablespace directories on different partitions or volumes in order to\n>>provide the behavior he wants.\n>>\n>>In my mind this would be primarily a cleaner, more flexible\n>>reimplementation of the existing \"database location\" feature.\n>>\n>>\t\t\t\n>>\n\nwow :)\ncan we have the patch? i'd like to try it with my 7.2.2 :).\nis it stable?\n\nhow did you implement it precisely? is it as you have proposed it?\n\n Hans\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n\n", "msg_date": "Mon, 07 Oct 2002 17:36:18 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting]" }, { "msg_contents": "Shridhar Daithankar <shridhar_daithankar@persistent.co.in> wrote:\n[snip]\n> On 7 Oct 2002 at 15:52, Hans-Jürgen Schönig wrote:\n[snip]\n> > With tablespaces you can assign 30mb to user a, 120mb to user b etc. ...\n> > Table spaces are a nice abstraction layer to the file system.\n>\n> Hmm.. And how does that fit in database metaphor? What practical use is\nthat?
I\n> can't imagine as I am a developer and not a DBA.\n\nVirtual hosting at ISP's for example.\n\n> I believe giving each database its own transaction log would be a great\n> advantage of this scheme.\n\nWell, if you think of Tom's recent patch (ganged WAL writes), from a\nperformance point of view, this would only be good if each transaction\nlog had its own disk. Otherwise a single transaction log is still better.\n\nI think tablespaces is a good idea. I also prefer associating tablespaces\nwith directory structures over the Oracle style.\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Mon, 7 Oct 2002 19:12:14 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: Table spaces again [was Re: Threaded Sorting]" }, { "msg_contents": "Curtis Faith wrote:\n\n> The current transaction/user state seems to be stored in process\n> global space. This could be changed to be a pointer to a struct\n> stored in a back-end specific shared memory area which would be\n> accessed by the executor process at execution start. The backend\n> would destroy and recreate the shared memory and restart execution\n> in the case where an executor process dies much like the postmaster\n> does with backends now.\n> \n> To the extent the executor process might make changes to the state,\n> which I'd try to avoid if possible (don't know if it is), the\n> executors could obtain locks, otherwise if the executions were\n> constrained to isolated elements (changes to different indexes for\n> example) it seems like it would be possible using an architecture\n> where you have:\n\nImagine there is a PL/Tcl function. On the first call in a session, the\nPL/Tcl interpreter gets created (that's during execution, okay?). Now\nthe procedure that's called inside of that interpreter creates a\n\"global\" variable ...
a global Tcl variable inside of that interpreter,\nwhich is totally unknown to the backend since it doesn't know what Tcl\nis at all and that variable is nothing but an entry in a private hash\ntable inside of that interpreter. On a subsequent call to any PL/Tcl\nfunction during that session, it might be good if that darn hashtable\nentry exists.\n\nHow do you propose to let this happen?\n\nAnd while at it, the Tcl procedure next calls spi_exec, causing the\nPL/Tcl function handler to call SPI_exec(), so your isolated executor\nall of a sudden becomes a fully operational backend, doing the\nparsing, planning and optimizing, or what?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Mon, 07 Oct 2002 13:42:03 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Executors [was RE: Threaded Sorting]" }, { "msg_contents": "> Curtis Faith wrote:\n>\n> > The current transaction/user state seems to be stored in process\n> > global space. This could be changed to be a pointer to a struct\n> > stored in a back-end specific shared memory area which would be\n> > accessed by the executor process at execution start.
The backend\n> > would destroy and recreate the shared memory and restart execution\n> > in the case where an executor process dies much like the postmaster\n> > does with backends now.\n> >\n> > To the extent the executor process might make changes to the state,\n> > which I'd try to avoid if possible (don't know if it is), the\n> > executors could obtain locks, otherwise if the executions were\n> > constrained to isolated elements (changes to different indexes for\n> > example) it seems like it would be possible using an architecture\n> > where you have:\n\nJan Wieck replied:\n> Imagine there is a PL/Tcl function. On the first call in a session, the\n> PL/Tcl interpreter get's created (that's during execution, okay?). Now\n> the procedure that's called inside of that interpreter creates a\n> \"global\" variable ... a global Tcl variable inside of that interpreter,\n> which is totally unknown to the backend since it doesn't know what Tcl\n> is at all and that variable is nothing than an entry in a private hash\n> table inside of that interpreter. On a subsequent call to any PL/Tcl\n> function during that session, it might be good if that darn hashtable\n> entry exists.\n>\n> How do you propose to let this happen?\n>\n> And while at it, the Tcl procedure next calls spi_exec, causing the\n> PL/Tcl function handler to call SPI_exec(), so your isolated executor\n> all of the sudden becomes a fully operational backend, doing the\n> parsing, planning and optimizing, or what?\n\nYou bring up a good point, we couldn't do what I propose for all\nsituations. I had never anticipated that splitting things up would be the\nrule. For example, the optimizer would have to decide whether it made sense\nto split up a query from a strictly performance perspective.
So now, if we\nconsider the fact that some things could not be done with split backend\nexecution, the logic becomes:\n\nif ( splitting is possible && splitting is faster )\n\tdo the split execution;\nelse\n\tdo the normal execution;\n\nSince the design already splits the backend internally into a separate\nexecution phase, it seems like one could keep the current\nimplementation for the typical case where splitting doesn't buy anything or\ncases where there is complex state information that needs to be maintained.\nIf there are no triggers or functions that will be accessed by a given\nquery then I don't see your concerns applying.\n\nIf there are triggers or other conditions which preclude multi-process\nexecution, we can keep exactly the same behavior as now. The plan execution\nentry could easily be a place where it either A) did the same thing it\ncurrently does or B) passed execution off to a pool as per the original\nproposal.\n\nI have to believe that most SELECTs won't be affected by your concerns.\nAdditionally, even in the case of an UPDATE, many times there are large\nportions of the operation's actual work that wouldn't be affected even if\nthere are lots of triggers on the tables being updated. The computation of\nthe inside of the WHERE could often be split out without causing any\nproblems with context or state information. The master executor could\nalways be the original backend as it is now and this would be the place\nwhere the UPDATE part would be processed after the WHERE tuples had been\nidentified.\n\nAs with any optimization, it is more complicated and won't handle all the\ncases.
It's just an idea to handle common cases that would otherwise be\nmuch slower.\n\nThat having been said, I'm sure there are much lower hanging fruit on the\nperformance tree and likely will be for a little while.\n\n- Curtis\n\n", "msg_date": "Mon, 7 Oct 2002 14:29:28 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Executors [was RE: Threaded Sorting]" } ]
[ { "msg_contents": "\n> > I'd give you the first and third of those. As Andrew noted, the\n> > argument that \"it's more standard-compliant\" is not very solid.\n> \n> The standard doesn't say anything about transaction in this regard.\n\nYes, it says statement.\n\nNote also, that a typical SELECT only session would not advance \nCURRENT_TIMESTAMP at all in the typical \"autocommit off\" mode that\nthe Spec is all about. \n\n> What do others think?\n\nI liked your proposal to advance CURRENT_TIMESTAMP at each statement start.\n(It would not advance inside a stored procedure.)\n\nAndreas\n", "msg_date": "Fri, 4 Oct 2002 11:43:38 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Note also, that a typical SELECT only session would not advance \n> CURRENT_TIMESTAMP at all in the typical \"autocommit off\" mode that\n> the Spec is all about. \n\nTrue, but the spec also says to default to serializable transaction\nmode. So in a single-transaction session like you are picturing,\nthe successive SELECTs would all see a frozen snapshot of the database.\nFreezing CURRENT_TIMESTAMP goes right along with that, and in fact makes\na lot of sense, because it tells you exactly what time your snapshot\nof the database state was taken.\n\nThis line of thought opens another can of worms: should the behavior\nof CURRENT_TIMESTAMP depend on serializable vs. read-committed mode?\nMaybe SetQuerySnapshot is the routine that ought to capture the\n\"statement-start-time\" timestamp value.
We could define\nCURRENT_TIMESTAMP as the time of the active database snapshot.\nOr at least offer a fourth parameter to that parameterized now() to\nreturn this time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 09:54:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Note also, that a typical SELECT only session would not advance\n> > CURRENT_TIMESTAMP at all in the typical \"autocommit off\" mode that\n> > the Spec is all about.\n>\n> True, but the spec also says to default to serializable transaction\n> mode. So in a single-transaction session like you are picturing,\n> the successive SELECTs would all see a frozen snapshot of the database.\n> Freezing CURRENT_TIMESTAMP goes right along with that, and in fact makes\n> a lot of sense, because it tells you exactly what time your snapshot\n> of the database state was taken.\n>\n> This line of thought opens another can of worms: should the behavior\n> of CURRENT_TIMESTAMP depend on serializable vs. read-committed mode?\n> Maybe SetQuerySnapshot is the routine that ought to capture the\n> \"statement-start-time\" timestamp value. We could define\n> CURRENT_TIMESTAMP as the time of the active database snapshot.\n> Or at least offer a fourth parameter to that parameterized now() to\n> return this time.\n>\n> regards, tom lane\n\nThat is a very good point. At least with serializable transactions it seems\nperfectly reasonable to return a frozen CURRENT_TIMESTAMP. What do you think\nabout read-committed level? Can time be committed?
;-)\nIt would be even more surprising to new users if the implementation of\nCURRENT_TIMESTAMP would depend on trx serialization level.\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Fri, 4 Oct 2002 18:44:04 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP " }, { "msg_contents": "Michael Paesold wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > > Note also, that a typical SELECT only session would not advance\n> > > CURRENT_TIMESTAMP at all in the typical \"autocommit off\" mode that\n> > > the Spec is all about.\n> >\n> > True, but the spec also says to default to serializable transaction\n> > mode. So in a single-transaction session like you are picturing,\n> > the successive SELECTs would all see a frozen snapshot of the database.\n> > Freezing CURRENT_TIMESTAMP goes right along with that, and in fact makes\n> > a lot of sense, because it tells you exactly what time your snapshot\n> > of the database state was taken.\n> >\n> > This line of thought opens another can of worms: should the behavior\n> > of CURRENT_TIMESTAMP depend on serializable vs. read-committed mode?\n> > Maybe SetQuerySnapshot is the routine that ought to capture the\n> > \"statement-start-time\" timestamp value. We could define\n> > CURRENT_TIMESTAMP as the time of the active database snapshot.\n> > Or at least offer a fourth parameter to that parameterized now() to\n> > return this time.\n> >\n> > regards, tom lane\n> \n> That is a very good point. At least with serializable transactions it seems\n> perfectly reasonable to return a frozen CURRENT_TIMESTAMP. What do you think\n> about read-commited level? Can time be commited?
;-)\n> It would be even more surprising to new users if the implementation of\n> CURRENT_TIMESTAMP would depend on trx serialization level.\n\nYes, CURRENT_TIMESTAMP changing based on transaction serializable/read\ncommitted would be quite confusing.
Also, because our default is read\ncommitted, we would end up with CURRENT_TIMESTAMP being statement level,\nwhich actually does give us a logical place to allow CURRENT_TIMESTAMP\nto change, but I thought people voted against that.\n\nHowever, imagine a query that used CURRENT_TIMESTAMP in the WHERE clause\nto find items that were not in the future. Would a CURRENT_TIMESTAMP\ntest in a multi-statement transaction want to check based on transaction\nstart, or on the tuples visible at the time the statement started?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 13:04:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n\n> > That is a very good point. At least with serializable transactions it\nseems\n> > perfectly reasonable to return a frozen CURRENT_TIMESTAMP. What do you\nthink\n> > about read-commited level? Can time be commited? ;-)\n> > It would be even more surprising to new users if the implementation of\n> > CURRENT_TIMESTAMP would depend on trx serialization level.\n>\n> Yes, CURRENT_TIMESTAMP changing based on transaction serializable/read\n> commited would be quite confusing. Also, because our default is read\n> committed, we would end up with CURRENT_TIMESTAMP being statement level,\n> which actually does give us a logical place to allow CURRENT_TIMESTAMP\n> to change, but I thought people voted against that.\n>\n> However, imagine a query that used CURRENT_TIMESTAMP in the WHERE clause\n> to find items that were not in the future. Would a CURRENT_TIMESTAMP\n> test in a multi-statement transaction want to check based on transaction\n> start, or on the tuples visible at the time the statement started?\n\nWell, in a serializable transaction there would be no difference at all, at\nleast if CURRENT_TIMESTAMP is consistent within the transaction. Any changes\noutside the transaction after SetQuerySnapshot would not be seen by the\ntransaction anyway.\n\nIn read-committed, I think it's different. If CURRENT_TIMESTAMP is frozen,\nthen the behavior would be the same as in serializable level; if\nCURRENT_TIMESTAMP advances with each statement, the result would also\nchange. That is an inherent problem with read-committed though and has not so\nmuch to do with the timestamp behavior.\n\nRegards,\nMichael Paesold\n\n\n\n", "msg_date": "Fri, 4 Oct 2002 19:34:01 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" }, { "msg_contents": "\nOK, are we agreed to leave CURRENT_TIMESTAMP/now() alone and just add\nnow(\"string\")? If no one replies, I will assume that is a yes and I\nwill add it to TODO.\n\n---------------------------------------------------------------------------\n\nMichael Paesold wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> \n> > > That is a very good point. At least with serializable transactions it\n> seems\n> > > perfectly reasonable to return a frozen CURRENT_TIMESTAMP. What do you\n> think\n> > > about read-commited level? Can time be commited?
;-)\n> > > It would be even more surprising to new users if the implementation of\n> > > CURRENT_TIMESTAMP would depend on trx serialization level.\n> >\n> > Yes, CURRENT_TIMESTAMP changing based on transaction serializable/read\n> > commited would be quite confusing. Also, because our default is read\n> > committed, we would end up with CURRENT_TIMESTAMP being statement level,\n> > which actually does give us a logical place to allow CURRENT_TIMESTAMP\n> > to change, but I thought people voted against that.\n> >\n> > However, imagine a query that used CURRENT_TIMESTAMP in the WHERE clause\n> > to find items that were not in the future. Would a CURRENT_TIMESTAMP\n> > test in a multi-statement transaction want to check based on transaction\n> > start, or on the tuples visible at the time the statement started?\n> \n> Well, in a serializable transaction there would be no difference at all, at\n> least if CURRENT_TIMESTAMP is consistent within the transaction. Any changes\n> outside the transaction after SetQuerySnapshot would not be seen by the\n> transaction anyway.\n> \n> In read-commited, I think it's different. If CURRENT_TIMESTAMP is frozen,\n> than the behavior would be the same as in serializable level, if\n> CURRENT_TIMESTAMP advances with each statement, the result would also\n> change. That is an inherent problem with read-commited though and has not so\n> much to do with the timestamp behavior.\n> \n> Regards,\n> Michael Paesold\n> \n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 5 Oct 2002 00:29:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" }, { "msg_contents": "On Sat, 5 Oct 2002 00:29:03 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>\n>OK, are we agreed to leave CURRENT_TIMESTAMP/now() alone and just add\n>now(\"string\")? If no one replies, I will assume that is a yes and I\n>will add it to TODO.\n\nSo my view of CURRENT_TIMESTAMP not being spec compliant didn't find\nmuch agreement. No problem, such is life.\n\nMay I suggest that a \"Compatibility\" section is added to the bottom of\nfunctions-datetime.html?\n\n\nIn case this issue is revisited later let me add for the archives:\n\nOn Fri, 04 Oct 2002 09:54:42 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Freezing CURRENT_TIMESTAMP goes right along with that, and in fact makes\n>a lot of sense, because it tells you exactly what time your snapshot\n>of the database state was taken.\n\nI like this interpretation. But bear in mind that a transaction's own\nactions are visible to later commands in the same transaction.\nLooking at the clock is an \"own action\", so this is perfectly\ncompatible with (my reading of) General Rule 1.\n\nA statement does not see its own modifications which corresponds to\n(my interpretation of) General Rule 3.\n\nAnd one last thought: There are applications out there that are not\nwritten for one specific database backend.
Having to replace\nCURRENT_TIMESTAMP by PG-specific now('statement') is just one more\npain in trying to be portable across different backends.\n\nServus\n Manfred\n", "msg_date": "Sat, 05 Oct 2002 11:58:22 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> And one last thought: There are applications out there that are not\n> written for one specific database backend. Having to replace\n> CURRENT_TIMESTAMP by PG-specific now('statement') is just one more\n> pain in trying to be portable across different backends.\n\nBased on this discussion, it seems any application that depends on a\nspecific behavior of CURRENT_TIMESTAMP is going to have portability\nproblems anyway. Even if we did change CURRENT_TIMESTAMP to match\nnow('statement'), it would not act exactly like anyone else's.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 11:36:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP " }, { "msg_contents": "Yes, I agree with you Manfred, but more people _don't_ want it to\nchange, and like it the way it is, so we will just keep it and add\nnow(\"string\").\n\nAdded to TODO:\n\n\t* Add now(\"transaction|statement|clock\") functionality\n\nI have attached an SGML patch that explains the issues with\nCURRENT_TIMESTAMP in more detail.\n\n---------------------------------------------------------------------------\n\nManfred Koizar wrote:\n> On Sat, 5 Oct 2002 00:29:03 -0400 (EDT), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >\n> >OK, are we agreed to leave CURRENT_TIMESTAMP/now() alone and just add\n> >now(\"string\")? If no one replies, I will assume that is a yes and I\n> >will add it to TODO.\n> \n> So my view of CURRENT_TIMESTAMP not being spec compliant didn't find\n> much agreement.
No problem, such is life.\n> \n> May I suggest that a \"Compatibility\" section is added to the bottom of\n> functions-datetime.html?\n> \n> \n> In case this issue is revisited later let me add for the archives:\n> \n> On Fri, 04 Oct 2002 09:54:42 -0400, Tom Lane <tgl@sss.pgh.pa.us>\n> wrote:\n> >Freezing CURRENT_TIMESTAMP goes right along with that, and in fact makes\n> >a lot of sense, because it tells you exactly what time your snapshot\n> >of the database state was taken.\n> \n> I like this interpretation. But bear in mind that a transaction's own\n> actions are visible to later commands in the same transaction.\n> Looking at the clock is an \"own action\", so this is perfectly\n> compatible with (my reading of) General Rule 1.\n> \n> A statement does not see its own modifications which corresponds to\n> (my interpretation of) General Rule 3.\n> \n> And one last thought: There are applications out there that are not\n> written for one specific database backend. Having to replace\n> CURRENT_TIMESTAMP by PG-specific now('statement') is just one more\n> pain in trying to be portable across different backends.\n> \n> Servus\n> Manfred\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/func.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/func.sgml,v\nretrieving revision 1.126\ndiff -c -c -r1.126 func.sgml\n*** doc/src/sgml/func.sgml\t24 Sep 2002 20:14:58 -0000\t1.126\n--- doc/src/sgml/func.sgml\t5 Oct 2002 19:00:15 -0000\n***************\n*** 4293,4304 ****\n </informalexample>\n \n <para>\n! It is quite important to realize that\n! <function>CURRENT_TIMESTAMP</function> and related functions all return\n! the time as of the start of the current transaction; their values do not\n! increment while a transaction is running. But\n! <function>timeofday()</function> returns the actual current time.\n </para>\n \n <para>\n All the date/time data types also accept the special literal value\n--- 4293,4309 ----\n </informalexample>\n \n <para>\n! It is important to realize that\n! <function>CURRENT_TIMESTAMP</function> and related functions return\n! the start time of the current transaction; their values do not\n! change during the transaction. <function>timeofday()</function>\n! returns the wall clock time and does advance during transactions.\n </para>\n+ \n+ <note> \n+ Many other database systems advance these values more\n+ frequently.\n+ </note>\n \n <para>\n All the date/time data types also accept the special literal value", "msg_date": "Sat, 5 Oct 2002 15:02:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" }, { "msg_contents": "Hello!\n\nOn Sat, 5 Oct 2002, Bruce Momjian wrote:\n\n> \n> Yes, I agree with you Manfred, but more people _don't_ want it to\n> change, and like it the way it is, so we will just keep it and add\n> now(\"string\").\n\nIMHO the best way could be GUC-default/SET session-based variable \ncontrolling the behaviour. By default old Pg one, but ppl can set \nstandard-compliant. Such changes were done often in the past, look at \"group \nby\" behaviour changes 6.4->6.5, default Pg datetime representation format \nchange, etc. I think those who care about interoperability will confirm that it's \nmuch easier to add one SET per session than to replace all CURRENT_TIMESTAMP with \nnow(blah-blah-blah).
Moreover, ppl who need old behaviour can easily \nrevert to this by just one SET (in case GUC is set to new behaviour).\n\n> \n> Added to TODO:\n> \n> \t* Add now(\"transaction|statement|clock\") functionality\n> \n> I have attached an SGML patch that explains the issues with\n> CURRENT_TIMESTAMP in more detail.\n \n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n", "msg_date": "Mon, 7 Oct 2002 12:00:30 +0700 (NOVST)", "msg_from": "Yury Bokhoncovich <byg@center-f1.ru>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" }, { "msg_contents": "Yury Bokhoncovich wrote:\n> Hello!\n> \n> On Sat, 5 Oct 2002, Bruce Momjian wrote:\n> \n> > \n> > Yes, I agree with you Manfred, but more people _don't_ want it to\n> > change, and like it the way it is, so we will just keep it and add\n> > now(\"string\").\n> \n> IMHO the best way could be GUC-default/SET session-based variable \n> controlling the behaviour. By default old Pg one, but ppl can set \n> standard-compliant. Such changes were done often in past, look at \"group \n> by\" behaviour changes 6.4->6.5, default Pg datetime representation format \n> change, etc. I think those who concern interoperability confirm that it's \n> much easy to add one SET per session then replace all CURRENT_STAMP to \n> now(blah-blah-blah). Moreover, ppl who need old behaviour can easily \n> revert to this by just one SET (in case GUC is set to new behaviour).\n\nLet's see if people want the more standards-compliant behavior before\nadding a GUC, no?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 7 Oct 2002 01:13:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [GENERAL] CURRENT_TIMESTAMP" } ]
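The thread above weighs three conventions for now(): transaction start, statement start, and wall clock. The difference is easy to see in a toy model — plain Python with an invented `Session` class and a fake tick clock, not PostgreSQL code:

```python
import itertools

# Fake clock: each call advances "time" by one tick, so the three
# conventions can be compared deterministically.
_ticks = itertools.count(100)

def clock():
    return next(_ticks)

class Session:
    """Toy model of now('transaction'|'statement'|'clock')."""
    def begin(self):
        self.txn_start = clock()      # frozen for the whole transaction

    def run_statement(self):
        self.stmt_start = clock()     # frozen per statement

    def now(self, mode="transaction"):
        if mode == "transaction":     # PostgreSQL's CURRENT_TIMESTAMP
            return self.txn_start
        if mode == "statement":       # the convention some other DBs use
            return self.stmt_start
        if mode == "clock":           # like timeofday(): always advances
            return clock()
        raise ValueError(mode)

s = Session()
s.begin()
s.run_statement()
a = s.now("transaction")
s.run_statement()                     # a later statement, same transaction
b = s.now("transaction")              # unchanged: still the txn start time
c = s.now("statement")                # advanced to the second statement
```

Frozen transaction time is the behavior the documentation patch above describes; the `statement` and `clock` modes correspond to the other two options on the TODO item.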
[ { "msg_contents": "I, probably, will have a chance to work with postgres on Linux IA-64.\nIs there any optimization for postgresql ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 4 Oct 2002 20:40:01 +0400 (MSD)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "any experience with IA-64" }, { "msg_contents": "Oleg Bartunov wrote:\n> I, probably, will have a chance to work with postgres on Linux IA-64.\n> Is there any optimization for postgresql ?\n\nNone we know of. I think people have already gotten it working.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 13:53:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: any experience with IA-64" } ]
[ { "msg_contents": "\n> > > I am confused how yours differs from mine. I don't see how the last\n> > > matching tagged query would not be from an INSTEAD rule.\n> > \n> > You could have both INSTEAD and non-INSTEAD rules firing for the same\n> > original query. If the alphabetically-last rule is a non-INSTEAD rule,\n> > then there's a difference.\n> \n> How do we get multiple rules on a query? I thought it was mostly\n> INSERT/UPDATE/DELETE, and those all operate on a single table.\n\nI have yet another use case, that might be of great interest to VLDB users,\nPartitioned tables:\n\ncreate table atab2000 (b text unique);\ncreate table atab2001 (b text unique);\ncreate table atab2002 (b text unique);\n\ncreate view atab (a, b) as \nselect 2000, b from atab2000 union all\nselect 2001, b from atab2001 union all\nselect 2002, b from atab2002;\n\ncreate rule atab_ins2000 as on insert to atab\nwhere a=2000 do instead insert into atab2000 values (new.b);\ncreate rule atab_ins2001 as on insert to atab\nwhere a=2001 do instead insert into atab2001 values (new.b);\ncreate rule atab_ins2002 as on insert to atab\nwhere a=2002 do instead insert into atab2002 values (new.b);\n\ncreate rule atab_insZZZZ as on insert to atab\ndo instead nothing;\n\ncreate rule atab_upd2000_u as on update to atab\nwhere old.a=2000 and new.a=2000 do instead\nupdate atab2000 set b= new.b where b = old.b;\ncreate rule atab_upd2000_m as on update to atab\nwhere old.a=2000 and new.a <> 2000 do instead (\ninsert into atab values (new.a, new.b);\ndelete from atab2000 where b = old.b);\n\n... 
if same for 2001 and 2002 I get \n\tpostgres=# explain update atab set a=2002 where a=2000 and b='2000 row 1';\n\tERROR: query rewritten 10 times, may contain cycles\nCannot understand why though :-(\n\ncreate rule atab_updZZZZ as on update to atab\ndo instead nothing;\n\n(I have not been able to successfully do the delete in a non-INSTEAD \nrule and the insert in a INSTEAD rule, thus don't count DELETE's, see below)\n\n... on delete .. :-)\n\nIt is ***really amazing*** how well this actually works in PostgreSQL !!!\nThe only thing to fix is the row count, and above \"query rewritten\" problem :-)\n\nA small improvement for the planner/optimizer would probably be to do the\nOne-Time Filter: false evaluation earlier (before planning a Subquery \nthat is not needed).\n\nI was one of those suggesting all, but viewing above I think DELETE is only \nsupposed to be counted in an original DELETE statement, Thus:\n\nDELETE: DELETE's + UPDATE's\t(update the row to invisible)\nINSERT: INSERT's + UPDATE's\t(allow an insert or update rule)\nUPDATE: INSERT's + UPDATE's\t(allow partitioned tables)\n\nI do not think there is a way around \"messy\", thus I think we have to \nload off the responsibility to the rule creator. The creator can choose what counts by \ncreating INSTEAD rules for actions that are supposed to count, and non-INSTEAD\nrules for those that should not.\n\n> I think you just modified the second part of that to restrict it to\n> queries that were added by INSTEAD rules. This is doable but it's\n> not a trivial change --- in particular, I think it implies adding\n> another field to Query data structure so we can mark INSTEAD-added\n> vs non-INSTEAD-added queries. Which means an initdb because it breaks\n> stored rules.\n\nYes Tom, I think we need to differentiate rule actions from INSTEAD vs non-INSTEAD \nactions inside one statement, non-INSTEAD should never count INSTEAD should always count. 
\n\nAndreas\n", "msg_date": "Fri, 4 Oct 2002 20:50:33 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Return of INSTEAD rules" } ]
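The conditional INSTEAD rules in Andreas's example route each row to the partition whose key matches, and a cross-partition UPDATE becomes a delete plus an insert. Stripped of the rule system, the routing logic looks like this toy Python sketch (invented names, purely illustrative):

```python
# One set per "atabNNNN" partition; the router plays the role of the
# conditional INSTEAD rules on the atab view.
partitions = {2000: set(), 2001: set(), 2002: set()}

def atab_insert(a, b):
    # like atab_insYYYY: send the row to the partition matching its key
    partitions[a].add(b)

def atab_update(old_a, old_b, new_a, new_b):
    if old_a == new_a:
        # like atab_updYYYY_u: an in-place update within one partition
        partitions[old_a].discard(old_b)
        partitions[old_a].add(new_b)
    else:
        # like atab_updYYYY_m: a key change moves the row across partitions
        partitions[old_a].discard(old_b)
        atab_insert(new_a, new_b)

def atab_select():
    # like the UNION ALL view over the partitions
    return sorted((a, b) for a, rows in partitions.items() for b in rows)

atab_insert(2000, "2000 row 1")
atab_update(2000, "2000 row 1", 2002, "2000 row 1")  # moves the row
```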
[ { "msg_contents": "\n> > ... most file systems can't process fsync's\n> > simultaneous with other writes, so those writes block because the file\n> > system grabs its own internal locks.\n> \n> Oh? That would be a serious problem, but I've never heard that asserted\n> before. Please provide some evidence.\n> \n> On a filesystem that does have that kind of problem, can't you avoid it\n> just by using O_DSYNC on the WAL files?\n\nTo make this competitive, the WAL writes would need to be improved to \ndo more than one block (up to 256k or 512k per write) with one write call \n(if that much is to be written for this tx to be able to commit).\nThis should actually not be too difficult since the WAL buffer is already \ncontiguous memory.\n\nIf that is done, then I bet O_DSYNC will beat any other config we currently \nhave.\n\nWith this, a separate disk for WAL and large transactions you shoud be able \nto see your disks hit the max IO figures they are capable of :-)\n\nAndreas\n", "msg_date": "Fri, 4 Oct 2002 23:14:56 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> To make this competitive, the WAL writes would need to be improved to \n> do more than one block (up to 256k or 512k per write) with one write call \n> (if that much is to be written for this tx to be able to commit).\n> This should actually not be too difficult since the WAL buffer is already \n> contiguous memory.\n\nHmmm ... if you were willing to dedicate a half meg or meg of shared\nmemory for WAL buffers, that's doable. I was originally thinking of\nhaving the (still hypothetical) background process wake up every time a\nWAL page was completed and available to write. 
But it could be set up\nso that there is some \"slop\", and it only wakes up when the number of\nwritable pages exceeds N, for some N that's still well less than the\nnumber of buffers. Then it could write up to N sequential pages in a\nsingle write().\n\nHowever, this would only be a win if you had few and large transactions.\nAny COMMIT will force a write of whatever we have so far, so the idea of\nwriting hundreds of K per WAL write can only work if it's hundreds of K\nbetween commit records. Is that a common scenario? I doubt it.\n\nIf you try to set it up that way, then it's more likely that what will\nhappen is the background process seldom awakens at all, and each\ncommitter effectively becomes responsible for writing all the WAL\ntraffic since the last commit. Wouldn't that lose compared to someone\nelse having written the previous WAL pages in background?\n\nWe could certainly build the code to support this, though, and then\nexperiment with different values of N. If it turns out N==1 is best\nafter all, I don't think we'd have wasted much code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 17:43:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " } ]
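Tom's sketch — a background writer that sleeps until at least N completed WAL pages are available and then flushes them with a single large write — can be simulated in a few lines (toy Python, hypothetical `LogWriter` name, not the real XLOG code):

```python
class LogWriter:
    """Toy background WAL writer with an N-page wake-up threshold."""
    def __init__(self, n):
        self.n = n            # the "slop": wake only at >= n writable pages
        self.pending = []     # completed pages not yet written
        self.writes = []      # record the size of each write() issued

    def page_complete(self, page):
        self.pending.append(page)
        if len(self.pending) >= self.n:
            self.flush()

    def flush(self):
        # a COMMIT record would force this regardless of the threshold
        if self.pending:
            self.writes.append(len(self.pending))  # one sequential write
            self.pending.clear()

lw = LogWriter(n=4)
for page in range(10):        # e.g. a bulk load completing 10 WAL pages
    lw.page_complete(page)
lw.flush()                    # the transaction finally commits
```

With N=1 this degenerates to one write per page, which is the comparison point Tom proposes to experiment with.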
[ { "msg_contents": "Hello hackers,\n\nI'm thinking about ALTER TABLE ... ADD COLUMN working properly when\nchild tables already contain the column.\n\nI have two proposals. First one:\n\nThere are two cases: one when specifying ALTER TABLE ONLY, and other\nwhen specifying recursive (not ONLY).\n\nIn the first (ONLY) case, child tables are checked for existance of the\ncolumn. If any child does not contain the column or it has a different\natttypid or attypmod (any other field that should be checked?), the\nrequest is aborted. Else the column is added in the parent only, and\nthe child attributes have their attinhcount incremented.\n\nIn the second case, child tables are checked for existance of the\ncolumn. If the column doesn't exist, the procedure is called\nrecursively. If the column exists but has a different atttypid o\natttypmod, the request is aborted. Else, the attribute has its\nattinhcount incremented and attislocal reset.\n\n\nThere are two differences between these two cases:\n- ONLY does not add a column in childs. This seems natural. Obviously\n the scenario where this can lead to trouble produces an error\n (inexistent attr in child).\n\n- ONLY does not touch attislocal in childs.\n\n\nThe second may seems odd, but consider the following scenario:\n\nCREATE TABLE p1 (f1 int);\nCREATE TABLE p2 (f2 int);\nCREATE TABLE c (f1 int) INHERITS (p1, p2);\n\nIn this case, c.f1.attislocal is true. Now suppose the user wants to\ncreate p2.f1. If the recursive version is used, attislocal will be\nreset and the scenario will be equivalent to\n\nCREATE TABLE p1 (f1 int);\nCREATE TABLE p2 (f1 int, f2 int);\nCREATE TABLE c () INHERITS (p1, p2);\n\nbut the natural way would be\n\nCREATE TABLE p1 (f1 int);\nCREATE TABLE p2 (f1 int, f2 int);\nCREATE TABLE c (f1 int) INHERITS (p1, p2);\n\nwhich is what the ONLY case does.\n\n\nSecond proposal: Another way of doing this would be resetting attislocal\niff attinhcount is exactly zero. 
This assumes that if attislocal is\ntrue and attinhcount is not zero, the user wants the column as locally\ndefined and we should keep it that way. On the other hand, if a child\ntable already has the column there's no way to push the definition up\nthe inheritance tree.\n\nWhat do you think? I have actually a patch ready that implements the\nfirst proposal; the second one, while simpler conceptually and in terms\nof code, occurred to me just as I was writing this, and has an issue of\ncompleteness.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Las cosas son buenas o malas segun las hace nuestra opinion\" (Lisias)\n", "msg_date": "Fri, 4 Oct 2002 17:21:49 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "ALTER TABLE ... ADD COLUMN" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> I'm thinking about ALTER TABLE ... ADD COLUMN working properly when\n> child tables already contain the column.\n> There are two cases: one when specifying ALTER TABLE ONLY, and other\n> when specifying recursive (not ONLY).\n\nI think ALTER TABLE ONLY ... ADD COLUMN is nonsensical and should be\nrejected out of hand. That solves that part of the problem easily ;-)\n\nThe comparison case in my mind is ALTER TABLE ONLY ... RENAME COLUMN,\nwhich has to be rejected. (Surely you're not going to say we should\nsupport this by allowing the parent column to become associated with\nsome other child columns than before...) ALTER ONLY ADD COLUMN cannot\nadd any functionality that's not perfectly well available with\nALTER ADD COLUMN.\n\n\n> In the second case, child tables are checked for existance of the\n> column. If the column doesn't exist, the procedure is called\n> recursively. If the column exists but has a different atttypid o\n> atttypmod, the request is aborted. Else, the attribute has its\n> attinhcount incremented and attislocal reset.\n\nI don't like resetting attislocal here. 
If you do that, then DROPping\nthe parent column doesn't return you to the prior state. I think I gave\nthis example before, but consider\n\nCREATE TABLE p (f1 int);\nCREATE TABLE c (f2 int) INHERITS (p);\nALTER TABLE p ADD COLUMN f2 int;\n-- sometime later, realize that the ADD was a mistake, so:\nALTER TABLE p DROP COLUMN f2;\n\nIf you reset attislocal then the ending state will be that c.f2 is gone.\nThat seems clearly wrong to me.\n\n\n> The second may seems odd, but consider the following scenario:\n\n> CREATE TABLE p1 (f1 int);\n> CREATE TABLE p2 (f2 int);\n> CREATE TABLE c (f1 int) INHERITS (p1, p2);\n\n> In this case, c.f1.attislocal is true. Now suppose the user wants to\n> create p2.f1. If the recursive version is used, attislocal will be\n> reset and the scenario will be equivalent to\n\n> CREATE TABLE p1 (f1 int);\n> CREATE TABLE p2 (f1 int, f2 int);\n> CREATE TABLE c () INHERITS (p1, p2);\n\n... which is wrong also. c had a local definition before and should\nstill, IMHO. What's the argument for taking away its local definition?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 17:57:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE ... ADD COLUMN " }, { "msg_contents": "On Fri, Oct 04, 2002 at 05:57:02PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > I'm thinking about ALTER TABLE ... ADD COLUMN working properly when\n> > child tables already contain the column.\n> > There are two cases: one when specifying ALTER TABLE ONLY, and other\n> > when specifying recursive (not ONLY).\n\n> I don't like resetting attislocal here. If you do that, then DROPping\n> the parent column doesn't return you to the prior state. I think I gave\n> this example before, but consider\n\nHuh, I don't know where I got the idea you were (or someone else was?)\nin the position that attislocal should be reset. 
I'll clean everything\nup and submit the patch I had originally made.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)\n", "msg_date": "Fri, 4 Oct 2002 18:21:06 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE ... ADD COLUMN" }, { "msg_contents": "En Fri, 4 Oct 2002 18:21:06 -0400\nAlvaro Herrera <alvherre@atentus.com> escribió:\n\n> Huh, I don't know where I got the idea you were (or someone else was?)\n> in the position that attislocal should be reset. I'll clean everything\n> up and submit the patch I had originally made.\n\nAll right, this is it. This patch merely checks if child tables have\nthe column. If atttypid and atttypmod are the same, the attributes'\nattinhcount is incremented; else the operation is aborted. If child\ntables don't have the column, recursively add it.\n\nattislocal is not touched in any case.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)", "msg_date": "Sat, 5 Oct 2002 04:46:23 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ALTER TABLE ... ADD COLUMN" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nAlvaro Herrera wrote:\n> En Fri, 4 Oct 2002 18:21:06 -0400\n> Alvaro Herrera <alvherre@atentus.com> escribi?:\n> \n> > Huh, I don't know where I got the idea you were (or someone else was?)\n> > in the position that attislocal should be reset. I'll clean everything\n> > up and submit the patch I had originally made.\n> \n> All right, this is it. 
This patch merely checks if child tables have\n> the column. If atttypid and atttypmod are the same, the attributes'\n> attinhcount is incremented; else the operation is aborted. If child\n> tables don't have the column, recursively add it.\n> \n> attislocal is not touched in any case.\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n> (Ijon Tichy en Viajes, Stanislaw Lem)\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 00:34:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ALTER TABLE ... ADD COLUMN" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nAlvaro Herrera wrote:\n> En Fri, 4 Oct 2002 18:21:06 -0400\n> Alvaro Herrera <alvherre@atentus.com> escribi?:\n> \n> > Huh, I don't know where I got the idea you were (or someone else was?)\n> > in the position that attislocal should be reset. I'll clean everything\n> > up and submit the patch I had originally made.\n> \n> All right, this is it. This patch merely checks if child tables have\n> the column. If atttypid and atttypmod are the same, the attributes'\n> attinhcount is incremented; else the operation is aborted. 
If child\n> tables don't have the column, recursively add it.\n> \n> attislocal is not touched in any case.\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n> (Ijon Tichy en Viajes, Stanislaw Lem)\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 18 Oct 2002 22:09:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ALTER TABLE ... ADD COLUMN" } ]
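The bookkeeping the applied patch settles on can be modeled in miniature: ADD COLUMN merges with a matching child column by bumping attinhcount (leaving attislocal alone), aborts on a type mismatch, and recurses where the column is missing; DROP COLUMN then keeps any child column that still has a local definition. A toy Python model (invented structures, not the pg_attribute code):

```python
class Column:
    def __init__(self, typ, inhcount=0, islocal=True):
        self.typ, self.inhcount, self.islocal = typ, inhcount, islocal

tables = {}     # table name -> {column name: Column}
children = {}   # table name -> list of child table names

def add_column(table, name, typ):
    if name in tables[table]:
        raise ValueError("column already exists")
    tables[table][name] = Column(typ)
    for child in children.get(table, []):
        ccols = tables[child]
        if name in ccols:
            if ccols[name].typ != typ:
                raise ValueError("child column type mismatch")  # abort
            ccols[name].inhcount += 1     # merge; attislocal untouched
        else:
            add_column(child, name, typ)  # recurse into the child
            ccols[name].islocal = False   # purely inherited down there
            ccols[name].inhcount = 1

def drop_column(table, name):
    del tables[table][name]
    for child in children.get(table, []):
        col = tables[child][name]
        col.inhcount -= 1
        if col.inhcount == 0 and not col.islocal:
            drop_column(child, name)      # nothing keeps it alive
        # otherwise a local definition (or another parent) retains it

# Tom's example: c.f2 predates the parent's f2 and must survive the DROP.
tables = {"p": {"f1": Column("int")},
          "c": {"f1": Column("int", inhcount=1, islocal=False),
                "f2": Column("int")}}
children = {"p": ["c"]}
add_column("p", "f2", "int")   # merges with the pre-existing local c.f2
drop_column("p", "f2")         # undoing the mistaken ADD
```

Running Tom's example through the model, `c.f2` survives the parent's ADD-then-DROP, which is the behavior the thread settled on.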
[ { "msg_contents": "Hello hackers,\n\nI'm looking at implementing the btree reorganizer described in \"On-line\nreorganization of sparsely-...\", ACM SIGMOD proceedings 1996, by Zou and\nSalzberg. It seems to me I'll have to add some amount of lock types\nin the lock manager. Does that bother you?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"On the other flipper, one wrong move and we're Fatal Exceptions\"\n(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)\n", "msg_date": "Fri, 4 Oct 2002 17:53:13 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "New lock types" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> I'm looking at implementing the btree reorganizer described in \"On-line\n> reorganization of sparsely-...\", ACM SIGMOD proceedings 1996, by Zou and\n> Salzberg. It seems to me I'll have to add some amount of lock types\n> in the lock manager. Does that bother you?\n\nSuch as?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 17:58:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New lock types " }, { "msg_contents": "En Fri, 04 Oct 2002 17:58:06 -0400\nTom Lane <tgl@sss.pgh.pa.us> escribió:\n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > I'm looking at implementing the btree reorganizer described in \"On-line\n> > reorganization of sparsely-...\", ACM SIGMOD proceedings 1996, by Zou and\n> > Salzberg.\n\nThe title is \"On-line reorganization of sparsely-populated B+-trees\".\nMy purpose is to implement the reorganizer so that B+-trees can be\ncompacted concurrently with other uses of the index.\n\n> > It seems to me I'll have to add some amount of lock types\n> > in the lock manager. Does that bother you?\n>\n> Such as?\n\nThere are three new lock modes: R, RX and RS (Reorganizer, Reorganizer\nExclusive and Reorganizer Shared). 
Actually, they are not new lock\ntypes; rather, new objects on which locks should be obtained before\nusing the index pages.\n\nBut there's some work to do before that. I need to make the nbtree code\nactually use its free page list. That is, new pages for splits should\nbe taken from the free page, or a new page should be created if there\nare none; also, deleting the last element from a page should put it into the\nfree page list. I'm looking at that code now. Let me know if you think\nthere's something I should know about this.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"El sudor es la mejor cura para un pensamiento enfermo\" (Bardia)\n", "msg_date": "Sat, 5 Oct 2002 19:53:51 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: New lock types" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n>>> It seems to me I'll have to add some amount of lock types\n>>> in the lock manager. Does that bother you?\n>> \n>> Such as?\n\n> There are three new lock modes: R, RX and RS (Reorganizer, Reorganizer\n> Exclusive and Reorganizer Shared). Actually, they are not new lock\n> types; rather, new objects on which locks should be obtained before\n> using the index pages.\n\nWe've got a ton of lock modes already; perhaps these operations can be\nmapped into acquiring some existing lock modes on index pages?\n\nNote that there is currently a notion of using exclusive lock on page\nzero of a relation (either index or heap) to interlock extension of the\nrelation. You can probably change that or work around it if you want to\nassign other semantics to locking index pages; just be aware that it's\nout there.\n\n> But there's some work to do before that. I need to make the nbtree code\n> actually use its free page list. 
That is, new pages for splits should\n> be taken from the free page, or a new page should be created if there\n> are none; also, deleting the last element from a page should put it into the\n> free page list. I'm looking at that code now. Let me know if you think\n> there's something I should know about this.\n\nYeah, you can't recycle pages without a freelist :-(. One thing that\nbothered me about that was that adding/removing freelist pages appears\nto require exclusive lock on the index metapage, which would\nunnecessarily block many other index operations. You should try to\nstructure things to avoid that. Maybe the freelist head link can be\ntreated as a separately lockable object.\n\nDon't forget to think about WAL logging of these operations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 20:25:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New lock types " }, { "msg_contents": "On Sat, Oct 05, 2002 at 08:25:35PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> >>> It seems to me I'll have to add some amount of lock types\n> >>> in the lock manager. Does that bother you?\n> >> \n> >> Such as?\n> \n> > There are three new lock modes: R, RX and RS (Reorganizer, Reorganizer\n> > Exclusive and Reorganizer Shared). Actually, they are not new lock\n> > types; rather, new objects on which locks should be obtained before\n> > using the index pages.\n> \n> We've got a ton of lock modes already; perhaps these operations can be\n> mapped into acquiring some existing lock modes on index pages?\n\nI'll have a look. Can't say right now. Some of the locks have special\nsemantics on what to do when a conflicting request arrives.\n\n> Yeah, you can't recycle pages without a freelist :-(.\n\nIn access/nbtree/README says that the metapage of the index \"contains a\npointer to the list of free pages\". However I don't see anything like\nthat in the BTMetaPageData. 
Should I suppose that it was ripped out\nsometime ago? I'll add it if that's the case. Or maybe I'm looking at\nthe wrong place?\n\n\n> Maybe the freelist head link can be treated as a separately lockable\n> object.\n\nI think creating a new LWLockId (BTFreeListLock?) can help here. The\noperations on freelist are short lived and rather infrequent so it\ndoesn't seem to matter that it is global to all indexes. Another way\nwould be to create one LockId per index, but it seems a waste to me.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. frances)\n", "msg_date": "Sun, 6 Oct 2002 16:04:55 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: New lock types" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> I think creating a new LWLockId (BTFreeListLock?) can help here. The\n> operations on freelist are short lived and rather infrequent so it\n> doesn't seem to matter that it is global to all indexes.\n\nSeems like a really bad idea to me ... what makes you think that this\nwould not be a bottleneck? You'd have to take such a lock during every\nindex-page split, which is not that uncommon.\n\n> Another way\n> would be to create one LockId per index, but it seems a waste to me.\n\nNo, you should be looking at a way to represent index locking in the\nstandard lock manager, not as an LWLock. We've already got a concept\nof page-level lockable entities there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Oct 2002 16:48:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New lock types " } ]
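The freelist discipline Alvaro describes — a page split takes a page from the freelist before extending the relation, and a page that loses its last item goes back on the freelist — in a toy allocator (illustrative Python, not the nbtree code; the real freelist head would live in the metapage, with the locking questions discussed above):

```python
class ToyIndex:
    def __init__(self):
        self.next_page = 1    # page 0 is reserved for the metapage
        self.freelist = []    # recycled pages; head kept in the metapage
        self.items = {}       # page number -> live item count

    def alloc_page(self):
        # a split prefers a recycled page; extend the file only as a last resort
        if self.freelist:
            page = self.freelist.pop()
        else:
            page = self.next_page
            self.next_page += 1
        self.items[page] = 0
        return page

    def delete_item(self, page):
        self.items[page] -= 1
        if self.items[page] == 0:     # last element gone: recycle the page
            del self.items[page]
            self.freelist.append(page)

ix = ToyIndex()
p1, p2 = ix.alloc_page(), ix.alloc_page()
ix.items[p1] += 2
ix.items[p2] += 1
ix.delete_item(p2)            # p2 empties and returns to the freelist
p3 = ix.alloc_page()          # the next split reuses p2; no file growth
```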
[ { "msg_contents": "\n> Hmmm ... if you were willing to dedicate a half meg or meg of shared\n> memory for WAL buffers, that's doable.\n\nYup, configuring Informix to three 2 Mb buffers (LOGBUF 2048) here. \n\n> However, this would only be a win if you had few and large transactions.\n> Any COMMIT will force a write of whatever we have so far, so the idea of\n> writing hundreds of K per WAL write can only work if it's hundreds of K\n> between commit records. Is that a common scenario? I doubt it.\n\nIt should help most for data loading, or mass updating, yes.\n\nAndreas\n", "msg_date": "Sat, 5 Oct 2002 01:47:12 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " } ]
[ { "msg_contents": "Hi all,\n\nGCC has supported block re-ordering through __builtin_expect() for some\ntime. I could not see and cannot recall any discussion of this in the\narchives. Has anyone played with GCC block re-ordering and PostgreSQL?\n\nGavin\n\n", "msg_date": "Sat, 5 Oct 2002 19:44:15 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Branch prediction" } ]
[ { "msg_contents": "pgman wrote:\n> Curtis Faith wrote:\n> > Back-end servers would not issue fsync calls. They would simply block\n> > waiting until the LogWriter had written their record to the disk, i.e.\n> > until the sync'd block # was greater than the block that contained the\n> > XLOG_XACT_COMMIT record. The LogWriter could wake up committed back-\n> > ends after its log write returns.\n> > \n> > The log file would be opened O_DSYNC, O_APPEND every time. The LogWriter\n> > would issue writes of the optimal size when enough data was present or\n> > of smaller chunks if enough time had elapsed since the last write.\n> \n> So every backend is to going to wait around until its fsync gets done by\n> the backend process? How is that a win? This is just another version\n> of our GUC parameters:\n> \t\n> \t#commit_delay = 0 # range 0-100000, in microseconds\n> \t#commit_siblings = 5 # range 1-1000\n> \n> which attempt to delay fsync if other backends are nearing commit. \n> Pushing things out to another process isn't a win; figuring out if\n> someone else is coming for commit is. Remember, write() is fast, fsync\n> is slow.\n\nLet me add to what I just said:\n\nWhile the above idea doesn't win for normal operation, because each\nbackend waits for the fsync, and we have no good way of determining of\nother backends are nearing commit, a background WAL fsync process would\nbe nice if we wanted an option between fsync on (wait for fsync before\nreporting commit), and fsync off (no crash recovery).\n\nWe could have a mode where we did an fsync every X milliseconds, so we\nissue a COMMIT to the client, but wait a few milliseconds before\nfsync'ing. Many other databases have such a mode, but we don't, and I\nalways felt it would be valuable. 
It may allow us to remove the fsync\noption in favor of one that has _some_ crash recovery.\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 5 Oct 2002 08:01:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance" } ]
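Bruce's proposed middle ground — report COMMIT immediately but fsync the WAL at most every X milliseconds, so a crash can lose only the last window of commits — in a deterministic toy model (invented names; a real implementation would use a timer and a background process):

```python
class DelayedSyncLog:
    """Toy 'fsync every X ms' commit mode."""
    def __init__(self, interval_ms):
        self.interval = interval_ms
        self.last_sync = 0
        self.durable = []     # commits guaranteed to be on disk
        self.volatile = []    # acknowledged but not yet fsync'd

    def commit(self, xid, now_ms):
        self.volatile.append(xid)   # the client sees COMMIT right away
        if now_ms - self.last_sync >= self.interval:
            self.fsync(now_ms)

    def fsync(self, now_ms):
        self.durable += self.volatile
        self.volatile = []
        self.last_sync = now_ms

log = DelayedSyncLog(interval_ms=100)
log.commit("t1", now_ms=10)
log.commit("t2", now_ms=60)
log.commit("t3", now_ms=120)    # window expired: t1..t3 reach disk
log.commit("t4", now_ms=130)    # at risk until the next interval ends
```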
[ { "msg_contents": "I've been getting only about 60% of the emails sent to the list. \n\nI see many emails in the archives that I never got via email.\n\nIs anyone else having this problems?\n\n- Curtis\n", "msg_date": "Sat, 5 Oct 2002 09:41:49 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Anyone else having list server problems?" }, { "msg_contents": "\nI believe I've missed some mails as well, but I think that was a day or two \nago. I don't think I've missed any in the last day.\n\nRegards,\n\tJeff Davis\n\nOn Saturday 05 October 2002 06:41 am, Curtis Faith wrote:\n> I've been getting only about 60% of the emails sent to the list.\n>\n> I see many emails in the archives that I never got via email.\n>\n> Is anyone else having this problems?\n>\n> - Curtis\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Sat, 5 Oct 2002 12:22:45 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": false, "msg_subject": "Re: Anyone else having list server problems?" } ]
[ { "msg_contents": "I currently develop an interface to simulate a indexed sequential file management with PostgreSql. I must reproduce the same philosophy used of control of locking of the records.\n\nI seek a solution to lock and unlock implicitly a row of a table. The locking of several rows, of the same table or various tables, can last a long time and consequently locking cannot be included in a transaction for not to lock the whole table for the other users.\n\nThere is a viable solution with PostgreSql?\n\nThere is an accessible basic structure of locking?\n\nThank you.\n\n\n\n\n\n\n\n\n\n\nI currently develop an interface to simulate a indexed \nsequential file management with PostgreSql. I must reproduce the same philosophy \nused of control of locking of the records.\nI seek a solution to lock and unlock implicitly a row of a \ntable. The locking of several rows, of the same table or various tables, can \nlast a long time and consequently locking cannot be included in a transaction \nfor not to lock the whole table for the other users.\nThere is a viable solution with PostgreSql?\nThere is an accessible basic structure of locking?\nThank you.", "msg_date": "Sat, 5 Oct 2002 23:56:37 +0200", "msg_from": "\"Antoine Lobato\" <Antoine.Lobato@wanadoo.fr>", "msg_from_op": true, "msg_subject": "Implicit Lock Row" }, { "msg_contents": "On 5 Oct 2002 at 23:56, Antoine Lobato wrote:\n\n> \n> I currently develop an interface to simulate a indexed sequential file \n> management with PostgreSql. I must reproduce the same philosophy used of \n> control of locking of the records.\n> I seek a solution to lock and unlock implicitly a row of a table. 
The locking \n> of several rows, of the same table or various tables, can last a long time and \n> consequently locking cannot be included in a transaction for not to lock the \n> whole table for the other users.\n> There is a viable solution with PostgreSql?\n> There is an accessible basic structure of locking?\n\nYou can use select for update to lock rows.\n\nHTH\n\nBye\n Shridhar\n\n--\nStrategy:\tA long-range plan whose merit cannot be evaluated until sometime\t\nafter those creating it have left the organization.\n\n", "msg_date": "Mon, 07 Oct 2002 19:16:00 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Implicit Lock Row" } ]
[ { "msg_contents": "I currently develop an interface to simulate a indexed sequential file management with PostgreSql. I must reproduce the same philosophy used of control of locking of the records.\nI seek a solution to lock and unlock implicitly a row of a table. The locking of several rows, of the same table or various tables, can last a long time and consequently locking cannot be included in a transaction for not to lock the whole table for the other users.\n\nThere is a viable solution with PostgreSql?\n\nThere is an accessible basic structure of locking?\n\nThank you.\n\n\n\n\n\n\n\n\n\n \nI currently develop an interface to simulate a \nindexed sequential file management with PostgreSql. I must reproduce the same \nphilosophy used of control of locking of the records.\n\nI seek a solution to lock and unlock implicitly a row of a \ntable. The locking of several rows, of the same table or various tables, can \nlast a long time and consequently locking cannot be included in a transaction \nfor not to lock the whole table for the other users.\nThere is a viable solution with PostgreSql?\nThere is an accessible basic structure of locking?\nThank you.", "msg_date": "Sun, 6 Oct 2002 00:05:03 +0200", "msg_from": "\"Antoine Lobato\" <Antoine.Lobato@wanadoo.fr>", "msg_from_op": true, "msg_subject": "Implicit Lock Row" } ]
[ { "msg_contents": "I spent a little time reviewing the xlog.c logic, which I hadn't looked\nat in awhile. I see I made a mistake earlier: I claimed that only when\na backend wanted to commit or ran out of space in the WAL buffers would\nit issue any write(). This is not true: there is code in XLogInsert()\nthat will try to issue write() if the WAL buffers are more than half\nfull:\n\n /*\n * If cache is half filled then try to acquire write lock and do\n * XLogWrite. Ignore any fractional blocks in performing this check.\n */\n LogwrtRqst.Write.xrecoff -= LogwrtRqst.Write.xrecoff % BLCKSZ;\n if (LogwrtRqst.Write.xlogid != LogwrtResult.Write.xlogid ||\n (LogwrtRqst.Write.xrecoff >= LogwrtResult.Write.xrecoff +\n XLogCtl->XLogCacheByte / 2))\n {\n if (LWLockConditionalAcquire(WALWriteLock, LW_EXCLUSIVE))\n {\n LogwrtResult = XLogCtl->Write.LogwrtResult;\n if (XLByteLT(LogwrtResult.Write, LogwrtRqst.Write))\n XLogWrite(LogwrtRqst);\n LWLockRelease(WALWriteLock);\n }\n }\n\nBecause of the \"conditional acquire\" call, this will not block if\nsomeone else is currently doing a WAL write or fsync, but will just\nfall through in that case. However, if the code does acquire the\nlock then the backend will issue some writes --- synchronously, if\nO_SYNC or O_DSYNC mode is being used. It would be better to remove\nthis code and allow a background process to issue writes for filled\nWAL pages.\n\nNote this is done before acquiring WALInsertLock, so it does not block\nother would-be inserters of WAL records.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 19:45:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Correction re existing WAL behavior" } ]
[ { "msg_contents": "I do not think the situation for ganging of multiple commit-record\nwrites is quite as dire as has been painted. There is a simple error\nin the current code that is easily corrected: in XLogFlush(), the\nwait to acquire WALWriteLock should occur before, not after, we try\nto acquire WALInsertLock and advance our local copy of the write\nrequest pointer. (To be exact, xlog.c lines 1255-1269 in CVS tip\nought to be moved down to before line 1275, inside the \"if\" that\ntests whether we are going to call XLogWrite.)\n\nGiven that change, what will happen during heavy commit activity\nis like this:\n\n1. Transaction A is ready to commit. It calls XLogInsert to insert\nits commit record into the WAL buffers (thereby transiently acquiring\nWALInsertLock) and then it calls XLogFlush to write and sync the\nlog through the commit record. XLogFlush acquires WALWriteLock and\nbegins issuing the needed I/O request(s).\n\n2. Transaction B is ready to commit. It gets through XLogInsert\nand then blocks on WALWriteLock inside XLogFlush.\n\n3. Transactions C, D, E likewise insert their commit records\nand then block on WALWriteLock.\n\n4. Eventually, transaction A finishes its I/O, advances the \"known\nflushed\" pointer past its own commit record, and releases the\nWALWriteLock.\n\n5. Transaction B now acquires WALWriteLock. Given the code change I\nrecommend, it will choose to flush the WAL *through the last queued\ncommit record as of this instant*, not the WAL endpoint as of when it\nstarted to wait. Therefore, this WAL write will handle all of the\nso-far-queued commits.\n\n6. More transactions F, G, H, ... arrive to be committed. They will\nlikewise insert their COMMIT records into the buffer and block on\nWALWriteLock.\n\n7. When B finishes its write and releases WALWriteLock, it will have\nset the \"known flushed\" pointer past E's commit record. Therefore,\ntransactions C, D, E will fall through their tests without calling\nXLogWrite at all. 
When F gets the lock, it will conclude that it\nshould write the data queued up to that time, and so it will handle\nthe commit records for G, H, etc. (The fact that lwlock.c will release\nwaiters in order of arrival is important here --- we want C, D, E to\nget out of the queue before F decides it needs to write.)\n\n\nIt seems to me that this behavior will provide fairly effective\nganging of COMMIT flushes under load. And it's self-tuning; no need\nto fiddle with weird parameters like commit_siblings. We automatically\ngang as many COMMITs as arrive during the time it takes to write and\nflush the previous gang of COMMITs.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 20:16:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Analysis of ganged WAL writes" }, { "msg_contents": "I said:\n> There is a simple error\n> in the current code that is easily corrected: in XLogFlush(), the\n> wait to acquire WALWriteLock should occur before, not after, we try\n> to acquire WALInsertLock and advance our local copy of the write\n> request pointer. (To be exact, xlog.c lines 1255-1269 in CVS tip\n> ought to be moved down to before line 1275, inside the \"if\" that\n> tests whether we are going to call XLogWrite.)\n\nThat patch was not quite right, as it didn't actually flush the\nlater-arriving data. The correct patch is\n\n*** src/backend/access/transam/xlog.c.orig\tThu Sep 26 18:58:33 2002\n--- src/backend/access/transam/xlog.c\tSun Oct 6 18:45:57 2002\n***************\n*** 1252,1279 ****\n \t/* done already? 
*/\n \tif (!XLByteLE(record, LogwrtResult.Flush))\n \t{\n- \t\t/* if something was added to log cache then try to flush this too */\n- \t\tif (LWLockConditionalAcquire(WALInsertLock, LW_EXCLUSIVE))\n- \t\t{\n- \t\t\tXLogCtlInsert *Insert = &XLogCtl->Insert;\n- \t\t\tuint32\t\tfreespace = INSERT_FREESPACE(Insert);\n- \n- \t\t\tif (freespace < SizeOfXLogRecord)\t/* buffer is full */\n- \t\t\t\tWriteRqstPtr = XLogCtl->xlblocks[Insert->curridx];\n- \t\t\telse\n- \t\t\t{\n- \t\t\t\tWriteRqstPtr = XLogCtl->xlblocks[Insert->curridx];\n- \t\t\t\tWriteRqstPtr.xrecoff -= freespace;\n- \t\t\t}\n- \t\t\tLWLockRelease(WALInsertLock);\n- \t\t}\n \t\t/* now wait for the write lock */\n \t\tLWLockAcquire(WALWriteLock, LW_EXCLUSIVE);\n \t\tLogwrtResult = XLogCtl->Write.LogwrtResult;\n \t\tif (!XLByteLE(record, LogwrtResult.Flush))\n \t\t{\n! \t\t\tWriteRqst.Write = WriteRqstPtr;\n! \t\t\tWriteRqst.Flush = record;\n \t\t\tXLogWrite(WriteRqst);\n \t\t}\n \t\tLWLockRelease(WALWriteLock);\n--- 1252,1284 ----\n \t/* done already? */\n \tif (!XLByteLE(record, LogwrtResult.Flush))\n \t{\n \t\t/* now wait for the write lock */\n \t\tLWLockAcquire(WALWriteLock, LW_EXCLUSIVE);\n \t\tLogwrtResult = XLogCtl->Write.LogwrtResult;\n \t\tif (!XLByteLE(record, LogwrtResult.Flush))\n \t\t{\n! \t\t\t/* try to write/flush later additions to XLOG as well */\n! \t\t\tif (LWLockConditionalAcquire(WALInsertLock, LW_EXCLUSIVE))\n! \t\t\t{\n! \t\t\t\tXLogCtlInsert *Insert = &XLogCtl->Insert;\n! \t\t\t\tuint32\t\tfreespace = INSERT_FREESPACE(Insert);\n! \n! \t\t\t\tif (freespace < SizeOfXLogRecord)\t/* buffer is full */\n! \t\t\t\t\tWriteRqstPtr = XLogCtl->xlblocks[Insert->curridx];\n! \t\t\t\telse\n! \t\t\t\t{\n! \t\t\t\t\tWriteRqstPtr = XLogCtl->xlblocks[Insert->curridx];\n! \t\t\t\t\tWriteRqstPtr.xrecoff -= freespace;\n! \t\t\t\t}\n! \t\t\t\tLWLockRelease(WALInsertLock);\n! \t\t\t\tWriteRqst.Write = WriteRqstPtr;\n! \t\t\t\tWriteRqst.Flush = WriteRqstPtr;\n! \t\t\t}\n! \t\t\telse\n! \t\t\t{\n! 
\t\t\t\tWriteRqst.Write = WriteRqstPtr;\n! \t\t\t\tWriteRqst.Flush = record;\n! \t\t\t}\n \t\t\tXLogWrite(WriteRqst);\n \t\t}\n \t\tLWLockRelease(WALWriteLock);\n\n\nTo test this, I made a modified version of pgbench in which each\ntransaction consists of a simple\n\tinsert into table_NNN values(0);\nwhere each client thread has a separate insertion target table.\nThis is about the simplest transaction I could think of that would\ngenerate a WAL record each time.\n\nRunning this modified pgbench with postmaster parameters\n\tpostmaster -i -N 120 -B 1000 --wal_buffers=250\nand all other configuration settings at default, CVS tip code gives me\na pretty consistent 115-118 transactions per second for anywhere from\n1 to 100 pgbench client threads. This is exactly what I expected,\nsince the database (including WAL file) is on a 7200 RPM SCSI drive.\nThe theoretical maximum rate of sync'd writes to the WAL file is\ntherefore 120 per second (one per disk revolution), but we lose a little\nbecause once in awhile the disk has to seek to a data file.\n\nInserting the above patch, and keeping all else the same, I get:\n\n$ mybench -c 1 -t 10000 bench1\nnumber of clients: 1\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 10000/10000\ntps = 116.694205 (including connections establishing)\ntps = 116.722648 (excluding connections establishing)\n\n$ mybench -c 5 -t 2000 -S -n bench1\nnumber of clients: 5\nnumber of transactions per client: 2000\nnumber of transactions actually processed: 10000/10000\ntps = 282.808341 (including connections establishing)\ntps = 283.656898 (excluding connections establishing)\n\n$ mybench -c 10 -t 1000 bench1\nnumber of clients: 10\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\ntps = 443.131083 (including connections establishing)\ntps = 447.406534 (excluding connections establishing)\n\n$ mybench -c 50 -t 200 bench1\nnumber of clients: 50\nnumber of 
transactions per client: 200\nnumber of transactions actually processed: 10000/10000\ntps = 416.154173 (including connections establishing)\ntps = 436.748642 (excluding connections establishing)\n\n$ mybench -c 100 -t 100 bench1\nnumber of clients: 100\nnumber of transactions per client: 100\nnumber of transactions actually processed: 10000/10000\ntps = 336.449110 (including connections establishing)\ntps = 405.174237 (excluding connections establishing)\n\nCPU loading goes from 80% idle at 1 client to 50% idle at 5 clients\nto <10% idle at 10 or more.\n\nSo this does seem to be a nice win, and unless I hear objections\nI will apply it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Oct 2002 19:07:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "On Sun, 2002-10-06 at 18:07, Tom Lane wrote:\n> \n> CPU loading goes from 80% idle at 1 client to 50% idle at 5 clients\n> to <10% idle at 10 or more.\n> \n> So this does seem to be a nice win, and unless I hear objections\n> I will apply it ...\n> \n\nWow Tom! That's wonderful! On the other hand, maybe people needed the\nextra idle CPU time that was provided by the unpatched code. ;)\n\nGreg", "msg_date": "06 Oct 2002 18:35:12 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "On Sun, 2002-10-06 at 19:35, Greg Copeland wrote:\n> On Sun, 2002-10-06 at 18:07, Tom Lane wrote:\n> > \n> > CPU loading goes from 80% idle at 1 client to 50% idle at 5 clients\n> > to <10% idle at 10 or more.\n> > \n> > So this does seem to be a nice win, and unless I hear objections\n> > I will apply it ...\n> > \n> \n> Wow Tom! That's wonderful! On the other hand, maybe people needed the\n> extra idle CPU time that was provided by the unpatched code. ;)\n\nNaw. Distributed.net finally got through RC5-64. 
Lots of CPU to spare\nnow.\n\n-- \n Rod Taylor\n\n", "msg_date": "06 Oct 2002 20:59:45 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "Tom Lane kirjutas E, 07.10.2002 kell 01:07:\n> \n> To test this, I made a modified version of pgbench in which each\n> transaction consists of a simple\n> \tinsert into table_NNN values(0);\n> where each client thread has a separate insertion target table.\n> This is about the simplest transaction I could think of that would\n> generate a WAL record each time.\n> \n> Running this modified pgbench with postmaster parameters\n> \tpostmaster -i -N 120 -B 1000 --wal_buffers=250\n> and all other configuration settings at default, CVS tip code gives me\n> a pretty consistent 115-118 transactions per second for anywhere from\n> 1 to 100 pgbench client threads. This is exactly what I expected,\n> since the database (including WAL file) is on a 7200 RPM SCSI drive.\n> The theoretical maximum rate of sync'd writes to the WAL file is\n> therefore 120 per second (one per disk revolution), but we lose a little\n> because once in awhile the disk has to seek to a data file.\n> \n> Inserting the above patch, and keeping all else the same, I get:\n> \n> $ mybench -c 1 -t 10000 bench1\n> number of clients: 1\n> number of transactions per client: 10000\n> number of transactions actually processed: 10000/10000\n> tps = 116.694205 (including connections establishing)\n> tps = 116.722648 (excluding connections establishing)\n> \n> $ mybench -c 5 -t 2000 -S -n bench1\n> number of clients: 5\n> number of transactions per client: 2000\n> number of transactions actually processed: 10000/10000\n> tps = 282.808341 (including connections establishing)\n> tps = 283.656898 (excluding connections establishing)\n\nin an ideal world this would be 5*120=600 tps. \n\nHave you any good any ideas what holds it back for the other 300 tps ? 
\n\nIf it has CPU utilisation of only 50% then there must be still some\nmoderate lock contention. \n\nbtw, what is the number for 1-5-10 clients with fsync off ? \n\n> $ mybench -c 10 -t 1000 bench1\n> number of clients: 10\n> number of transactions per client: 1000\n> number of transactions actually processed: 10000/10000\n> tps = 443.131083 (including connections establishing)\n> tps = 447.406534 (excluding connections establishing)\n> \n> CPU loading goes from 80% idle at 1 client to 50% idle at 5 clients\n> to <10% idle at 10 or more.\n> \n> So this does seem to be a nice win, and unless I hear objections\n> I will apply it ...\n\n3x speedup is not just nice, it's great ;)\n\n--------------\nHannu\n\n\n\n", "msg_date": "07 Oct 2002 13:01:12 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> in an ideal world this would be 5*120=600 tps. \n> Have you any good any ideas what holds it back for the other 300 tps ?\n\nWell, recall that the CPU usage was about 20% in the single-client test.\n(The reason I needed a variant version of pgbench is that this machine\nis too slow to do more than 120 TPC-B transactions per second anyway.)\n\nThat says that the best possible throughput on this test scenario is 5\ntransactions per disk rotation --- the CPU is just not capable of doing\nmore. I am actually getting about 4 xact/rotation for 10 or more\nclients (in fact it seems to reach that plateau at 8 clients, and be\nclose to it at 7). I'm inclined to think that the fact that it's 4 not\n5 is just a matter of \"not quite there\" --- there's some additional CPU\noverhead due to lock contention, etc, and any slowdown at all will cause\nit to miss making 5. The 20% CPU figure was approximate to begin with,\nanyway.\n\nThe other interesting question is why we're not able to saturate the\nmachine with only 4 or 5 clients. 
I think pgbench itself is probably\nto blame for that: it can't keep all its backend threads constantly\nbusy ... especially not when several of them report back transaction\ncompletion at essentially the same instant, as will happen under\nganged-commit conditions. There will be intervals where multiple\nbackends are waiting for pgbench to send a new command. That delay\nin starting a new command cycle is probably enough for them to \"miss the\nbus\" of getting included in the next commit write.\n\nThat's just a guess though; I don't have tools that would let me see\nexactly what's happening. Anyone else want to reproduce the test on\na different system and see what it does?\n\n> If it has CPU utilisation of only 50% then there must be still some\n> moderate lock contention. \n\nNo, that's I/O wait I think, forced by the quantization of the number\nof transactions that get committed per rotation.\n\n> btw, what is the number for 1-5-10 clients with fsync off ? \n\nAbout 640 tps at 1 and 5, trailing off to 615 at 10, and down to 450\nat 100 clients (now that must be lock contention...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 08:05:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "I wrote:\n> That says that the best possible throughput on this test scenario is 5\n> transactions per disk rotation --- the CPU is just not capable of doing\n> more. I am actually getting about 4 xact/rotation for 10 or more\n> clients (in fact it seems to reach that plateau at 8 clients, and be\n> close to it at 7).\n\nAfter further thought I understand why it takes 8 clients to reach full\nthroughput in this scenario. Assume that we have enough CPU oomph so\nthat we can process four transactions, but not five, in the time needed\nfor one revolution of the WAL disk. If we have five active clients\nthen the behavior will be like this:\n\n1. 
Backend A becomes ready to commit. It locks WALWriteLock and issues\na write/flush that will only cover its own commit record. Assume that\nit has to wait one full disk revolution for the write to complete (this\nwill be the steady-state situation).\n\n2. While A is waiting, there is enough time for B, C, D, and E to run\ntheir transactions and become ready to commit. All eventually block on\nWALWriteLock.\n\n3. When A finishes its write and releases WALWriteLock, B will acquire\nthe lock and initiate a write that (with my patch) will cover C, D, and\nE's commit records as well as its own.\n\n4. While B is waiting for the disk to spin, A receives a new transaction\nfrom its client, processes it, and becomes ready to commit. It blocks\non WALWriteLock.\n\n5. When B releases the lock, C, D, E acquire it and quickly fall\nthrough, seeing that they need do no work. Then A acquires the lock.\nGOTO step 1.\n\nSo with five active threads, we alternate between committing one\ntransaction and four transactions on odd and even disk revolutions.\n\nIt's pretty easy to see that with six or seven active threads, we\nwill alternate between committing two or three transactions and\ncommitting four. Only when we get to eight threads do we have enough\nbackends to ensure that four transactions are available to commit on\nevery disk revolution. This must be so because the backends that are\nreleased at the end of any given disk revolution will not be able to\nparticipate in the next group commit, if there is already at least\none backend ready to commit.\n\nSo this solution isn't perfect; it would still be nice to have a way to\ndelay initiation of the WAL write until \"just before\" the disk is ready\nto accept it. I dunno any good way to do that, though.\n\nI went ahead and committed the patch for 7.3, since it's simple and does\noffer some performance improvement. 
But maybe we can think of something\nbetter later on...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 13:42:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "On Tue, 2002-10-08 at 00:12, Curtis Faith wrote:\n> Tom, first of all, excellent job improving the current algorithm. I'm glad\n> you look at the WALCommitLock code.\n> \n> > This must be so because the backends that are\n> > released at the end of any given disk revolution will not be able to\n> > participate in the next group commit, if there is already at least\n> > one backend ready to commit.\n> \n> This is the major reason for my original suggestion about using aio_write.\n> The writes don't block each other and there is no need for a kernel level\n> exclusive locking call like fsync or fdatasync.\n> \n> Even the theoretical limit you mention of one transaction per revolution\n> per committing process seem like a significant bottleneck.\n> \n> Is committing 1 and 4 transactions on every revolution good? It's certainly\n> better than 1 per revolution.\n\nOf course committing all 5 at each rev would be better ;)\n\n> However, what if we could have done 3 transactions per process in the time\n> it took for a single revolution?\n\nI may be missing something obvious, but I don't see a way to get more\nthan 1 trx/process/revolution, as each previous transaction in that\nprocess must be written to disk before the next can start, and the only\nway it can be written to the disk is when the disk heads are on the\nright place and that happens exactly once per revolution. \n\nIn theory we could devise some clever page interleave scheme that would\nallow us to go like this: fill one page - write page to disk, commit\ntrx's - fill the page in next 1/3 of rev - write next page to disk ... 
,\nbut this will work only for some limited set ao WAL page sizes.\n\nIt could be possible to get near 5/trx/rev for 5 backends if we do the\nfollowing (A-E are backends from Toms explanation):\n\n1. write the page for A's trx to its proper pos P (wher P is page\nnumber)\n\n2. if after sync for A returns and we already have more transactions\nwaiting for write()+sync() of the same page, immediately write the\n_same_ page to pos P+N (where N is a tunable parameter). If N is small\nenough then P+N will be on the same cylinder for most cases and thus\nwill get transactions B-E also committed on the same rev.\n\n3. make sure that the last version will also be written to its proper\nplace before the end of log will overwrite P+N. (This may be tricky.)\n\n4. When restoring from WAL, always check for a page at EndPos+N for a\npossible newer version of last page.\n\nThis scheme requires page numbers+page versions to be stored in each\npage and could get us near 1 trx/backend/rev performance, but it's hard\nto tell if it is really useful in real life.\n\nThis could also possibly be extended to more than one \"end page\" and\nmore than one \"continuation end page copy\" to get better than 1\ntrx/backend/rev.\n\n-----------------\nHannu\n\n\n", "msg_date": "07 Oct 2002 23:32:54 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "On Tue, 2002-10-08 at 01:27, Tom Lane wrote:\n>\n> The scheme we now have (with my recent patch) essentially says that the\n> commit delay seen by any one transaction is at most two disk rotations.\n> Unfortunately it's also at least one rotation :-(, except in the case\n> where there is no contention, ie, no already-scheduled WAL write when\n> the transaction reaches the commit stage. 
It would be nice to be able\n> to say \"at most one disk rotation\" instead --- but I don't see how to\n> do that in the absence of detailed information about disk head position.\n> \n> Something I was toying with this afternoon: assume we have a background\n> process responsible for all WAL writes --- not only filled buffers, but\n> the currently active buffer. It periodically checks to see if there\n> are unwritten commit records in the active buffer, and if so schedules\n> a write for them. If this could be done during each disk rotation,\n> \"just before\" the disk reaches the active WAL log block, we'd have an\n> ideal solution. And it would not be too hard for such a process to\n> determine the right time: it could measure the drive rotational speed\n> by observing the completion times of successive writes to the same\n> sector, and it wouldn't take much logic to empirically find the latest\n> time at which a write can be issued and have a good probability of\n> hitting the disk on time. (At least, this would work pretty well given\n> a dedicated WAL drive, else there'd be too much interference from other\n> I/O requests.)\n> \n> However, this whole scheme falls down on the same problem we've run into\n> before: user processes can't schedule themselves with millisecond\n> accuracy. The writer process might be able to determine the ideal time\n> to wake up and make the check, but it can't get the Unix kernel to\n> dispatch it then, at least not on most Unixen. 
The typical scheduling\n> slop is one time slice, which is comparable to if not more than the\n> disk rotation time.\n\nStandard for Linux has been 100Hz time slice, but it is configurable for\nsome time.\n\nThe latest RedHat (8.0) is built with 500Hz that makes about 4\nslices/rev for 7200 rpm disks (2 for 15000rpm)\n \n> ISTM aio_write only improves the picture if there's some magic in-kernel\n> processing that makes this same kind of judgment as to when to issue the\n> \"ganged\" write for real, and is able to do it on time because it's in\n> the kernel. I haven't heard anything to make me think that that feature\n> actually exists. AFAIK the kernel isn't much more enlightened about\n> physical head positions than we are.\n\nAt least for open source kernels it could be possible to\n\n1. write a patch to kernel \n\nor \n\n2. get the authors of kernel aio interested in doing it.\n\nor\n\n3. the third possibility would be using some real-time (RT) OS or mixed\nRT/conventional OS where some threads can be scheduled for hard-RT .\nIn an RT os you are supposed to be able to do exactly what you describe.\n\n\nI think that 2 and 3 could be \"outsourced\" (the respective developers\ntalked into supporting it) as both KAIO and RT Linuxen/BSDs are probably\nalso inetersted in high-profile applications so they could boast that\n\"using our stuff enabled PostgreSQL database run twice as fast\".\n\nAnyway, getting to near-harware speeds for database will need more\nspecific support from OS than web browsing or compiling.\n\n---------------\nHannu\n\n\n\n\n\n\n\n", "msg_date": "08 Oct 2002 00:00:13 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "Tom, first of all, excellent job improving the current algorithm. 
I'm glad\nyou look at the WALCommitLock code.\n\n> This must be so because the backends that are\n> released at the end of any given disk revolution will not be able to\n> participate in the next group commit, if there is already at least\n> one backend ready to commit.\n\nThis is the major reason for my original suggestion about using aio_write.\nThe writes don't block each other and there is no need for a kernel level\nexclusive locking call like fsync or fdatasync.\n\nEven the theoretical limit you mention of one transaction per revolution\nper committing process seem like a significant bottleneck.\n\nIs committing 1 and 4 transactions on every revolution good? It's certainly\nbetter than 1 per revolution.\n\nHowever, what if we could have done 3 transactions per process in the time\nit took for a single revolution?\n\nThen we are looking at (1 + 4)/ 2 = 2.5 transactions per revolution versus\nthe theoretical maximum of (3 * 5) = 15 transactions per revolution if we\ncan figure out a way to do non-blocking writes that we can guarantee are on\nthe disk platter so we can return from commit.\n\nSeparating out whether or not aio is viable. Do you not agree that\neliminating the blocking would result in potentially a 6X improvement for\nthe 5 process case?\n\n>\n> So this solution isn't perfect; it would still be nice to have a way to\n> delay initiation of the WAL write until \"just before\" the disk is ready\n> to accept it. I dunno any good way to do that, though.\n\nI still think that it would be much faster to just keep writing the WAL log\nblocks when they fill up and have a separate process wake the committing\nprocess when the write completes. 
This would eliminate WAL writing as a\nbottleneck.\n\nI have yet to hear anyone say that this can't be done, only that we might\nnot want to do it because the code might not be clean.\n\nI'm generally only happy when I can finally remove a bottleneck completely,\nbut speeding one up by 3X like you have done is pretty damn cool for a day\nor two's work.\n\n- Curtis\n\n", "msg_date": "Mon, 7 Oct 2002 15:12:51 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> Even the theoretical limit you mention of one transaction per revolution\n> per committing process seem like a significant bottleneck.\n\nWell, too bad. If you haven't gotten your commit record down to disk,\nthen *you have not committed*. This is not negotiable. (If you think\nit is, then turn off fsync and quit worrying ;-))\n\nAn application that is willing to have multiple transactions in flight\nat the same time can open up multiple backend connections to issue those\ntransactions, and thereby perhaps beat the theoretical limit. But for\nserial transactions, there is not anything we can do to beat that limit.\n(At least not with the log structure we have now. One could imagine\ndropping a commit record into the nearest one of multiple buckets that\nare carefully scattered around the disk. But exploiting that would take\nnear-perfect knowledge about disk head positioning; it's even harder to\nsolve than the problem we're considering now.)\n\n> I still think that it would be much faster to just keep writing the WAL log\n> blocks when they fill up and have a separate process wake the commiting\n> process when the write completes. This would eliminate WAL writing as a\n> bottleneck.\n\nYou're failing to distinguish total throughput to the WAL drive from\nresponse time seen by any one transaction. 
Yes, a policy of writing\neach WAL block once when it fills would maximize potential throughput,\nbut it would also mean a potentially very large delay for a transaction\nwaiting to commit. The lower the system load, the worse the performance\non that scale.\n\nThe scheme we now have (with my recent patch) essentially says that the\ncommit delay seen by any one transaction is at most two disk rotations.\nUnfortunately it's also at least one rotation :-(, except in the case\nwhere there is no contention, ie, no already-scheduled WAL write when\nthe transaction reaches the commit stage. It would be nice to be able\nto say \"at most one disk rotation\" instead --- but I don't see how to\ndo that in the absence of detailed information about disk head position.\n\nSomething I was toying with this afternoon: assume we have a background\nprocess responsible for all WAL writes --- not only filled buffers, but\nthe currently active buffer. It periodically checks to see if there\nare unwritten commit records in the active buffer, and if so schedules\na write for them. If this could be done during each disk rotation,\n\"just before\" the disk reaches the active WAL log block, we'd have an\nideal solution. And it would not be too hard for such a process to\ndetermine the right time: it could measure the drive rotational speed\nby observing the completion times of successive writes to the same\nsector, and it wouldn't take much logic to empirically find the latest\ntime at which a write can be issued and have a good probability of\nhitting the disk on time. (At least, this would work pretty well given\na dedicated WAL drive, else there'd be too much interference from other\nI/O requests.)\n\nHowever, this whole scheme falls down on the same problem we've run into\nbefore: user processes can't schedule themselves with millisecond\naccuracy. 
The writer process might be able to determine the ideal time\nto wake up and make the check, but it can't get the Unix kernel to\ndispatch it then, at least not on most Unixen. The typical scheduling\nslop is one time slice, which is comparable to if not more than the\ndisk rotation time.\n\nISTM aio_write only improves the picture if there's some magic in-kernel\nprocessing that makes this same kind of judgment as to when to issue the\n\"ganged\" write for real, and is able to do it on time because it's in\nthe kernel. I haven't heard anything to make me think that that feature\nactually exists. AFAIK the kernel isn't much more enlightened about\nphysical head positions than we are.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 16:27:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "> Well, too bad. If you haven't gotten your commit record down to disk,\n> then *you have not committed*. This is not negotiable. (If you think\n> it is, then turn off fsync and quit worrying ;-))\n\nI've never disputed this, so if I seem to be suggesting that, I've been\nunclear. I'm just assuming that the disk can get a confirmation back to the\nINSERTing process in much less than one rotation. This would allow that\nprocess to start working again, perhaps in time to complete another\ntransaction.\n\n> An application that is willing to have multiple transactions in flight\n> at the same time can open up multiple backend connections to issue those\n> transactions, and thereby perhaps beat the theoretical limit. But for\n> serial transactions, there is not anything we can do to beat that limit.\n> (At least not with the log structure we have now. One could imagine\n> dropping a commit record into the nearest one of multiple buckets that\n> are carefully scattered around the disk. 
But exploiting that would take\n> near-perfect knowledge about disk head positioning; it's even harder to\n> solve than the problem we're considering now.)\n\nConsider the following scenario:\n\nTime measured in disk rotations.\n\nTime 1.00 - Process A Commits - Causing aio_write to log and wait\nTime 1.03 - aio completes for Process A's write - wakes process A\nTime 1.05 - Process A Starts another transaction.\nTime 1.08 - Process A Commits\netc.\n\nI agree that a process can't proceed from a commit until it receives\nconfirmation of the write, but if the write has hit the disk before a full\nrotation then the process should be able to continue processing new\ntransactions.\n\n> You're failing to distinguish total throughput to the WAL drive from\n> response time seen by any one transaction. Yes, a policy of writing\n> each WAL block once when it fills would maximize potential throughput,\n> but it would also mean a potentially very large delay for a transaction\n> waiting to commit. The lower the system load, the worse the performance\n> on that scale.\n\nYou are assuming fsync or fdatasync behavior, I am not. There would be no\ndelay under the scenario I describe. The transaction would exit commit as\nsoon as the confirmation of the write is received from the aio system. I\nwould hope that with a decent aio implementation this would generally be\nmuch less than one rotation.\n\nI think that the single transaction response time is very important and\nthat's one of the chief problems I sought to solve when I proposed\naio_writes for logging in my original email many moons ago.\n\n> ISTM aio_write only improves the picture if there's some magic in-kernel\n> processing that makes this same kind of judgment as to when to issue the\n> \"ganged\" write for real, and is able to do it on time because it's in\n> the kernel. I haven't heard anything to make me think that that feature\n> actually exists. 
AFAIK the kernel isn't much more enlightened about\n> physical head positions than we are.\n\nAll aio_write has to do is pass the write off to the device as soon as\nit gets it, bypassing the system buffers. The code on the disk's\nhardware is very good at knowing when the disk head is coming. IMHO,\nbypassing the kernel's less than enlightened writing system is the main\npoint of using aio_write.\n\n- Curtis\n\n", "msg_date": "Mon, 7 Oct 2002 17:06:47 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "On Mon, 2002-10-07 at 16:06, Curtis Faith wrote:\n> > Well, too bad. If you haven't gotten your commit record down to disk,\n> > then *you have not committed*. This is not negotiable. (If you think\n> > it is, then turn off fsync and quit worrying ;-))\n> \n\nAt this point, I think we've come full circle. Can we all agree that\nthis concept is a *potential* source of improvement in a variety of\nsituations? If we can agree on that, perhaps we should move to the next\nstage in the process, validation?\n\nHow long do you think it would take to develop something worthy of\ntesting? Do we have known test cases which will properly (in)validate\nthe approach that everyone will agree to? If code is reasonably clean\nso as to pass the smell test and shows a notable performance boost, will\nit be seriously considered for inclusion? 
If so, I assume it would\nbecome a configure option (--with-aio)?\n\n\nRegards,\n\n\tGreg", "msg_date": "07 Oct 2002 16:18:36 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "Greg Copeland wrote:\n<snip>\n> If so, I assume it would become a configure option (--with-aio)?\n\nOr maybe a GUC \"use_aio\" ?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> Regards,\n> \n> Greg\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 08 Oct 2002 07:23:38 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n>> Well, too bad. If you haven't gotten your commit record down to disk,\n>> then *you have not committed*. This is not negotiable. (If you think\n>> it is, then turn off fsync and quit worrying ;-))\n\n> I've never disputed this, so if I seem to be suggesting that, I've been\n> unclear. I'm just assuming that the disk can get a confirmation back to the\n> INSERTing process in much less than one rotation.\n\nYou've spent way too much time working with lying IDE drives :-(\n\nDo you really trust a confirm-before-write drive to make that write if\nit loses power? I sure don't.\n\nIf you do trust your drive to hold that data across a crash, then ISTM\nthe whole problem goes away anyway, as writes will \"complete\" quite\nindependently of disk rotation. 
My Linux box has no problem claiming\nthat it's completing several thousand TPS with a single client ... and\nyes, fsync is on, but it's using an IDE drive, and I don't know how to\ndisable confirm-before-write on that drive. (That's why I did these\ntests on my old slow HP hardware.) Basically, the ganging of commit\nwrites happens inside the disk controller on a setup like that. You\nstill don't need aio_write --- unless perhaps to reduce wastage of IDE\nbus bandwidth by repeated writes, but that doesn't seem to be a scarce\nresource in this context.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 17:28:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "Well, I was thinking that aio may not be available on all platforms,\nthus the conditional compile option. On the other hand, wouldn't you\npretty much want it either on or off for all instances? I can see that\nit would be nice for testing though. ;)\n\nGreg\n\nOn Mon, 2002-10-07 at 16:23, Justin Clift wrote:\n> Greg Copeland wrote:\n> <snip>\n> > If so, I assume it would become a configure option (--with-aio)?\n> \n> Or maybe a GUC \"use_aio\" ?\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> > \n> > Regards,\n> > \n> > Greg\n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. 
He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi", "msg_date": "07 Oct 2002 16:32:25 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "> I may be missing something obvious, but I don't see a way to get more\n> than 1 trx/process/revolution, as each previous transaction in that\n> process must be written to disk before the next can start, and the only\n> way it can be written to the disk is when the disk heads are on the\n> right place and that happens exactly once per revolution.\n\nOkay, consider the following scenario.\n\n1) Process A commits when the platter is at 0 degrees.\n2) There are enough XLog writes from other processes to fill 1/4 platter\nrotation worth of log or 90 degrees. The SCSI drive writes the XLog commit\nrecord and keeps writing other log entries as the head rotates.\n3) Process A receives a confirmation of the write before the platter\nrotates 60 degrees.\n4) Process A continues and adds another commit before the platter rotates\nto 90 degrees.\n\nThis should be very possible and more and more likely in the future as CPUs\nget faster and faster relative to disks.\n\nI'm not suggesting this would happen all the time, just that it's possible\nand that an SMP machine with good CPUs and a fast I/O subsystem should be\nable to keep the log writing at close to I/O bandwidth limits.\n\nThe case of bulk inserts is one where I would expect that for simple tables\nwe should be able to peg the disks given today's hardware and enough\ninserting processes.\n\n- Curtis\n\n", "msg_date": "Mon, 7 Oct 2002 19:04:58 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "Curtis Faith kirjutas T, 08.10.2002 kell 01:04:\n> > I may be missing something obvious, but I don't see a way to get more\n> > 
than 1 trx/process/revolution, as each previous transaction in that\n> > process must be written to disk before the next can start, and the only\n> > way it can be written to the disk is when the disk heads are on the\n> > right place and that happens exactly once per revolution.\n> \n> Okay, consider the following scenario.\n> \n> 1) Process A commits when the platter is at 0 degrees.\n> 2) There are enough XLog writes from other processes to fill 1/4 platter\n> rotation worth of log or 90 degrees. The SCSI drive writes the XLog commit\n> record and keeps writing other log entries as the head rotates.\n> 3) Process A receives a confirmation of the write before the platter\n> rotates 60 degrees.\n> 4) Process A continues and adds another commit before the platter rotates\n> to 90 degrees.\n\nfor this scheme to work there are _very_ specific conditions to be met\n(i.e. the number of writing processes and size of WAL records has to\nmeet very strict criteria)\n\nI'd suspect that this will not work for 95% of cases.\n\n> This should be very possible and more and more likely in the future as CPUs\n> get faster and faster relative to disks.\n\nYou example of >1 trx/proc/rev will wok _only_ if no more and no less\nthan 1/4 of platter is filled by _other_ log writers.\n\n> I'm not suggesting this would happen all the time, just that it's possible\n> and that an SMP machine with good CPUs and a fast I/O subsystem should be\n> able to keep the log writing at close to I/O bandwidth limits.\n\nKeeping log writing parallel and close to I/O bandwidth limits is the\nreal key issue there. 
\n\nToms test case of fast inserting of small records by relatively small\nnumber of processes (forcing repeated writes of the same page) seems\nalso a border case.\n\n> The case of bulk inserts is one where I would expect that for simple tables\n> we should be able to peg the disks given today's hardware and enough\n> inserting processes.\n\nbulk inserts should probably be chunked at higher level by inserting\nseveral records inside a single transaction.\n\n-----------\nHannu\n\n", "msg_date": "08 Oct 2002 09:36:49 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "> You example of >1 trx/proc/rev will wok _only_ if no more and no less\n> than 1/4 of platter is filled by _other_ log writers.\n\nNot really, if 1/2 the platter has been filled we'll still get in one more\ncommit in for a given rotation. If more than a rotation's worth of writing\nhas occurred that means we are bumping into the limit of disk I/O and that\nit the limit that we can't do anything about without doing interleaved log\nfiles.\n\n> > The case of bulk inserts is one where I would expect that for\n> simple tables\n> > we should be able to peg the disks given today's hardware and enough\n> > inserting processes.\n>\n> bulk inserts should probably be chunked at higher level by inserting\n> several records inside a single transaction.\n\nAgreed, that's much more efficient. 
There are plenty of situations where\nthe inserts and updates are ongoing rather than initial, Shridhar's\nreal-world test or TPC benchmarks, for example.\n\n- Curtis\n\n", "msg_date": "Tue, 8 Oct 2002 10:00:06 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "tgl@sss.pgh.pa.us (Tom Lane) writes:\n\n[snip]\n> \n> So this does seem to be a nice win, and unless I hear objections\n> I will apply it ...\n> \n\nIt does indeed look like a great improvement, so is the fix\ngoing to be merged to the 7.3 branch or is it too late for that?\n\n _\nMats Lofkvist\nmal@algonet.se\n", "msg_date": "18 Oct 2002 11:21:03 +0200", "msg_from": "Mats Lofkvist <mal@algonet.se>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "Mats Lofkvist <mal@algonet.se> writes:\n> It does indeed look like a great improvement, so is the fix\n> going to be merged to the 7.3 branch or is it too late for that?\n\nYes, been there done that ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Oct 2002 10:00:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Analysis of ganged WAL writes " } ]
[ { "msg_contents": "Hello hackers,\n\nWhat's the naming convention for new functions/variables? I've seen\nthis_way() and ThisWay() used without visible distinction. I've used\nboth in previously submitted and accepted patches...\n\nDoes it matter?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n", "msg_date": "Sun, 6 Oct 2002 17:58:04 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Naming convention" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> What's the naming convention for new functions/variables? I've seen\n> this_way() and ThisWay() used without visible distinction. I've used\n> both in previously submitted and accepted patches...\n> Does it matter?\n\nConsistency? We don't need no steenking consistency ;-)\n\nSeriously, you can find a wide range of naming conventions in the PG\nsources. It might be better if the range weren't so wide, but I doubt\nanyone really wants to engage in wholesale renaming (let alone getting\ninto the flamewars that would ensue if we tried to pick a One True\nNaming Style).\n\nI'd suggest conforming to the namestyle that you see in code closely\nrelated to what you are doing, or at least some namestyle you can find\nprecedent for somewhere in the backend. Beyond that, no one will\nquestion you.\n\nMy own two cents: pay more attention to the semantic content of your\nnames, and not so much to how you capitalize 'em. FooBar() is a useless\nname no matter how beautifully you present it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Oct 2002 23:34:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Naming convention " } ]
[ { "msg_contents": "Hello,\nI am looking at moving our company away from MS SQL. Have been looking at\nDB2 and it looks to have some good features. Now wondering if PostgreSQL\ncould be a viable alternative. I have a few questions though:\n1. What is the PostgreSQL equivalent to stored procedures, and can they be\nwritten in another language such as Java?\n2. How well is Java supported for developing DB applications using PG?\n3. What are the limitations of PG compared to DB2, Oracle, and Sybase?\n\nThanks\nBen\n\n\n", "msg_date": "Mon, 07 Oct 2002 02:00:23 GMT", "msg_from": "\"Benjamin Stewart\" <bstewart@NOSPAM.waterwerks.com.au>", "msg_from_op": true, "msg_subject": "Moving to PostGres" }, { "msg_contents": "In article <XU5o9.11017$vg.28049@news-server.bigpond.net.au>,\nBenjamin Stewart <bstewart@NOSPAM.waterwerks.com.au> wrote:\n\n>I am looking at moving our company away from MS SQL.\n\n\nHere's a good place to start:\n\n http://techdocs.postgresql.org/redir.php?link=/techdocs/sqlserver2pgsql.php\n\n--\nhttp://www.spinics.net/linux/\n", "msg_date": "7 Oct 2002 02:37:08 GMT", "msg_from": "ellis@no.spam ()", "msg_from_op": false, "msg_subject": "Re: Moving to PostGres" }, { "msg_contents": "\"Benjamin Stewart\" <bstewart@NOSPAM.waterwerks.com.au> writes:\n> 1. What is the PostgreSQL equivalent to stored procedures, and can they be\n> written in another language such as Java?\n\nPostgreSQL supports user-defined functions; in 7.3 (currently in beta)\nthey can return sets of tuples.\n\nYou can define functions in Java using http://pljava.sf.net , or in a\nvariety of other languages (Perl, Python, Tcl, Ruby, C, PL/PgSQL, SQL,\nsh, etc.)\n\n> 2. How well is Java supported for developing DB applications using\n> PG?\n\n\"Pretty well\", I guess. 
If you have a specific question, ask it.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "07 Oct 2002 12:18:46 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Moving to PostGres" } ]
[ { "msg_contents": "Hello hackers,\n\nI'm trying to get something from pg_filedump. However, the version\npublished in sources.redhat.com/rhdb doesn't grok a lot of changes in\ncurrent CVS. I changed all those and made it compile... but it looks like\nthat's only the easy part. I get bogus values everywhere (block sizes,\nitem numbers, etc).\n\nDoes somebody know whether it's maintained for current versions?\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"Cuando miro a alguien, mas me atrae como cambia que quien es\" (J. Binoche)\n", "msg_date": "Mon, 7 Oct 2002 00:42:15 -0400", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": true, "msg_subject": "pg_filedump" }, { "msg_contents": "Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> I'm trying to get something from pg_filedump. However, the version\n> published in sources.redhat.com/rhdb doesn't grok a lot of changes in\n> current CVS. I changed all those and made it compile... but it looks like\n> that's only the easy part. I get bogus values everywhere (block sizes,\n> item numbers, etc).\n> Does somebody know whether it's maintained for current versions?\n\nAFAIK, no one has yet updated it for 7.3's changes in tuple header\nformat. That needs to get done sometime soon ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 10:21:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_filedump " } ]
[ { "msg_contents": "Thank you very much for your enlightened comment, it worked a treat.\n\nI do not seem to be able to find references to this kind of useful\ninformation in the PostgreSQL online manual or in books such as Bruce\nMomjian's 'PostgreSQL: Introduction and Concepts'. Where is this info to be\nfound other than the mailing list?\n\nThanks again.\nRegards\nSteve\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: 04 October 2002 15:48\nTo: Steve King\nCc: PostgreSQL-development\nSubject: Re: [HACKERS] Bad rules\n\n\nSteve King <steve.king@ecmsys.co.uk> writes:\n> I am using postgres 7.2, and have a rule on a table which causes a notify if\n> an insert/update/delete is performed on the table.\n> The table is very very small.\n> When performing a simple (very simple) update on the table this takes\n> about 3 secs, when I remove the rule it is virtually instantaneous.\n> The rest of the database seems to perform fine, have you any ideas or come\n> across this before??\n\nLet's see the rule exactly? NOTIFY per se is not slow in my experience.\n\n(One thing to ask: have you done a VACUUM FULL on pg_listener in recent\nmemory? Heavy use of LISTEN/NOTIFY does tend to bloat that table if you\ndon't keep after it with VACUUM.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 7 Oct 2002 09:02:04 +0100", "msg_from": "Steve King <steve.king@ecmsys.co.uk>", "msg_from_op": true, "msg_subject": "Re: Bad rules" } ]
[ { "msg_contents": "\n> > Keep in mind that we support platforms without O_DSYNC. I am not\n> > sure whether there are any that don't have O_SYNC either, but I am\n> > fairly sure that we measured O_SYNC to be slower than fsync()s on\n> > some platforms.\n\nThis measurement is quite understandable, since the current software \ndoes 8k writes, and the OS only has a chance to write bigger blocks in the\nwrite+fsync case. In the O_SYNC case you need to group bigger blocks yourself.\n(Bigger blocks are essential for max IO.)\n\nI am still convinced that writing bigger blocks would allow the fastest\nsolution. But reading the recent posts, the solution might only be to change\nthe current \"loop foreach dirty 8k WAL buffer, write 8k\" to one or two large \nwrite calls. \n\nAndreas\n", "msg_date": "Mon, 7 Oct 2002 10:42:44 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Proposed LogWriter Scheme,\n\tWAS: Potential Large Performance Gain in WAL synching" } ]
[ { "msg_contents": "Hello, \n\nCurrently I am downloading the e-mails from different accounts in an \nInternet cafe in Strasbourg and transferring them to my home network \n(Debian Samba server), where they are filtered to the user accounts \nwith floppies... :-) \n\nBecause I do not like attachments in my mails, I leave them under WinSuck \nin the dir c:/windows/temp . ;-))\n\nNow the automatic virus scanner has detected the Bugbear virus, and I would like \nto know what it does, because I cannot try it under WINE or WfW \n3.11, which gives me a Win32s error. ;-)\n\nOh yes, I already have a collection of more than 2300 viruses which \nwork only under 32-bit WinSuck. They do not infect WfW 3.11 :-)\n\nThanks \nMichelle\n\n\nAt 17:24 2002-10-06 +0200, Christoph Strebin wrote:\n>\n>I have a problem similar to that described by Shaw Terwilliger on\n>2001-03-14 (Subject: Case Insensitive CHECK CONSTRAINTs):\n>\n>I need some case insensitive char/varchar columns. A unique index on\n>lower(col_name) wo\n>\n>Attachment Converted: \"C:\\WINDOWS\\TEMP\\resume.exe\"\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n>\n> ########## Get the Power of Debian/GNU-Linux ##########\n-- \nRegistered Linux-User #280138 with the Linux Counter, http://counter.li.org.\n\n", "msg_date": "Mon, 07 Oct 2002 17:31:51 +0200", "msg_from": "Michelle Konzack <linux.mailinglists@freenet.de>", "msg_from_op": true, "msg_subject": "Re: Case insensitive columns" } ]
[ { "msg_contents": "\n> if i'm not mistaken, a char(n)/varchar(n) column is stored as a 32-bit\n> integer specifying the length followed by as many characters as the\n> length tells. On 32-bit Intel hardware this structure is aligned on a\n> 4-byte boundary.\n\nYes.\n\n> | opc0 char (3) no no 8 4\n> | opc1 char (3) no no 8 4\n> | opc2 char (3) no no 8 4\n\n> Hackers, do you think it's possible to hack together a quick and dirty\n> patch, so that string length is represented by one byte? IOW can a\n> database be built that doesn't contain any char/varchar/text value\n> longer than 255 characters in the catalog?\n\nSince he is only using fixchar, how about doing a fixchar implementation that \ndoes not store the length at all? It is the same for every row anyway!\n\nAndreas\n", "msg_date": "Mon, 7 Oct 2002 17:42:12 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Mon, Oct 07, 2002 at 05:42:12PM +0200, Zeugswetter Andreas SB SD wrote:\n> > Hackers, do you think it's possible to hack together a quick and dirty\n> > patch, so that string length is represented by one byte? IOW can a\n> > database be built that doesn't contain any char/varchar/text value\n> > longer than 255 characters in the catalog?\n> \n> Since he is only using fixchar, how about doing a fixchar implementation that \n> does not store the length at all? It is the same for every row anyway!\n\nRemember that in Unicode, 1 char != 1 byte. In fact, any encoding that's not\nLatin will have a problem. I guess you could put a warning on it: not for\nuse for Asian character sets. So what do you do if someone tries to insert\nsuch a string anyway?\n\nPerhaps a better approach is to vary the number of bytes used for the\nlength. 
So one byte for lengths < 64, two bytes for lengths < 16384.\nUnfortunately, two bits in the length are already used (IIRC) for other\nthings, making it a bit more tricky.\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Wed, 9 Oct 2002 08:51:11 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" } ]
[ { "msg_contents": "\nSince 7.3 is getting real close and docs are going to be going through\ntheir final once-overs. Please remember to have a look at the DocNote\ncomments that have been submitted. Once 7.3 is released the current\nnotes will be gone.\n\n\thttp://www.postgresql.org/idocs/checknotes.php\n\nThe above URL will show the notes and what they're in relation to with\na link to that particular piece of documentation.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 7 Oct 2002 12:20:37 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "reminder for those working on docs" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Since 7.3 is getting real close and docs are going to be going through\n> their final once-overs. Please remember to have a look at the DocNote\n> comments that have been submitted. Once 7.3 is released the current\n> notes will be gone.\n> \thttp://www.postgresql.org/idocs/checknotes.php\n> The above URL will show the notes and what they're in relation to with\n> a link to that particular piece of documentation.\n\nI've gone through this, and found only a couple of things that seemed\nworth migrating into the 7.3 docs. 
Perhaps I just have a low tolerance\nfor silliness tonight, but it seemed like the average quality of the\ncomments was a lot lower than in previous release cycles :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Oct 2002 22:16:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] reminder for those working on docs " }, { "msg_contents": "Tom Lane wrote:\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > Since 7.3 is getting real close and docs are going to be going through\n> > their final once-overs. Please remember to have a look at the DocNote\n> > comments that have been submitted. Once 7.3 is released the current\n> > notes will be gone.\n> > \thttp://www.postgresql.org/idocs/checknotes.php\n> > The above url will show the notes and what they're in relation to with\n> > a link to that particular piece of documentation.\n> \n> I've gone through this, and found only a couple of things that seemed\n> worth migrating into the 7.3 docs. Perhaps I just have a low tolerance\n> for silliness tonight, but it seemed like the average quality of the\n> comments was a lot lower than in previous release cycles :-(\n\nIt is possible our docs are getting clearer and there is less fiddling\nto do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 20 Oct 2002 22:18:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] reminder for those working on docs" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> I've gone through this, and found only a couple of things that seemed\n>> worth migrating into the 7.3 docs. 
Perhaps I just have a low tolerance\n>> for silliness tonight, but it seemed like the average quality of the\n>> comments was a lot lower than in previous release cycles :-(\n\n> It is possible our docs are getting clearer and there is less fiddling\n> to do.\n\nOr I'm just being a jerk tonight. Anyone else care to make a pass\nthrough the comments? It seemed like an awful lot of the stuff that\nwas credible at all was \"point <foo> should be mentioned over here in\npart <bar>\", which I mostly didn't agree with, but maybe someone else\nwould see it differently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Oct 2002 22:27:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] reminder for those working on docs " } ]
[ { "msg_contents": "Hello hackers,\n\nI'm thinking about the btree metapage locking problem.\n\nIn the current situation there are three operations that lock the\nmetapage:\n1. when looking for the root page (share lock, \"getroot\")\n2. when setting a new root page (exclusive lock, \"setroot\")\n3. when extending the relation (exclusive lock, \"extend\").\n\nNow, I want to add three more operations:\n4. add a page to the freelist (\"addfree\")\n5. get a page from the freelist (\"getfree\")\n6. shrink a relation (\"shrink\").\n\nThe shrink operation only happens when one tries to add the last page of\nthe relation to the freelist. I don't know if that's very common, but\nin case of relation truncating or massive deletion this is important.\n\n\nWhat I want is to be able to do getroot and setroot without being\nblocked by any of the other four. Of course the other four are all\nmutually exclusive.\n\nThere doesn't seem to be a way to acquire two different locks on the\nsame page, so I propose to lock the InvalidBlockNumer for the btree, and\nuse that as the lock to be obtained before doing extend, addfree,\ngetfree or shrink. The setroot/getroot operations would still use the\nlocking on BTREE_METAPAGE, so they can proceed whether the\nInvalidBlockNumber is blocked or not.\n\n\nOn a different topic: the freelist structure I think should be\nrepresented as a freelist int32 number (btm_numfreepages) in\nBTMetaPageData, and a pointer to the first BlockNumber. Adding a new\npage is done by sequentially scanning the list until a zero is found or\nthe end of the block is reached. Getting a page sequentially scans the\nsame list until a blocknumber > 0 is found (iff btm_numfreepages is\ngreater than zero). This allows for ~ 2000 free pages (very unlikely to\nactually happen if the shrink operation is in place).\n\nComments?
Another solution would be to have a separate page for the\nfreelist, but it seems to be a waste.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"Porque francamente, si para saber manejarse a uno mismo hubiera que\nrendir examen... ¿Quién es el machito que tendría carnet?\" (Mafalda)\n", "msg_date": "Mon, 7 Oct 2002 13:13:03 -0400", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": true, "msg_subject": "BTree metapage lock and freelist structure" }, { "msg_contents": "Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> There doesn't seem to be a way to acquire two different locks on the\n> same page, so I propose to lock the InvalidBlockNumer for the btree, and\n> use that as the lock to be obtained before doing extend, addfree,\n> getfree or shrink. The setroot/getroot operations would still use the\n> locking on BTREE_METAPAGE, so they can proceed whether the\n> InvalidBlockNumber is blocked or not.\n\nUnfortunately, blkno == InvalidBlockNumber is already taken --- it's the\nencoding of a relation-level lock. Compare LockRelation and LockPage\nin src/backend/storage/lmgr/lmgr.c.\n\nIt looks to me like nbtree.c is currently using LockPage(rel, 0) to\ninterlock relation extension --- this is the same convention used to\ninterlock heap extension. But access to the metapage is determined by\nbuffer locks, which are an independent facility.\n\nI agree with your conclusion that extend, shrink, addfree, and getfree\noperations may as well all use the same exclusive lock. I think it\ncould be LockPage(rel, 0), though.\n\nSince addfree/getfree need to touch the metapage, at first glance it\nwould seem that they need to do LockPage(rel, 0) *plus* get a buffer\nlock on the metapage. But I think it might work for them to get only\na shared lock on the metapage; the LockPage lock will guarantee\nexclusion against other addfree/getfree operations, and they don't\nreally care if someone else is changing the root pointer.
This is ugly\nbut given the limited set of operations that occur against a btree\nmetapage, it seems acceptable to me. Comments anyone?\n\n\n> On a different topic: the freelist structure I think should be\n> represented as a freelist int32 number (btm_numfreepages) in\n> BTMetaPageData, and a pointer to the first BlockNumber. Adding a new\n> page is done by sequentially scanning the list until a zero is found or\n> the end of the block is reached. Getting a page sequentially scans the\n> same list until a blocknumber > 0 is found (iff btm_numfreepages is\n> greater than zero). This allows for ~ 2000 free pages (very unlikely to\n> actually happen if the shrink operation is in place).\n\nNo more than 2000 free pages in an index? What makes you think that's\nunlikely? It's only 16 meg of free space, which would be about 1% of\na gigabyte-sized index. I think it's a bad idea to make such a\nhardwired assumption.\n\nThe original concept I believe was to have a linked list of free pages,\nie, each free page contains the pointer to the next one. This allows\nindefinite expansion. It does mean that you have to read in the\nselected free page to get the next-free-page pointer from it, which you\nhave to do to update the metapage, so that read has to happen while\nyou're still holding the getfree lock. That's annoying.\n\nPerhaps we could use a hybrid data structure: up to 2000 free pages\nstored in the metapage, with any remaining ones chained together.\nMost of the time, freelist operations wouldn't need to touch the\nchain, hopefully.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 14:31:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BTree metapage lock and freelist structure " } ]
[ { "msg_contents": "Hello to all the Doers of Postgres!!!\n\nLast time I went through forums, people spoke highly about 7.3 and its capability to do hot backups. My problem is if the database goes down and I lose my main data store, then I will lose all transactions back to the time I did the pg_dump.\n\nOther databases (i e Oracle) solves this by retaining their archive logs in some physically separate storage. So, when you lose your data, you can restore the data from back-up, and then apply your archive log, and avoid losing any committed transactions. \n\nPostgresql has been lacking this all along. I've installed postgres 7.3b2 and still don't see any archive's flushed to any other place. Please let me know how is hot backup procedure implemented in current 7.3 beta(2) release.\n\n\nThanks.\n", "msg_date": "Mon, 7 Oct 2002 13:21:06 -0400", "msg_from": "\"Sandeep Chadha\" <sandeep@newnetco.com>", "msg_from_op": true, "msg_subject": "Hot Backup " }, { "msg_contents": "\"Sandeep Chadha\" <sandeep@newnetco.com> writes:\n> Postgresql has been lacking this all along. I've installed postgres\n> 7.3b2 and still don't see any archive's flushed to any other\n> place. Please let me know how is hot backup procedure implemented in\n> current 7.3 beta(2) release.\n\nAFAIK no such hot backup feature has been implemented for 7.3 -- you\nappear to have been misinformed.\n\nThat said, I agree that would be a good feature to have.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "07 Oct 2002 13:48:00 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hot Backup" }, { "msg_contents": "On 7 Oct 2002 at 13:48, Neil Conway wrote:\n\n> \"Sandeep Chadha\" <sandeep@newnetco.com> writes:\n> > Postgresql has been lacking this all along. I've installed postgres\n> > 7.3b2 and still don't see any archive's flushed to any other\n> > place.
Please let me know how is hot backup procedure implemented in\n> > current 7.3 beta(2) release.\n> AFAIK no such hot backup feature has been implemented for 7.3 -- you\n> appear to have been misinformed.\n\nIs replication an answer to hot backup?\n\nBye\n Shridhar\n\n--\nink, n.:\tA villainous compound of tannogallate of iron, gum-arabic,\tand water, \nchiefly used to facilitate the infection of\tidiocy and promote intellectual \ncrime.\t\t-- H.L. Mencken\n\n", "msg_date": "Tue, 08 Oct 2002 11:17:18 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hot Backup" }, { "msg_contents": "Shridhar Daithankar wrote:\n> On 7 Oct 2002 at 13:48, Neil Conway wrote:\n> \n> > \"Sandeep Chadha\" <sandeep@newnetco.com> writes:\n> > > Postgresql has been lacking this all along. I've installed postgres\n> > > 7.3b2 and still don't see any archive's flushed to any other\n> > > place. Please let me know how is hot backup procedure implemented in\n> > > current 7.3 beta(2) release.\n> > AFAIK no such hot backup feature has been implemented for 7.3 -- you\n> > appear to have been misinformed.\n> \n> Is replication an answer to hot backup?\n\nWe already allow hot backups using pg_dump. If you mean point-in-time\nrecovery, we have a patch for that ready for 7.4.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 8 Oct 2002 20:26:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hot Backup" }, { "msg_contents": "Hi Sandeep. What you were calling Hot Backup is really called Point in \nTime Recovery (PITR).
Hot Backup means making a complete backup of the \ndatabase while it is running, something Postgresql has supported for a \nvery long time.\n\nOn Mon, 7 Oct 2002, Sandeep Chadha wrote:\n\n> Hello to all the Doers of Postgres!!!\n> \n> Last time I went through forums, people spoke highly about 7.3 and its \n> capability to do hot backups. My problem is if the database goes down \n> and I lose my main data store, then I will lose all transactions back \n> to the time I did the pg_dump.\n\nLet's make it clear that this kind of failure is EXTREMELY rare on real \ndatabase servers since they almost ALL run their data sets on RAID arrays. \nWhile it is possible to lost >1 drive at the same time and all your \ndatabase, it is probably more likely to have a bad memory chip corrupt \nyour data silently, or a bad query delete data it shouldn't.\n\nThat said, there IS work ongoing to provide this facility for Postgresql, \nbut I would much rather have work done on making large complex queries run \nfaster, or fix the little issues with foreign keys cause deadlocks.\n\n> Other databases (i e Oracle) solves this by retaining their archive \n> logs in some physically separate storage. So, when you lose your data, \n> you can restore the data from back-up, and then apply your archive log, \n> and avoid losing any committed transactions. \n> \n> > Postgresql has been lacking this all along. I've installed postgres \n> 7.3b2 and still don't see any archive's flushed to any other place. \n> Please let me know how is hot backup procedure implemented in current \n> 7.3 beta(2) release.\n\nAgain, you'll get better response to your questions if you call it \"point \nin time recovery\" or pitr. Hot backup is the wrong word, and something \nPostgresql DOES have.\n\nIt also supports WALs, which stands for Write ahead logs. These files \nstore what the database is about to do before it does it.
Should the \ndatabase crash with transactions pending, the server will come back up and \nprocess the pending transactions that are in the WAL files, ensuring the \nintegrity of your database.\n\nPoint in Time recovery is very nice, but it's the last step in many to \nensure a stable, coherent database, and will probably be in 7.4 or \nsomewhere around there. If you're running in a RAID array, then the loss \nof your datastore should be a very remote possibility.\n\n", "msg_date": "Wed, 9 Oct 2002 10:19:26 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Point in Time Recovery WAS: Hot Backup " }, { "msg_contents": "Hi All,\n\nI have a question related to postgres.I have an application which is transferring data from MSSQL to Postgres at some regular interval.\n\nI had around 20,000 data in MSSQL which should be transferred to postgres.\nAfter transferring around 12000 data I got a error message\nERROR: deadlock detected\nAnd after that data transfer has stopped.\n\nHow to avoid this error and what is the reason for this error.\n--\nBest Regards\n- Savita\n----------------------------------------------------\nHewlett Packard (India)\n+91 80 2051288 (Phone)\n847 1288 (HP Telnet)\n----------------------------------------------------", "msg_date": "Fri, 22 Nov 2002 11:54:00 +0530",
"msg_from": "Savita <savita@india.hp.com>", "msg_from_op": false, "msg_subject": "Question about DEADLOCK" }, { "msg_contents": "On Fri, 22 Nov 2002, Savita wrote:\n\n> Hi All,\n>\n> I have a question related to postgres.I have an application which is\n> transferring data from MSSQL to Postgres at some regular interval.\n>\n> I had around 20,000 data in MSSQL which should be transferred to postgres.\n> After transferring around 12000 data I got a error message\n> ERROR: deadlock detected\n> And after that data transfer has stopped.\n>\n> How to avoid this
error and what is the reason for this error.\n\nMost likely cause would be a concurrent transaction doing something that\ncaused a foreign key to get into a dead lock situation (the current\nimplementation grabs overly strong locks). In any case, we'll need more\ninformation about what else was going on probably.\n\n\n", "msg_date": "Thu, 21 Nov 2002 23:04:52 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Question about DEADLOCK" }, { "msg_contents": "Thanks Stephan,\n\nI wold like to know where to get LOCK manual/reference.\n\nAnd what is ther error number it will return when this error will come.Is there\nany work around this.HOw do I avoid this situation.\n\n--\nBest Regards\n- Savita\n----------------------------------------------------\nHewlett Packard (India)\n+91 80 2051288 (Phone)\n847 1288 (HP Telnet)\n----------------------------------------------------\n\n\nStephan Szabo wrote:\n\n> On Fri, 22 Nov 2002, Savita wrote:\n>\n> > Hi All,\n> >\n> > I have a question related to postgres.I have an application which is\n> > transferring data from MSSQL to Postgres at some regular interval.\n> >\n> > I had around 20,000 data in MSSQL which should be transferred to postgres.\n> > After transferring around 12000 data I got a error message\n> > ERROR: deadlock detected\n> > And after that data transfer has stopped.\n> >\n> > How to avoid this error and what is the reason for this error.\n>\n> Most likely cause would be a concurrent transaction doing something that\n> caused a foreign key to get into a dead lock situation (the current\n> implementation grabs overly strong locks). In any case, we'll need more\n> information about what else was going on probably.\n\n\n\n\n", "msg_date": "Fri, 22 Nov 2002 12:40:09 +0530", "msg_from": "Savita <savita@india.hp.com>", "msg_from_op": false, "msg_subject": "Re: Question about DEADLOCK" }, { "msg_contents": "I recommend reading Tom Lane's talk \"Concurrency Issues\" at OSCON2002.\nThis will answer all questions and some more.\n\n Best regards,\n\n Hans\n\n\n\nSavita wrote:\n\n>--------------880B7BB5722DEBFDA92AF2C3\n>Content-Type: text/plain; charset=us-ascii\n>Content-Transfer-Encoding: 7bit\n>\n>Hi All,\n>\n>I have a question related to postgres.I have an application which is transferring data from MSSQL to Postgres at some regular interval.\n>\n>I had around 20,000 data in MSSQL which should be transferred to postgres.\n>After transferring around 12000 data I got a error message\n>ERROR: deadlock detected\n>And after that data transfer has stopped.\n>\n>How to avoid this error and what is the reason for this error.\n>--\n>Best Regards\n>- Savita\n>----------------------------------------------------\n>Hewlett Packard (India)\n>+91 80 2051288 (Phone)\n>847 1288 (HP Telnet)\n>----------------------------------------------------\n>\n>\n>--------------880B7BB5722DEBFDA92AF2C3\n>Content-Type: text/html; charset=us-ascii\n>Content-Transfer-Encoding: 7bit\n>\n><!doctype html public \"-//w3c//dtd html 4.0 transitional//en\">\n><html>\n>Hi All,\n><p>I have a question related to postgres.I have an application which is\n>transferring data from MSSQL to Postgres at some regular interval.\n><p>I had around 20,000 data in MSSQL which should be transferred to postgres.\n><br>After transferring around 12000 data I got a error message\n><br><b><font color=\"#3333FF\">ERROR:
deadlock detected</font></b>\n><br><font color=\"#000000\">And after that data transfer has stopped.</font><font color=\"#000000\"></font>\n><p><font color=\"#000000\">How to avoid this error and what is the reason\n>for this error.</font>\n><br>--\n><br>Best Regards\n><br>- Savita\n><br>----------------------------------------------------\n><br>Hewlett Packard (India)\n><br>+91 80 2051288 (Phone)\n><br>847 1288 (HP Telnet)\n><br>----------------------------------------------------\n><br>&nbsp;</html>\n>\n>--------------880B7BB5722DEBFDA92AF2C3--\n>\n> \n>\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 22 Nov 2002 09:16:16 +0100", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Question about DEADLOCK" }, { "msg_contents": "\nOn Fri, 22 Nov 2002, Savita wrote:\n\n> I wold like to know where to get LOCK manual/reference.\n\nWell, there's the entry for the LOCK command, but I don't think\nthat's what you want, what exactly are you looking for?\n\n\n> And what is ther error number it will return when this error will come.Is there\n> any work around this.HOw do I avoid this situation.\n\nI don't know of an error number, but the string will will have that\ndeadlock detected in it. :) As for a workaround, since we don't know\nwhat's causing it, we can't really give you a workaround.\n\nWhat's the schema of the table you're transferring to and what else\nwas going on in the server at the time you got the deadlock?
I'd\nguess it was a foreign key, but that only applies if you've got\nforeign keys, for example.\n\n", "msg_date": "Fri, 22 Nov 2002 07:35:47 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Question about DEADLOCK" }, { "msg_contents": "Hello,\nYou try load data first and then referential constraints\n\nregards\nHaris Peco\nOn Friday 22 November 2002 07:10 am, Savita wrote:\n> Thanks Stephan,\n>\n> I wold like to know where to get LOCK manual/reference.\n>\n> And what is ther error number it will return when this error will come.Is\n> there any work around this.HOw do I avoid this situation.\n>\n> --\n> Best Regards\n> - Savita\n> ----------------------------------------------------\n> Hewlett Packard (India)\n> +91 80 2051288 (Phone)\n> 847 1288 (HP Telnet)\n> ----------------------------------------------------\n>\n> Stephan Szabo wrote:\n> > On Fri, 22 Nov 2002, Savita wrote:\n> > > Hi All,\n> > >\n> > > I have a question related to postgres.I have an application which is\n> > > transferring data from MSSQL to Postgres at some regular interval.\n> > >\n> > > I had around 20,000 data in MSSQL which should be transferred to\n> > > postgres. After transferring around 12000 data I got a error message\n> > > ERROR: deadlock detected\n> > > And after that data transfer has stopped.\n> > >\n> > > How to avoid this error and what is the reason for this error.\n> >\n> > Most likely cause would be a concurrent transaction doing something that\n> > caused a foreign key to get into a dead lock situation (the current\n> > implementation grabs overly strong locks).
In any case, we'll need more\n> > information about what else was going on probably.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Fri, 22 Nov 2002 17:30:35 +0000", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: Question about DEADLOCK" }, { "msg_contents": "I haven't seen this before, and I can't find it on techdocs. Can you \npost a link to it?\n\n-tfo\n\nIn article <3DDDE7D0.5040803@cybertec.at>,\n Hans-Jurgen Schonig <hs@cybertec.at> wrote:\n\n> I recommend reading Tom Lane's talk \"Concurrency Issues\" at OSCON2002.\n> This will answer all questions and some more.\n> \n> Best regards,\n> \n> Hans\n", "msg_date": "Thu, 09 Jan 2003 13:09:47 -0600", "msg_from": "Thomas O'Connell <tfo@monsterlabs.com>", "msg_from_op": false, "msg_subject": "Re: Question about DEADLOCK" }, { "msg_contents": "\"Thomas O'Connell\" <tfo@monsterlabs.com> writes:\n> Hans-Jurgen Schonig <hs@cybertec.at> wrote:\n>> I recommend reading Tom Lane's talk \"Concurrency Issues\" at OSCON2002.\n\n> I haven't seen this before, and I can't find it on techdocs. Can you \n> post a link to it?\n\nYou can find a PDF at O'Reilly's conferences archive,\n\nhttp://conferences.oreillynet.com/cs/os2002/view/e_sess/2681\n\nI recall asking Vince to put a copy on the Postgres website, but I don't\nthink he ever got around to it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Jan 2003 16:04:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question about DEADLOCK " } ]
[ { "msg_contents": "Hmmm, Then are there any new enhancements as far as backups are concerned between current 7.2.x to 7.3.x.\nLike can we do a tar when database is up and running or another feature.\n\nThanks a bunch in advance.\n\n-----Original Message-----\nFrom: Neil Conway [mailto:neilc@samurai.com]\nSent: Monday, October 07, 2002 1:48 PM\nTo: Sandeep Chadha\nCc: Tom Lane; pgsql-hackers@postgresql.org; pgsql-general\nSubject: Re: [HACKERS] Hot Backup\n\n\n\"Sandeep Chadha\" <sandeep@newnetco.com> writes:\n> Postgresql has been lacking this all along. I've installed postgres\n> 7.3b2 and still don't see any archive's flushed to any other\n> place. Please let me know how is hot backup procedure implemented in\n> current 7.3 beta(2) release.\n\nAFAIK no such hot backup feature has been implemented for 7.3 -- you\nappear to have been misinformed.\n\nThat said, I agree that would be a good feature to have.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "Mon, 7 Oct 2002 15:28:48 -0400", "msg_from": "\"Sandeep Chadha\" <sandeep@newnetco.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Hot Backup" } ]
[ { "msg_contents": "Request from sender:\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nBruce,\n\nI've been trying to send this message to the hacker's mail list, but so\nfar every attempt has failed. Perhaps you could forward this message\nthere for me? \n\nThanks a lot,\ndave\n\n----\n\nHackers,\n\nI'm trying to improve the performance of the PostGIS\n(http://postgis.refractions.net) GEOMETRY indexing and have a few\nquestions for everyone (or just tell me I'm doing it all wrong).\n\nBasically, the problem is that I'm currently using a constant number for\nthe restriction selectivity of the overlap (&&) operator (CREATE\nOPERATOR ... RESTRICT=). This has to be quite low to force the index to\nbe used (the default value didnt use the index very often), but now it\nuses it too often.\n\nI'm currently implementing a much smarter method (see the proposal\nbelow) - basically creating a 2D histogram and using that to make a\nbetter guess at how many rows will be returned.\n\nSo far, I've looked at how the RESTRICT function is called (in 7.2), but\nI've come up against a few problems that I'm not sure what the proper\nway to handle is:\n\n1. Getting the histogram to be generated and where to store it.\n Ideally, I'd be able to store the statistics in pg_statistic and\n 'VACUUM ANALAYSE' would have hooks into my datatype's code.\n Is this even possible? Can I put a custom histogram in\npg_statistics?\n Can I have VACUUM call my statistical analysis functions?\n\n Alternatively, would I have to store the 2d histogram in a separate\n [user] table and have the user explicitly call a function to update\nit?\n\n2. How do I access the histogram from my RESTRICT function?\n I guess this is asking how do I efficiently do a query from within\n a function?
The built-in RESTRICT functions seem to be able to\n access the system catalogs directly (\"SearchSysCache\"), but how does\n\n one access tables more directly (SPI?)? Is this going to be slow\n things down significantly?\n\n\nAm I doing the right thing?\n\nHere's a summary of the problem and the proposed solution,\n\ndave\n-------------\nSeveral people have reported index selection problems with PostGIS:\nsometimes it ops to use the spatial (GiST) index when its inappropriate.\n\nI suggest you also read\nhttp://www.postgresql.org/idocs/index.php?planner-stats.html\nsince it gives a good introduction to the statistics collected and used\nby\nthe query planner.\n\n\n\nPostgresql query planning\n-------------------------\n\nI'll start with a quick example, then move on to spatial indexing.\nLets start with a simple table:\n\nname location age\n-------------------------------\ndave james bay 31\npaul james bay 30\noscar central park 23\nchris downtown 22\n\nWith some indexes:\n\nCREATE index people_name_idx on people (name);\nCREATE index people_age_idx on people (age);\n\nWe then start to execute some queries:\n\n#1) SELECT * FROM people WHERE location = 'james bay';\n\nPostgresql's only possible query plan is to do a sequential scan of the\ntable and check to see if location ='james bay' - there isnt an index to\nconsult.
The sequential scan is simple - it loads each row from the\ntable\nand checks to see if location='james bay'.\n\nHere's what postgresql says about this query:\n\ndblasby=# explain analyse SELECT * FROM people WHERE location = 'james\nbay';\n\nNOTICE: QUERY PLAN:\n\nSeq Scan on people (cost=0.00..1.05 rows=2 width=25) (actual\ntime=0.02..0.03 rows=2 loops=1)\nTotal runtime: 0.07 msec\n\nNote that postgresql is using a \"Seq Scan\" and predicts 2 result rows.\nThe\nplanner is very accurate in this estimate because the statistics\ncollected\nexplicitly say that 'james bay' is the most common value in this column\n(cf.\npg_stats and pg_statistics tables).\n\n#2) SELECT * FROM people WHERE name ='dave';\n\nHere the query planner has two option - it can do a sequential scan or\nit\ncan use the people_name_idx. Here's what the query planner says (I\nexplicitly tell it to use the index in the 2nd run):\n\ndblasby=# explain analyse SELECT * FROM people WHERE name ='dave';\nNOTICE: QUERY PLAN:\n\nSeq Scan on people (cost=0.00..1.05 rows=1 width=25) (actual\ntime=0.02..0.03 rows=1 loops=1)\nTotal runtime: 0.07 msec\n\ndblasby=# set enable_seqscan=off;\ndblasby=# explain analyse SELECT * FROM people WHERE name ='dave';\nNOTICE: QUERY PLAN:\n\nIndex Scan using people_name_idx on people (cost=0.00..4.41 rows=1\nwidth=25) (actual time=33.72..33.73 rows=1 loops=1)\nTotal runtime: 33.82 msec\n\nIn this case the sequential scan is faster because it only has to load\none\npage from the disk and check 4 rows. The index scan will have to load\nin\nthe index, perform the scan, then load in the page with the data in it.\nOn a larger table, the index scan would probably be significantly\nfaster.\n\n#3)SELECT * FROM people WHERE age < 30;\n\nAgain, we have a chose of the index or sequential scan. The estimate of\nthe\nnumber of rows is calculated by the \"<\" operator for integers and the\ntable\nstatistics.
We'll talk about row # estimates later.\n\ndblasby=# explain analyse SELECT * FROM people WHERE age < 30;\nNOTICE: QUERY PLAN:\n\nSeq Scan on people (cost=0.00..1.05 rows=3 width=25) (actual\ntime=0.03..0.03 rows=2 loops=1)\nTotal runtime: 0.10 msec\n\nEXPLAIN\ndblasby=# set enable_seqscan=off;\nSET VARIABLE\ndblasby=# explain analyse SELECT * FROM people WHERE age < 30;\nNOTICE: QUERY PLAN:\n\nIndex Scan using people_age_idx on people (cost=0.00..3.04 rows=3\nwidth=25)\n(actual time=0.06..0.07 rows=2 loops=1)\nTotal runtime: 0.15 msec\n\n\n#4) SELECT * FROM people WHERE age < 30 AND name='dave';\n\nHere we have 3 plans - sequence scan and index scan (age) and index scan\n\n(name). The actual plan chosen will be determined by looking at all the\n\nplans and estimating which is fastest.\nIf the 'people' table had a lot of rows, the \"name='dave'\" clause would\nprobably be much more selective (return less results) than the \"age<30\"\nclause. The planner would probably use the name index.\n\nSpatial Indexing\n----------------\n\nFor spatial indexing, queries look like:\n SELECT * FROM mygeomtable WHERE the_geom && 'BOX3D(...)'::box3d;\n\nThe planner has two options - do a sequential scan of the table (and\ncheck\neach geometry against the BOX3D) or do an index scan using the GiST\nindex.\nIts further confused because TOASTed (large) geometries will be\nexpensive to\ntest in the sequential scan because the entire geometry much be read.\n\nThe planner makes it's choice by estimating how many rows the \" the_geom\n&&\n'BOX3D(...)'::box3d \" clause will return - if its just a few row, the\nindex\nwill be extremely efficient. If it returns a large number of rows, the\ncost\ninvolved in consulting the index AND then actually reading the data from\nthe\ntable will be high.\n\nWhen PostGIS was first created, it was very difficult to get postgresql\nto\nactually use the index because the estimate of the number of rows\nreturned\nwas always high.
The planner thought the sequential scan was faster.\n\nA later release of PostGIS tried to fix this by lowering this estimate.\nThis made postgresql use the spatial index almost always. Since the\nGiST\nindex has low overhead, this worked well even if the index was used when\nit\nwasnt the best plan. The estimate was very simple - it was a fixed % of\nthe\ntotal data.\n\nEverything was great until people started doing queries like:\n\nSELECT * FROM mygeomtable WHERE\n the_geom && 'BOX3D(...)'::box3d AND\n road_type =27;\n\nPostgresql now has to make a decision between the spatial index and the\nattribute index (on 'road_type'). Since we were doing a very simple\nestimate for the selectivity of the && operator, the spatial index is\noften\nused when another index (ie. road_type) would be more appropriate. When\nthe\nBOX3D is \"large\" the problem is applified because the estimate is very\nwrong.\nSecond, problems with joins and other queries occur because the estimate\nof\nthe number of rows returned was much lower than in actuality and\npostgresql\nchoose poor table joining strategies.\n\nNew Proposal\n------------\n\nThe current selectivity of the \"&&\" operator is fixed - it always\nreports (i\nthink 5%) the same number of rows no matter where the query BOX3D is or\nits\nsize. This is as simple as it gets.\n\nThe next simplest method is to look at the extents of the data, and the\nsize\nof the query window. If the query window is 1/4 the size of the data,\nthen\nwe can estimate that 1/4 of the rows will be returned. This is quite\nworkable, but has problem because spatial data is not uniformly\ndistributed.\n\nThe proposed method extends the simple method - we use another index to\nestimate the number of results for the \"&&\" operator. WHEW - indexing\nour\nindexes! In a nutshell, the plan is to use a modified quad tree (ie.\nhas\nfixed cell sizes and location) to store a 2d histogram of the spatial\ndata.\nThis is best explained in a few diagrams.
See the attached 2 diagrams.\n\nIn diagram one we see typical spatial data - roads in the province of\nBC. In places like Vancouver, Victoria, Kelowna, and Prince George\nthere are LOTS of roads. Elsewhere there are hardly any. There are a\ntotal of 200,000 rows in the roads table. The proposed method does a\n3-stage pre-process. (See figure 2)\n\n1. compute the extents of the data. Basically\n SELECT extent(the_geom) FROM mygeometrytable;\n2. make a fixed-size grid from the extents - something like a 15*15\n matrix\n3. scan through each row of the geometry table and determine which of\n the grid's cells overlap the bounding box of the geometry. Increment\n those grid cells by 1.\n\nThe result will be a 2d histogram of the spatial data. Cells with lots\nof geometries in them will have a high value, and cells with few will\nhave a low value. This is stored somewhere (ie. the geometry_columns\nmetatable, but it would be nice to have it sitting in shared memory\nsomewhere) - the histogram will be small enough to sit on one disk\npage.\n\nAn incoming query's (ie. the_geom && 'BOX3D(..)') selectivity is\ncalculated this way:\n1. load the appropriate 2d histogram (one disk page)\n2. find which cells the BOX3D overlaps and the % area of the cell that\n the BOX3D overlaps\n3. sum ( % overlap * <number of geometries in that cell> )\n\nThis should give a 'reasonable' estimate of the particular \"&&\"\nselectivity. The estimate will be poor if the data is extremely\nskewed.\n\nAn extension of this would be to make a proper quad tree (instead of\nfixed cells), then simplify it. Unfortunately this could be very\ndifficult and isn't proposed (yet).\n\nThoughts?\ndave", "msg_date": "Mon, 7 Oct 2002 16:22:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Statistical Analysis, Vacuum, and Selectivity Restriction (PostGIS)\n\t(fwd)" } ]
[ { "msg_contents": "I've been taking a look at improving the performance of the GEQO\ncode. Before looking at algorithmic improvements, I noticed some\n\"low-hanging fruit\": gprof revealed that newNode() was a hotspot. When\nI altered it to be an inline function, the overall time to plan a\n12-table table join (using the default GEQO settings) dropped by about\n9%. I haven't taken a look at how the patch effects the other places\nthat use newNode(), but it stands to reason that they'd see a\nperformance improvement as well (although probably less noticeable).\n\nHowever, I'm not sure if I used the correct syntax for inlining the\nfunction (since it was originally declared in a header and defined\nelsewhere, I couldn't just add 'inline'). The method I used (declaring\nthe function 'extern inline' and defining it in the header file) works\nfor me with GCC 3.2, but I'm open to suggestions for improvement.\n\nBTW, after applying the patch, the GEQO profile looks like:\n\n % cumulative self self total \n time seconds seconds calls s/call s/call name\n 16.07 0.18 0.18 4618735 0.00 0.00 compare_path_costs\n 9.82 0.29 0.11 2149666 0.00 0.00 AllocSetAlloc\n 8.04 0.38 0.09 396333 0.00 0.00 add_path\n 4.46 0.43 0.05 2149661 0.00 0.00 MemoryContextAlloc\n 3.57 0.47 0.04 1150953 0.00 0.00 compare_pathkeys\n\n(Yes, gprof on my machine is still a bit fubared...)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC", "msg_date": "07 Oct 2002 17:15:34 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "inline newNode()" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> I've been taking a look at improving the performance of the GEQO\n> code. Before looking at algorithmic improvements, I noticed some\n> \"low-hanging fruit\": gprof revealed that newNode() was a hotspot. 
When\n> I altered it to be an inline function, the overall time to plan a\n> 12-table table join (using the default GEQO settings) dropped by about\n> 9%.\n\nHow much did you bloat the code? There are an awful lot of calls to\nnewNode(), so even though it's not all that large, I'd think the\nmultiplier would be nasty.\n\nIt might be better to offer two versions, standard out-of-line and a\nmacro-or-inline called something like newNodeInline(), and change just\nthe hottest call sites to use the inline. You could probably get most\nof the speedup from inlining at just a few call sites. (I've been\nintending to do something similar with the TransactionId comparison\nfunctions.)\n\n> However, I'm not sure if I used the correct syntax for inlining the\n> function (since it was originally declared in a header and defined\n> elsewhere, I couldn't just add 'inline'). The method I used (declaring\n> the function 'extern inline' and defining it in the header file) works\n> for me with GCC 3.2, but I'm open to suggestions for improvement.\n\nThis isn't portable at all, AFAIK :-(. Unfortunately I can't think\nof a portable way to do it with a macro, either.\n\nHowever, if you're willing to go the route involving changing call\nsites, then you could change\n\tfoo = newNode(typename);\nto\n\tnewNodeMacro(foo, typename);\nwhereupon it becomes pretty easy to make a suitable macro.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 17:43:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> How much did you bloat the code? There are an awful lot of calls to\n> newNode(), so even though it's not all that large, I'd think the\n> multiplier would be nasty.\n\nThe patch increases the executable from 12844452 to 13005244 bytes,\nwhen compiled with '-pg -g -O2' and without being stripped.\n\n> This isn't portable at all, AFAIK :-(. 
Unfortunately I can't think\n> of a portable way to do it with a macro, either.\n\nWell, one alternative might be to provide 2 definitions of the\nfunction -- one an extern inline in the header file, and one using the\ncurrent method (in a separate file, non-inline). Then wrap the header\nfile in an #ifdef __GNUC__ block, and the non-inlined version in\n#ifndef __GNUC__. The downside is that it means maintaining two\nversions of the same function -- but given that newNode() is pretty\ntrivial, that might be acceptable.\n\nBTW, the GCC docs on inline functions are here:\n\n http://gcc.gnu.org/onlinedocs/gcc-3.2/gcc/Inline.html#Inline\n\nAccording to that page, using 'static inline' instead of 'extern\ninline' is recommended for future compatibility with C99, so that's\nwhat we should probably use (in the __GNUC__ version).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "07 Oct 2002 18:29:03 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> How much did you bloat the code? There are an awful lot of calls to\n>> newNode(), so even though it's not all that large, I'd think the\n>> multiplier would be nasty.\n\n> The patch increases the executable from 12844452 to 13005244 bytes,\n> when compiled with '-pg -g -O2' and without being stripped.\n\nOkay, not as bad as I feared, but still kinda high.\n\nI believe that most of the bloat comes from the MemSet macro; there's\njust not much else in newNode(). Now, the reason MemSet expands to\na fair amount of code is its if-then-else case to decide whether to\ncall memset() or do an inline loop. 
I've looked at the assembler code\nfor it on a couple of machines, and the loop proper is only about a\nthird of the code that gets generated.\n\nIdeally, we'd like to eliminate the if-test for inlined newNode calls.\nThat would buy back a lot of the bloat and speed things up still\nfurther.\n\nNow the tests on _val == 0 and _len <= MEMSET_LOOP_LIMIT and _len being\na multiple of 4 are no problem, since _val and _len are compile-time\nconstants; these will be optimized away. What is not optimized away\n(on the compilers I've looked at) is the check for _start being\nint-aligned.\n\nA brute-force approach is to say \"we know _start is word-aligned because\nwe just got it from palloc, which guarantees MAXALIGNment\". We could\nmake a variant version of MemSet that omits the alignment check, and use\nit here and anywhere else we're sure it's safe.\n\nA nicer approach would be to somehow make use of the datatype of the\nfirst argument to MemSet. If we could determine at compile time that\nit's supposed to point at a type with at least int alignment, then\nit'd be possible for the compiler to optimize away this check in a\nreasonably safe fashion. I'm not sure if there's a portable way to\ndo this, though. There's no \"alignof()\" construct in C :-(.\nAny ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 21:08:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n[ snip interesting analysis ]\n> A nicer approach would be to somehow make use of the datatype of the\n> first argument to MemSet. If we could determine at compile time\n> that it's supposed to point at a type with at least int alignment,\n> then it'd be possible for the compiler to optimize away this check\n> in a reasonably safe fashion. I'm not sure if there's a portable\n> way to do this, though. There's no \"alignof()\" construct in C\n> :-(. 
Any ideas?\n\nWell, we could make use of (yet another) GCC-ism: the __alignof__\nkeyword, which is described here:\n\n http://gcc.gnu.org/onlinedocs/gcc-3.2/gcc/Alignment.html#Alignment\n\nI don't like making the code GCC-specific any more than anyone else\ndoes, but given that the code-bloat is specific to the inline version\nof newNode (which in the scheme I described earlier would be\nGCC-only) -- so introducing a GCC-specific fix for a GCC-specific\nproblem isn't too bad, IMHO.\n\nOr we could just use your other suggestion: define a variant of\nMemSet() and use it when we know it's safe. Not sure which is the\nbetter solution: any comments?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "08 Oct 2002 00:58:34 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> I don't like making the code GCC-specific any more than anyone else\n> does, but given that the code-bloat is specific to the inline version\n> of newNode (which in the scheme I described earlier would be\n> GCC-only) -- so introducing a GCC-specific fix for a GCC-specific\n> problem isn't too bad, IMHO.\n\n> Or we could just use your other suggestion: define a variant of\n> MemSet() and use it when we know it's safe. Not sure which is the\n> better solution: any comments?\n\nIf we're going with a GCC-only approach to inlining newNode then it\nseems like a tossup to me too. 
Any other thoughts out there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 11:53:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <neilc@samurai.com> writes:\n> > I don't like making the code GCC-specific any more than anyone else\n> > does, but given that the code-bloat is specific to the inline version\n> > of newNode (which in the scheme I described earlier would be\n> > GCC-only) -- so introducing a GCC-specific fix for a GCC-specific\n> > problem isn't too bad, IMHO.\n> \n> > Or we could just use your other suggestion: define a variant of\n> > MemSet() and use it when we know it's safe. Not sure which is the\n> > better solution: any comments?\n> \n> If we're going with a GCC-only approach to inlining newNode then it\n> seems like a tossup to me too. Any other thoughts out there?\n\nSeems newNode can easily be made into a macro. I can do the coding, and\nif you tell me that newNode will always be int-aligned, I can make an\nassume-aligned version of MemSet.\n\nIs that what people want to try?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 8 Oct 2002 12:04:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems newNode can easily be made into a macro.\n\nIf you can see how to do that, show us ... 
that would eliminate the\nneed for a GCC-only inline function, which would sway my vote away\nfrom depending on __alignof__ for the other thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 12:08:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Tom Lane writes:\n\n> A brute-force approach is to say \"we know _start is word-aligned because\n> we just got it from palloc, which guarantees MAXALIGNment\". We could\n> make a variant version of MemSet that omits the alignment check, and use\n> it here and anywhere else we're sure it's safe.\n\nOr make a version of palloc that zeroes the memory.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 8 Oct 2002 23:41:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Neil Conway writes:\n\n> Well, one alternative might be to provide 2 definitions of the\n> function -- one an extern inline in the header file, and one using the\n> current method (in a separate file, non-inline). Then wrap the header\n> file in an #ifdef __GNUC__ block, and the non-inlined version in\n> #ifndef __GNUC__.\n\nExternal inline functions aren't even portable across different versions\nof GCC. It's best not to go there at all.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 8 Oct 2002 23:41:41 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Or make a version of palloc that zeroes the memory.\n\nWell, we might lose a bit of speed that way -- with the current\nMemSet() macro, most of the tests that MemSet() does can be eliminated\nat compile-time, since MemSet() is usually called with some constant\nparameters. 
If we moved this code inside palloc(), it wouldn't be\npossible to do this constant propagation.\n\nNot sure it would make a big performance difference, though -- and I\nthink palloc() is a clean solution. In fact, it's probably worth\nhaving anyway: a lot of the call sites of palloc() immediately zero\nthe memory after they allocate it...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "08 Oct 2002 18:59:44 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > A brute-force approach is to say \"we know _start is word-aligned because\n> > we just got it from palloc, which guarantees MAXALIGNment\". We could\n> > make a variant version of MemSet that omits the alignment check, and use\n> > it here and anywhere else we're sure it's safe.\n> \n> Or make a version of palloc that zeroes the memory.\n\nOK, here is a version of newNode that is a macro. However, there are\nproblems:\n\nFirst, I can't get the code to assign the tag (dereference the pointer)\n_and_ return the pointer, in a macro. I can convert it to take a third\npointer argument, and it is only called <10 times, so that is easy, but\nit is used by makeNode, which is called ~1000 times, so that seems like\na bad idea. My solution was to declare a global variable in the header\nfile to be used by the macro. Because the macro isn't called\nrecursively, that should be fine.\n\nNow, the other problem is MemSet. MemSet isn't designed to be called\ninside a macro. In fact, the 'if' tests are easy to do in a macro with\n\"? :\", but the \"while\" loop seems to be impossible in a macro. I tried\nseeing if 'goto' would work in a macro, but of course it doesn't, and\neven if it did, I doubt it would be portable. The patch merely calls\nmemset() rather than MemSet.\n\nNot sure if this is a win or not. 
It makes makeNode a macro, but\nremoves the use of the MemSet macro in favor of memset().\n\nDoes anyone have additional suggestions? The only thing I can suggest\nis to make a clear-memory version of palloc because palloc always calls\nMemoryContextAlloc() so I can put it in there. How does that sound? \n\nHowever, now that I look at it, MemoryContextAlloc() could be made a\nmacro easier than newNode() and that would save function call in more\ncases than newNode. In fact, the comment at the top of\nMemoryContextAlloc() says:\n\n * This could be turned into a macro, but we'd have to import\n * nodes/memnodes.h into postgres.h which seems a bad idea.\n\nIdeas?\n\nThe regression tests do pass with this patch, so functionally it works\nfine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/nodes/nodes.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/nodes/nodes.c,v\nretrieving revision 1.15\ndiff -c -c -r1.15 nodes.c\n*** src/backend/nodes/nodes.c\t20 Jun 2002 20:29:29 -0000\t1.15\n--- src/backend/nodes/nodes.c\t8 Oct 2002 23:52:21 -0000\n***************\n*** 28,42 ****\n *\t macro makeNode. eg. to create a Resdom node, use makeNode(Resdom)\n *\n */\n! Node *\n! newNode(Size size, NodeTag tag)\n! {\n! \tNode\t *newNode;\n \n- \tAssert(size >= sizeof(Node));\t\t/* need the tag, at least */\n- \n- \tnewNode = (Node *) palloc(size);\n- \tMemSet((char *) newNode, 0, size);\n- \tnewNode->type = tag;\n- \treturn newNode;\n- }\n--- 28,32 ----\n *\t macro makeNode. eg. to create a Resdom node, use makeNode(Resdom)\n *\n */\n! 
Node *newNodeMacroHolder;\n \nIndex: src/include/nodes/nodes.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/nodes/nodes.h,v\nretrieving revision 1.118\ndiff -c -c -r1.118 nodes.h\n*** src/include/nodes/nodes.h\t31 Aug 2002 22:10:47 -0000\t1.118\n--- src/include/nodes/nodes.h\t8 Oct 2002 23:52:25 -0000\n***************\n*** 261,266 ****\n--- 261,285 ----\n \n #define nodeTag(nodeptr)\t\t(((Node*)(nodeptr))->type)\n \n+ /*\n+ *\tThere is no way to deferernce the palloc'ed pointer to assign the\n+ *\ttag, and return the pointer itself, so we need a holder variable.\n+ *\tFortunately, this function isn't recursive so we just define\n+ *\ta global variable for this purpose.\n+ */\n+ extern Node *newNodeMacroHolder;\n+ \n+ #define newNode(size, tag) \\\n+ ( \\\n+ \tAssertMacro((size) >= sizeof(Node)),\t\t/* need the tag, at least */ \\\n+ \\\n+ \tnewNodeMacroHolder = (Node *) palloc(size), \\\n+ \tmemset((char *)newNodeMacroHolder, 0, (size)), \\\n+ \tnewNodeMacroHolder->type = (tag), \\\n+ \tnewNodeMacroHolder \\\n+ )\n+ \n+ \n #define makeNode(_type_)\t\t((_type_ *) newNode(sizeof(_type_),T_##_type_))\n #define NodeSetTag(nodeptr,t)\t(((Node*)(nodeptr))->type = (t))\n \n***************\n*** 281,291 ****\n *\t\t\t\t\t extern declarations follow\n * ----------------------------------------------------------------\n */\n- \n- /*\n- * nodes/nodes.c\n- */\n- extern Node *newNode(Size size, NodeTag tag);\n \n /*\n * nodes/{outfuncs.c,print.c}\n--- 300,305 ----", "msg_date": "Tue, 8 Oct 2002 20:07:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, here is a version of newNode that is a macro.\n\nIf you use memset() instead of MemSet(), I'm afraid you're going to blow\noff most of the performance gain this was supposed to achieve.\n\n> Does anyone have 
additional suggestions? The only thing I can suggest\n> is to make a clear-memory version of palloc because palloc always calls\n> MemoryContextAlloc() so I can put it in there. How does that sound? \n\nI do not think palloc should auto-zero memory. Hard to explain why,\nbut it just feels like a bad decision. One point is that the MemSet\nhas to be inlined or it cannot compile-out the tests on _len. palloc\ncan't treat the length as a compile-time constant.\n\n> The regression tests do pass with this patch, so functionally it works\n> fine.\n\nSpeed is the issue here, not functionality...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Oct 2002 00:12:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, here is a version of newNode that is a macro.\n> \n> If you use memset() instead of MemSet(), I'm afraid you're going to blow\n> off most of the performance gain this was supposed to achieve.\n\nYep.\n\n> > Does anyone have additional suggestions? The only thing I can suggest\n> > is to make a clear-memory version of palloc because palloc always calls\n> > MemoryContextAlloc() so I can put it in there. How does that sound? \n> \n> I do not think palloc should auto-zero memory. Hard to explain why,\n> but it just feels like a bad decision. One point is that the MemSet\n> has to be inlined or it cannot compile-out the tests on _len. palloc\n> can't treat the length as a compile-time constant.\n\nRight, palloc shouldn't. I was thinking of having another version of\npalloc that _does_ clear out memory, and calling that from a newNode()\nmacro. 
We already know palloc is going to call MemoryContextAlloc, so\nwe could have a pallocC() that calls a new MemoryContextAllocC() that\nwould call the underlying memory allocation function, then do the loop\nlike MemSet to clear it.\n\nIt would allow newNode to be a macro and prevent the bloat of having\nMemSet appear everywhere newNode appears; it would all be in the new\nfunction MemoryContextAllocC().\n\n> > The regression tests do pass with this patch, so functionally it works\n> > fine.\n> \n> Speed is the issue here, not functionality...\n\nI was just proving it works, that's all.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 00:21:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Right, palloc shouldn't. I was thinking of having another version of\n> palloc that _does_ clear out memory, and calling that from a newNode()\n> macro. 
We already know palloc is going to call MemoryContextAlloc, so\n> we could have a pallocC() that calls a new MemoryContextAllocC() that\n> would call the underlying memory allocation function, then do the loop\n> like MemSet to clear it.\n\nBut if the MemSet is inside the called function then it cannot reduce\nthe if-tests to a compile-time decision to invoke the word-zeroing loop.\nWe want the MemSet to be expanded at the newNode call site, where the\nsize of the allocated memory is a compile-time constant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Oct 2002 00:28:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Right, palloc shouldn't. I was thinking of having another version of\n> > palloc that _does_ clear out memory, and calling that from a newNode()\n> > macro. We already know palloc is going to call MemoryContextAlloc, so\n> > we could have a pallocC() that calls a new MemoryContextAllocC() that\n> > would call the underlying memory allocation function, then do the loop\n> > like MemSet to clear it.\n> \n> But if the MemSet is inside the called function then it cannot reduce\n> the if-tests to a compile-time decision to invoke the word-zeroing loop.\n> We want the MemSet to be expanded at the newNode call site, where the\n> size of the allocated memory is a compile-time constant.\n\nI can easily do the tests in the MemSet macro, but I can't do a loop in\na macro that has to return a value; I need while(). Though a loop in a\nnew fuction will not be as fast as a MemSet macro, I think it will be\nbetter than what we have now with newNode only because newNode will be a\nmacro and not a function anymore, i.e. the MemSet will happen in the\nfunction called by pallocC and not in newNode anymore, and there will be\nzero code bloat. 
I wish I saw another way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 00:35:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Right, palloc shouldn't. I was thinking of having another version of\n> > > palloc that _does_ clear out memory, and calling that from a newNode()\n> > > macro. We already know palloc is going to call MemoryContextAlloc, so\n> > > we could have a pallocC() that calls a new MemoryContextAllocC() that\n> > > would call the underlying memory allocation function, then do the loop\n> > > like MemSet to clear it.\n> > \n> > But if the MemSet is inside the called function then it cannot reduce\n> > the if-tests to a compile-time decision to invoke the word-zeroing loop.\n> > We want the MemSet to be expanded at the newNode call site, where the\n> > size of the allocated memory is a compile-time constant.\n> \n> I can easily do the tests in the MemSet macro, but I can't do a loop in\n> a macro that has to return a value; I need while(). Though a loop in a\n> new fuction will not be as fast as a MemSet macro, I think it will be\n> better than what we have now with newNode only because newNode will be a\n> macro and not a function anymore, i.e. the MemSet will happen in the\n> function called by pallocC and not in newNode anymore, and there will be\n> zero code bloat. I wish I saw another way.\n\nOK, here's a patch for testing. It needs cleanup because the final\nversion would remove the nodes/nodes.c file. 
The net effect of the\npatch is to make newNode a macro with little code bloat.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/nodes/nodes.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/nodes/nodes.c,v\nretrieving revision 1.15\ndiff -c -c -r1.15 nodes.c\n*** src/backend/nodes/nodes.c\t20 Jun 2002 20:29:29 -0000\t1.15\n--- src/backend/nodes/nodes.c\t9 Oct 2002 05:17:13 -0000\n***************\n*** 28,42 ****\n *\t macro makeNode. eg. to create a Resdom node, use makeNode(Resdom)\n *\n */\n! Node *\n! newNode(Size size, NodeTag tag)\n! {\n! \tNode\t *newNode;\n \n- \tAssert(size >= sizeof(Node));\t\t/* need the tag, at least */\n- \n- \tnewNode = (Node *) palloc(size);\n- \tMemSet((char *) newNode, 0, size);\n- \tnewNode->type = tag;\n- \treturn newNode;\n- }\n--- 28,32 ----\n *\t macro makeNode. eg. to create a Resdom node, use makeNode(Resdom)\n *\n */\n! 
Node *newNodeMacroHolder;\n \nIndex: src/backend/utils/mmgr/mcxt.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/mmgr/mcxt.c,v\nretrieving revision 1.32\ndiff -c -c -r1.32 mcxt.c\n*** src/backend/utils/mmgr/mcxt.c\t12 Aug 2002 00:36:12 -0000\t1.32\n--- src/backend/utils/mmgr/mcxt.c\t9 Oct 2002 05:17:17 -0000\n***************\n*** 453,458 ****\n--- 453,481 ----\n }\n \n /*\n+ * MemoryContextAllocC\n+ *\t\tLike MemoryContextAlloc, but clears allocated memory\n+ *\n+ *\tWe could just call MemoryContextAlloc then clear the memory, but this\n+ *\tfunction is called too many times, so we have a separate version.\n+ */\n+ void *\n+ MemoryContextAllocC(MemoryContext context, Size size)\n+ {\n+ \tvoid *ret;\n+ \n+ \tAssertArg(MemoryContextIsValid(context));\n+ \n+ \tif (!AllocSizeIsValid(size))\n+ \t\telog(ERROR, \"MemoryContextAlloc: invalid request size %lu\",\n+ \t\t\t (unsigned long) size);\n+ \n+ \tret = (*context->methods->alloc) (context, size);\n+ \tMemSet(ret, 0, size);\n+ \treturn ret;\n+ }\n+ \n+ /*\n * pfree\n *\t\tRelease an allocated chunk.\n */\nIndex: src/include/nodes/nodes.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/nodes/nodes.h,v\nretrieving revision 1.118\ndiff -c -c -r1.118 nodes.h\n*** src/include/nodes/nodes.h\t31 Aug 2002 22:10:47 -0000\t1.118\n--- src/include/nodes/nodes.h\t9 Oct 2002 05:17:20 -0000\n***************\n*** 261,266 ****\n--- 261,284 ----\n \n #define nodeTag(nodeptr)\t\t(((Node*)(nodeptr))->type)\n \n+ /*\n+ *\tThere is no way to dereference the palloc'ed pointer to assign the\n+ *\ttag, and return the pointer itself, so we need a holder variable.\n+ *\tFortunately, this function isn't recursive so we just define\n+ *\ta global variable for this purpose.\n+ */\n+ extern Node *newNodeMacroHolder;\n+ \n+ #define newNode(size, tag) \\\n+ ( \\\n+ \tAssertMacro((size) >= sizeof(Node)),\t\t/* need 
the tag, at least */ \\\n+ \\\n+ \tnewNodeMacroHolder = (Node *) pallocC(size), \\\n+ \tnewNodeMacroHolder->type = (tag), \\\n+ \tnewNodeMacroHolder \\\n+ )\n+ \n+ \n #define makeNode(_type_)\t\t((_type_ *) newNode(sizeof(_type_),T_##_type_))\n #define NodeSetTag(nodeptr,t)\t(((Node*)(nodeptr))->type = (t))\n \n***************\n*** 281,291 ****\n *\t\t\t\t\t extern declarations follow\n * ----------------------------------------------------------------\n */\n- \n- /*\n- * nodes/nodes.c\n- */\n- extern Node *newNode(Size size, NodeTag tag);\n \n /*\n * nodes/{outfuncs.c,print.c}\n--- 299,304 ----\nIndex: src/include/utils/palloc.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/utils/palloc.h,v\nretrieving revision 1.19\ndiff -c -c -r1.19 palloc.h\n*** src/include/utils/palloc.h\t20 Jun 2002 20:29:53 -0000\t1.19\n--- src/include/utils/palloc.h\t9 Oct 2002 05:17:21 -0000\n***************\n*** 46,53 ****\n--- 46,56 ----\n * Fundamental memory-allocation operations (more are in utils/memutils.h)\n */\n extern void *MemoryContextAlloc(MemoryContext context, Size size);\n+ extern void *MemoryContextAllocC(MemoryContext context, Size size);\n \n #define palloc(sz)\tMemoryContextAlloc(CurrentMemoryContext, (sz))\n+ \n+ #define pallocC(sz)\tMemoryContextAllocC(CurrentMemoryContext, (sz))\n \n extern void pfree(void *pointer);", "msg_date": "Wed, 9 Oct 2002 01:21:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> ... I wish I saw another way.\n\nI liked Neil's proposal better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Oct 2002 01:24:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... 
I wish I saw another way.\n> \n> I liked Neil's proposal better.\n\nWhat is it you liked, specifically?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 01:24:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... I wish I saw another way.\n> \n> I liked Neil's proposal better.\n\nI think Neil is going to test my patch. I think he will find it yields\nidentical performance to his patch without the bloat, and it will work\non all platforms. I don't think the MemSet() constant calls are that\nbig a performance win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 01:35:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, here's a patch for testing. It needs cleanup because the final\n> version would remove the nodes/nodes.c file. The net effect of the\n> patch is to make newNode a macro with little code bloat.\n\nOk, I think this is the better version. The performance seems to be\nabout the same as my original patch, and the code bloat is a lot less:\n12857787 with the patch versus 12845168 without it. The\nimplementation (with a global variable) is pretty ugly, but I don't\nreally see a way around that...\n\nOne minor quibble with the patch though: MemoryContextAllocC() and\npallocC() aren't very good function names, IMHO. 
Perhaps\nMemoryContextAllocZero() and palloc0() would be better?\n\nThis isn't specific to your patch, but it occurs to me that we could\nsave a few bits of code bloat if we removed the '_len' variable\ndeclaration from the MemSet() macro -- it isn't needed, AFAICT. That\nwould mean we we'd evaluate the 'len' argument multiple times, so I'm\nnot sure if that's a win overall...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "09 Oct 2002 14:01:56 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, here's a patch for testing. It needs cleanup because the final\n> > version would remove the nodes/nodes.c file. The net effect of the\n> > patch is to make newNode a macro with little code bloat.\n> \n> Ok, I think this is the better version. The performance seems to be\n> about the same as my original patch, and the code bloat is a lot less:\n> 12857787 with the patch versus 12845168 without it. The\n> implemementation (with a global variable) is pretty ugly, but I don't\n> really see a way around that...\n> \n> One minor quibble with the patch though: MemoryContextAllocC() and\n> pallocC() aren't very good function names, IMHO. Perhaps\n> MemoryContextAllocZero() and palloc0() would be better?\n\nSure, I like those names.\n\n> This isn't specific to your patch, but it occurs to me that we could\n> save a few bits of code bloat if we removed the '_len' variable\n> declaration from the MemSet() macro -- it isn't needed, AFAICT. That\n> would mean we we'd evaluate the 'len' argument multiple times, so I'm\n> not sure if that's a win overall...\n\nI think that was actually the goal, to not have them evaluated multiple\ntimes, and perhaps bloat if the length itself was a macro.\n\nNew patch attached with your suggested renaming. 
Now that I look at the\ncode, there are about 55 places where we call palloc then right after it\nMemSet(0), so we could merge those to use palloc0 and reduce existing\nMemSet code bloat some more.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/nodes/nodes.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/nodes/nodes.c,v\nretrieving revision 1.15\ndiff -c -c -r1.15 nodes.c\n*** src/backend/nodes/nodes.c\t20 Jun 2002 20:29:29 -0000\t1.15\n--- src/backend/nodes/nodes.c\t9 Oct 2002 05:17:13 -0000\n***************\n*** 28,42 ****\n *\t macro makeNode. eg. to create a Resdom node, use makeNode(Resdom)\n *\n */\n! Node *\n! newNode(Size size, NodeTag tag)\n! {\n! \tNode\t *newNode;\n \n- \tAssert(size >= sizeof(Node));\t\t/* need the tag, at least */\n- \n- \tnewNode = (Node *) palloc(size);\n- \tMemSet((char *) newNode, 0, size);\n- \tnewNode->type = tag;\n- \treturn newNode;\n- }\n--- 28,32 ----\n *\t macro makeNode. eg. to create a Resdom node, use makeNode(Resdom)\n *\n */\n! 
Node *newNodeMacroHolder;\n \nIndex: src/backend/utils/mmgr/mcxt.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/mmgr/mcxt.c,v\nretrieving revision 1.32\ndiff -c -c -r1.32 mcxt.c\n*** src/backend/utils/mmgr/mcxt.c\t12 Aug 2002 00:36:12 -0000\t1.32\n--- src/backend/utils/mmgr/mcxt.c\t9 Oct 2002 05:17:17 -0000\n***************\n*** 453,458 ****\n--- 453,481 ----\n }\n \n /*\n+ * MemoryContextAllocZero\n+ *\t\tLike MemoryContextAlloc, but clears allocated memory\n+ *\n+ *\tWe could just call MemoryContextAlloc then clear the memory, but this\n+ *\tfunction is called too many times, so we have a separate version.\n+ */\n+ void *\n+ MemoryContextAllocZero(MemoryContext context, Size size)\n+ {\n+ \tvoid *ret;\n+ \n+ \tAssertArg(MemoryContextIsValid(context));\n+ \n+ \tif (!AllocSizeIsValid(size))\n+ \t\telog(ERROR, \"MemoryContextAllocZero: invalid request size %lu\",\n+ \t\t\t (unsigned long) size);\n+ \n+ \tret = (*context->methods->alloc) (context, size);\n+ \tMemSet(ret, 0, size);\n+ \treturn ret;\n+ }\n+ \n+ /*\n * pfree\n *\t\tRelease an allocated chunk.\n */\nIndex: src/include/nodes/nodes.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/nodes/nodes.h,v\nretrieving revision 1.118\ndiff -c -c -r1.118 nodes.h\n*** src/include/nodes/nodes.h\t31 Aug 2002 22:10:47 -0000\t1.118\n--- src/include/nodes/nodes.h\t9 Oct 2002 05:17:20 -0000\n***************\n*** 261,266 ****\n--- 261,284 ----\n \n #define nodeTag(nodeptr)\t\t(((Node*)(nodeptr))->type)\n \n+ /*\n+ *\tThere is no way to dereference the palloc'ed pointer to assign the\n+ *\ttag, and return the pointer itself, so we need a holder variable.\n+ *\tFortunately, this function isn't recursive so we just define\n+ *\ta global variable for this purpose.\n+ */\n+ extern Node *newNodeMacroHolder;\n+ \n+ #define newNode(size, tag) \\\n+ ( \\\n+ \tAssertMacro((size) >= 
sizeof(Node)),\t\t/* need the tag, at least */ \\\n+ \\\n+ \tnewNodeMacroHolder = (Node *) palloc0(size), \\\n+ \tnewNodeMacroHolder->type = (tag), \\\n+ \tnewNodeMacroHolder \\\n+ )\n+ \n+ \n #define makeNode(_type_)\t\t((_type_ *) newNode(sizeof(_type_),T_##_type_))\n #define NodeSetTag(nodeptr,t)\t(((Node*)(nodeptr))->type = (t))\n \n***************\n*** 281,291 ****\n *\t\t\t\t\t extern declarations follow\n * ----------------------------------------------------------------\n */\n- \n- /*\n- * nodes/nodes.c\n- */\n- extern Node *newNode(Size size, NodeTag tag);\n \n /*\n * nodes/{outfuncs.c,print.c}\n--- 299,304 ----\nIndex: src/include/utils/palloc.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/utils/palloc.h,v\nretrieving revision 1.19\ndiff -c -c -r1.19 palloc.h\n*** src/include/utils/palloc.h\t20 Jun 2002 20:29:53 -0000\t1.19\n--- src/include/utils/palloc.h\t9 Oct 2002 05:17:21 -0000\n***************\n*** 46,53 ****\n--- 46,56 ----\n * Fundamental memory-allocation operations (more are in utils/memutils.h)\n */\n extern void *MemoryContextAlloc(MemoryContext context, Size size);\n+ extern void *MemoryContextAllocZero(MemoryContext context, Size size);\n \n #define palloc(sz)\tMemoryContextAlloc(CurrentMemoryContext, (sz))\n+ \n+ #define palloc0(sz)\tMemoryContextAllocZero(CurrentMemoryContext, (sz))\n \n extern void pfree(void *pointer);", "msg_date": "Wed, 9 Oct 2002 14:32:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Tom Lane writes:\n\n> If you use memset() instead of MemSet(), I'm afraid you're going to blow\n> off most of the performance gain this was supposed to achieve.\n\nCan someone explain to me why memset() would ever be better than MemSet()?\n\nAlso, shouldn't GCC (at least 3.0 or later) inline memset() automatically?\n\nWhat's the result of using -finline (or your 
favorite compiler's\ninlining flag)?\n\nAnd has someone wondered why the GEQO code needs so many new nodes?\nPerhaps a more lightweight data representation for internal use could be\nappropriate?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 9 Oct 2002 23:13:47 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > If you use memset() instead of MemSet(), I'm afraid you're going to blow\n> > off most of the performance gain this was supposed to achieve.\n> \n> Can someone explain to me why memset() would ever be better than MemSet()?\n\nI am surprised MemSet was ever faster than memset(). Remember, MemSet\nwas done only to prevent excessive function call overhead to memset(). \nI never anticipated that a simple while() loop would be faster than the\nlibc version, especially ones that have assembler memset versions, but,\nfor example on Sparc, this is true. \n\nI looked at the Sparc assembler code and I can't see why MemSet would be\nfaster. Perhaps someone can send over the Sparc assembler output of\nMemSet on their platform and I can compare it to the Solaris assembler I\nsee here. In fact, they can probably disassemble a memset() routine there\nand see exactly what I see.\n\n> Also, shouldn't GCC (at least 3.0 or later) inline memset() automatically?\n\nNot sure, but yes, that may be true. 
I think it requires a high\noptimizer level, perhaps higher than our default.\n\n> What's the result of using -finline (or your favorite compiler's\n> inlining flag)?\n\nYes, that would be it.\n\n> And has someone wondered why the GEQO code needs so many new nodes?\n> Perhaps a more lightweight data representation for internal use could be\n> appropriate?\n\nI assume the GEQO results he is seeing is only for a tests, and that the\nmacro version of newNode will help in all cases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 17:15:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Can someone explain to me why memset() would ever be better than MemSet()?\n\nmemset() should *always* be faster than any C-coded implementation\nthereof. Any competent assembly-language writer can beat C-level\nlocutions, or at least equal them, if he's willing to expend the\neffort.\n\nI've frankly been astonished at the number of platforms where it\nseems memset() has not been coded with an appropriate degree of\ntenseness. 
The fact that we've found it useful to invent MemSet()\nis a pretty damning indictment of the competence of modern C-library\nauthors.\n\nOr am I just stuck in the obsolete notion that vendors should provide\nsome amount of platform-specific tuning, and not a generic library?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Oct 2002 00:00:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode() " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Can someone explain to me why memset() would ever be better than MemSet()?\n> \n> memset() should *always* be faster than any C-coded implementation\n> thereof. Any competent assembly-language writer can beat C-level\n> locutions, or at least equal them, if he's willing to expend the\n> effort.\n\nYou would think so. I am surprised too.\n\n> I've frankly been astonished at the number of platforms where it\n> seems memset() has not been coded with an appropriate degree of\n> tenseness. The fact that we've found it useful to invent MemSet()\n> is a pretty damning indictment of the competence of modern C-library\n> authors.\n\nRemember, MemSet was invented only to prevent function call overhead,\nand on my BSD/OS system, len >= 256 is faster with the libc memset(). 
\nIt is a fluke we found out that MemSet is faster than memset for all\nlengths on some platforms.\n\n> Or am I just stuck in the obsolete notion that vendors should provide\n> some amount of platform-specific tuning, and not a generic library?\n\nWhat really surprised me is that MemSet won on Sparc, where they have an\nassembler language version that looks very similar to the MemSet loop.\n\nIn fact, I was the one who encouraged BSD/OS to write assembler language\nversions of many of their str* and mem* functions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 00:56:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Remember, MemSet was invented only to prevent function call overhead,\n> and on my BSD/OS system, len >= 256 is faster with the libc\n> memset(). \n\nYes, I remember finding that when testing MemSet() versus memset() for\nvarious values of MEMSET_LOOP_LIMIT earlier.\n\n> What really surprised me is that MemSet won on Sparc, where they have an\n> assembler language version that looks very similar to the MemSet\n> loop.\n\nWell, I'd assume any C library / compiler of half-decent quality on\nany platform would provide assembly optimized versions of common\nstdlib functions like memset().\n\nWhile playing around with memset() on my machine (P4 running Linux,\nglibc 2.2.5, GCC 3.2.1pre3), I found the following interesting\nresult. 
I used this simple benchmark (the same one I posted for the\nearlier MemSet() thread on -hackers):\n\n#include <string.h>\n#include \"postgres.h\"\n\n#undef MEMSET_LOOP_LIMIT\n#define MEMSET_LOOP_LIMIT BUFFER_SIZE\n\nint\nmain(void)\n{\n\tchar buffer[BUFFER_SIZE];\n\tlong long i;\n\n\tfor (i = 0; i < 99000000; i++)\n\t{\n\t\tmemset(buffer, 0, sizeof(buffer));\n\t}\n\n\treturn 0;\n}\n\nCompiled with '-DBUFFER_SIZE=256 -O2', I get the following results in\nseconds:\n\nMemSet(): ~9.6\nmemset(): ~19.5\n__builtin_memset(): ~10.00\n\nSo it seems there is a reasonably optimized version of memset()\nprovided by glibc/GCC (not sure which :-) ), it's just a matter of\npersuading the compiler to let us use it. It's still depressing that\nit doesn't beat MemSet(), but perhaps __builtin_memset() has better\naverage-case performance over a wider spectrum of memory size?[1]\n\nBTW, regarding the newNode() stuff: so is it agreed that Bruce's patch\nis a performance win without too high of a code bloat / uglification\npenalty? If so, is it 7.3 or 7.4 material?\n\nCheers,\n\nNeil\n\n[1] Not that I really buy that -- for one thing, if the length is\nconstant, as it is in this case, the compiler can substitute an\noptimized version of the function for the appropriate memory size. I'm\nhaving a little difficulty explaining GCC/glibc's poor performance...\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "10 Oct 2002 02:51:16 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "On Wed, Oct 09, 2002 at 12:12:12AM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, here is a version of newNode that is a macro.\n> \n> If you use memset() instead of MemSet(), I'm afraid you're going to blow\n> off most of the performance gain this was supposed to achieve.\n> \n> > Does anyone have additional suggestions? 
The only thing I can suggest\n> is to make a clear-memory version of palloc because palloc always calls\n> MemoryContextAlloc() so I can put it in there. How does that sound? \n> \n> I do not think palloc should auto-zero memory. Hard to explain why,\n> but it just feels like a bad decision. One point is that the MemSet\n\n Agree. The memory-management routine knows nothing about real memory\n usage - maybe zeroed memory is wanted by the palloc caller, maybe\n not. The palloc() caller knows that better than palloc() itself. If I\n remember correctly, the same discussion happened long ago on the\n linux-kernel list, and the result was not to zero memory.\n\n\n Karel\n\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 10 Oct 2002 09:22:16 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "On 10 Oct 2002, Neil Conway wrote:\n\n> Well, I'd assume any C library / compiler of half-decent quality on\n> any platform would provide assembly optimized versions of common\n> stdlib functions like memset().\n> \n> While playing around with memset() on my machine (P4 running Linux,\n> glibc 2.2.5, GCC 3.2.1pre3), I found the following interesting\n> result. I used this simple benchmark (the same one I posted for the\n> earlier MemSet() thread on -hackers):\n> \n\n[snip]\n\n> Compiled with '-DBUFFER_SIZE=256 -O2', I get the following results in\n> seconds:\n> \n> MemSet(): ~9.6\n> memset(): ~19.5\n> __builtin_memset(): ~10.00\n\nI ran the same code. I do not understand the results you got. Here are\nmine, on an AMD Duron with GCC 3.2 and glibc-2.2.5. 
Results are:\n\nMemSet(): 14.758 sec\nmemset(): 11.597 sec\n__builtin_memset(): 9.000 sec\n\nWho else wants to test?\n\nGavin\n\n", "msg_date": "Fri, 11 Oct 2002 00:16:39 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Neil Conway wrote:\n> BTW, regarding the newNode() stuff: so is it agreed that Bruce's patch\n> is a performance win without too high of a code bloat / uglification\n> penalty? If so, is it 7.3 or 7.4 material?\n\nNot sure. It is a small patch but we normally don't do performance\nfixes during beta unless they are major.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 10:59:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> On 10 Oct 2002, Neil Conway wrote:\n> > Compiled with '-DBUFFER_SIZE=256 -O2', I get the following results in\n> > seconds:\n> > \n> > MemSet(): ~9.6\n> > memset(): ~19.5\n> > __builtin_memset(): ~10.00\n> \n> I ran the same code. I do not understand the results you got.\n\nInteresting -- it may be the case that the optimizations performed by\nGCC are architecture specific.\n\nI searched for some threads about this on the GCC list. 
One\ninteresting result was this:\n\n http://gcc.gnu.org/ml/gcc/2002-04/msg01114.html\n\nOne possible explanation for the different performance you saw is\nexplained by Jan Hubicka:\n\n http://gcc.gnu.org/ml/gcc/2002-04/msg01146.html\n\nOne thing that confuses me is that GCC decides *not* to use\n__builtin_memset() for some reason, even though it appears to be\nsuperior to normal memset() on both of our systems.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "10 Oct 2002 13:28:47 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Neil Conway writes:\n\n> MemSet(): ~9.6\n> memset(): ~19.5\n> __builtin_memset(): ~10.00\n\nI did my own tests with this code and the results vary wildly between\nplatforms. (I do not list __builtin_memset() because the results were\ninvariably equal to memset().)\n\nPlatform\tbuffer\t memset()\tMemSet()\n\nfreebsd\t\t32\t 5.3\t\t 4.9\nfreebsd\t\t256\t 23.3\t\t24.2\nmingw\t\t32\t 0.5\t\t 2.0\nmingw\t\t256\t 6.6\t\t10.5\nunixware\t256\t 15.2\t\t10.3\nunixware\t1024\t 29.2\t\t34.1\ncygwin\t\t256\t 6.7\t\t15.8\n\n\"freebsd\" is i386-unknown-freebsd4.7 with GCC 2.95.4.\n\"mingw\" is i686-pc-mingw32 with GCC 3.2.\n\"unixware\" is i586-sco-sysv5uw7.1.3 with vendor compiler version 4.1.\n\"cygwin\" is i686-pc-cygwin with GCC 2.95.3.\n\nGCC was run as 'gcc -O2 -Wall'. (I also tried 'gcc -O3 -finline' but\nthat gave only minimally better results.) 
The SCO compiler was run as\n'cc -O -Kinline -v'.\n\nMake of those results what you will, but the current cross-over point of\n1024 seems very wrong.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 10 Oct 2002 19:38:05 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian writes:\n\n> I assume the GEQO results he is seeing is only for a tests, and that the\n> macro version of newNode will help in all cases.\n\nWell are we just assuming here or are we fixing actual problems based on\nreal analyses?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n", "msg_date": "Fri, 11 Oct 2002 18:36:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I assume the GEQO results he is seeing is only for a tests, and that the\n> > macro version of newNode will help in all cases.\n> \n> Well are we just assuming here or are we fixing actual problems based on\n> real analyses?\n\nWhat is your point? He is testing GEQO, but I assume other places would\nsee a speedup from inlining newNode.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 11 Oct 2002 13:24:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian writes:\n\n> Peter Eisentraut wrote:\n> > Bruce Momjian writes:\n> >\n> > > I assume the GEQO results he is seeing is only for a tests, and that the\n> > > macro version of newNode will help in all cases.\n> >\n> > Well are we just assuming here or are we fixing actual problems based on\n> > real analyses?\n>\n> What is your point? He is testing GEQO, but I assume other places would\n> see a speedup from inlining newNode.\n\nThat's exactly my point: You're assuming.\n\nOf course everything would be faster if we inline it, but IMHO that's the\ncompiler's job.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 12 Oct 2002 01:07:04 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Peter Eisentraut wrote:\n> > > Bruce Momjian writes:\n> > >\n> > > > I assume the GEQO results he is seeing is only for a tests, and that the\n> > > > macro version of newNode will help in all cases.\n> > >\n> > > Well are we just assuming here or are we fixing actual problems based on\n> > > real analyses?\n> >\n> > What is your point? He is testing GEQO, but I assume other places would\n> > see a speedup from inlining newNode.\n> \n> That's exactly my point: You're assuming.\n\nWe know GEQO is faster, and the 8% is enough to justify the inlining. \nWhether it speeds up other operations is assumed but we don't really\ncare; the GEQO numbers are enough. The code bloat is minimal.\n\n> Of course everything would be faster if we inline it, but IMHO that's the\n> compiler's job.\n\nYes, but we have to help it in certain cases, heap_getattr() being the\nmost dramatic. 
Are you suggesting we remove all the macros in our\nheader file and replace them with function calls?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 11 Oct 2002 19:35:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Bruce Momjian wrote:\n> Neil Conway wrote:\n> > BTW, regarding the newNode() stuff: so is it agreed that Bruce's patch\n> > is a performance win without too high of a code bloat / uglification\n> > penalty? If so, is it 7.3 or 7.4 material?\n> \n> Not sure. It is a small patch but we normally don't do performance\n> fixes during beta unless they are major.\n\nThis performance improvement isn't related to a specific user problem\nreport, so I am going to hold it for 7.4. At that time we can also\nresearch whether palloc0 should be used in other cases, like palloc\nfollowed by MemSet.\n\nThe only issue there is that palloc0 can't optimize away constants used\nin the macro so it may be better _not_ to make that change. Neil, do\nyou have any performance numbers comparing MemSet with constant vs.\nvariable parameters, e.g:\n\n\tMemSet(ptr, 0, 256)\n\nvs.\n\n\ti = 256;\n\tMemSet(ptr, 0, i)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 11 Oct 2002 20:06:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" }, { "msg_contents": "Peter Eisentraut wrote:\n> Neil Conway writes:\n> \n> > MemSet(): ~9.6\n> > memset(): ~19.5\n> > __builtin_memset(): ~10.00\n> \n> I did my own tests with this code and the results vary wildly between\n> platforms. (I do not list __builtin_memset() because the results were\n> invariably equal to memset().)\n> \n> Platform\tbuffer\t memset()\tMemSet()\n> \n> freebsd\t\t32\t 5.3\t\t 4.9\n> freebsd\t\t256\t 23.3\t\t24.2\n> mingw\t\t32\t 0.5\t\t 2.0\n> mingw\t\t256\t 6.6\t\t10.5\n> unixware\t256\t 15.2\t\t10.3\n> unixware\t1024\t 29.2\t\t34.1\n> cygwin\t\t256\t 6.7\t\t15.8\n> \n> \"freebsd\" is i386-unknown-freebsd4.7 with GCC 2.95.4.\n> \"mingw\" is i686-pc-mingw32 with GCC 3.2.\n> \"unixware\" is i586-sco-sysv5uw7.1.3 with vendor compiler version 4.1.\n> \"cygwin\" is i686-pc-cygwin with GCC 2.95.3.\n> \n> GCC was run as 'gcc -O2 -Wall'. (I also tried 'gcc -O3 -finline' but\n> that gave only minimally better results.) The SCO compiler was run as\n> 'cc -O -Kinine -v'.\n> \n> Make of those results what you will, but the current cross-over point of\n> 1024 seems very wrong.\n\nNo question 1024 looks wrong on a lot of platforms. On Sparc, we were\nseeing a crossover even higher, and on many like BSD/OS, the crossover\nis much lower. Without some platform-specific test, I don't know how we\nare going to set this correctly. Should we develop such a test? Do we\nhave enough MemSet usage in the 256-4k range that people would see a\ndifference between different MEMSET_LOOP_LIMIT values?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 11 Oct 2002 20:09:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inline newNode()" } ]
[ { "msg_contents": "I noticed that the new EXECUTE statement does not call SetQuerySnapshot,\nwhich seems like a bad thing. The omission is masked by the defensive\ncode in CopyQuerySnapshot, which will automatically do SetQuerySnapshot\nif it hasn't been done yet in the current transaction. However, this\ndoesn't provide the right behavior in read-committed mode: if we are\ninside a transaction and not the first query, I'd think EXECUTE should\ntake a new snapshot; but it won't.\n\nComparable code can be found in COPY OUT, for which tcop/utility.c\ndoes SetQuerySnapshot() before calling the command-specific routine.\n\nQuestions for the list:\n\n1. Where is the cleanest place to call SetQuerySnapshot() for utility\nstatements that need it? Should we follow the lead of the existing\nCOPY code, and add the call to the ExecuteStmt case in utility.c?\nOr should we move the calls into the command-specific routines (DoCopy\nand ExecuteQuery)? Or perhaps it should be done in postgres.c, which\nhas this responsibility for non-utility statements?\n\n2. Would it be a good idea to change CopyQuerySnapshot to elog(ERROR)\ninstead of silently creating a snapshot when none has been made?\nI think I was the one who put in its auto-create-snapshot behavior,\nbut I'm now feeling like that was a mistake. It hides omissions that\nwe should want to find.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 20:39:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Where to call SetQuerySnapshot" }, { "msg_contents": "Tom Lane wrote:\n> 1. Where is the cleanest place to call SetQuerySnapshot() for utility\n> statements that need it? Should we follow the lead of the existing\n> COPY code, and add the call to the ExecuteStmt case in utility.c?\n> Or should we move the calls into the command-specific routines (DoCopy\n> and ExecuteQuery)? 
Or perhaps it should be done in postgres.c, which\n> has this responsibility for non-utility statements?\n\nWithout looking at it too closely, I would think postgres.c would be best, \nunless there is a legit reason for a utility statement to *not* want \nSetQuerySnapshot called. If that's the case, then utility.c seems to make sense.\n\n> 2. Would it be a good idea to change CopyQuerySnapshot to elog(ERROR)\n> instead of silently creating a snapshot when none has been made?\n> I think I was the one who put in its auto-create-snapshot behavior,\n> but I'm now feeling like that was a mistake. It hides omissions that\n> we should want to find.\n\nIs an assert appropriate?\n\nJoe\n\n", "msg_date": "Mon, 07 Oct 2002 17:53:44 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Where to call SetQuerySnapshot" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Tom Lane wrote:\n>> 1. Where is the cleanest place to call SetQuerySnapshot() for utility\n>> statements that need it?\n\n> Without looking at it too closely, I would think postgres.c would be best, \n> unless there is a legit reason for a utility statement to *not* want \n> SetQuerySnapshot called.\n\nActually, there are a number of past threads concerned with whether we\nare doing SetQuerySnapshot in the right places --- eg, should it occur\nbetween statements of a plpgsql function? Right now it doesn't, but\nmaybe it should. In any case I have a note that doing SetQuerySnapshot\nfor COPY OUT in utility.c is a bad idea, because it makes COPY OUT act\ndifferently from any other statement, when used inside a function: it\n*will* change the query snapshot, where nothing else does. So I had\nbeen thinking of pulling it out to postgres.c anyway. I will do that.\n\n>> 2. 
Would it be a good idea to change CopyQuerySnapshot to elog(ERROR)\n>> instead of silently creating a snapshot when none has been made?\n\n> Is an assert appropriate?\n\nWorks for me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 12:01:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Where to call SetQuerySnapshot " }, { "msg_contents": "I said:\n> ... So I had\n> been thinking of pulling it out to postgres.c anyway.  I will do that.\n\nI did this and ended up with a rather long list of statement types that\nmight need a snapshot:\n\n    elog(DEBUG2, \"ProcessUtility\");\n\n    /* set snapshot if utility stmt needs one */\n    /* XXX maybe cleaner to list those that shouldn't set one?
*/\n> if (IsA(utilityStmt, AlterTableStmt) ||\n> IsA(utilityStmt, ClusterStmt) ||\n> IsA(utilityStmt, CopyStmt) ||\n> IsA(utilityStmt, ExecuteStmt) ||\n> IsA(utilityStmt, ExplainStmt) ||\n> IsA(utilityStmt, IndexStmt) ||\n> IsA(utilityStmt, PrepareStmt) ||\n> IsA(utilityStmt, ReindexStmt))\n> SetQuerySnapshot();\n> \n> (Anything that can call the planner or might create entries in\n> functional indexes had better set a snapshot, thus stuff like\n> ReindexStmt has the issue.)\n> \n> I wonder if we should turn this around, and set a snapshot for all\n> utility statements that can't show cause why they don't need one.\n> Offhand, TransactionStmt, FetchStmt, and VariableSet/Show/Reset\n> might be the only ones that need be excluded. Comments?\n\nIt looks like an exclusion list would be easier to read and maintain.\n\nJoe\n\n\n", "msg_date": "Tue, 08 Oct 2002 11:33:03 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Where to call SetQuerySnapshot" } ]
[ { "msg_contents": "\nI sent this yesterday, but it seems not to have made it to the list... \n\n\nI have a couple of comments orthogonal to the present discussion. \n\n1) It would be fairly easy to write log records over a network to a \ndedicated process on another system. If the other system has an \nuninterruptible power supply, this is about as safe as writing to disk. \n\nThis would get rid of the need for any fsync on the log at all. There \nwould be extra code needed on restart to get the end of the log from the \nother system, but it doesn't seem like much. \n\nI think this would be an attractive option to a lot of people. Most \npeople have at least two systems, and the requirements of the logging \nsystem would be minimal. \n\n\n2) It is also possible, with kernel modifications, to have special \nlogging partitions where log records are written where the head is. \nTzi-cker Chueh and Lan Huang at Stony Brook \n(http://www.cs.sunysb.edu/~lanhuang/research.htm) have written this, \nalthough I don't think they have released any code. \n\n(A similar idea called WADS is mentioned in Gray & Reuter's book.) \n\nIf the people at Red Hat are interested in having some added value for \nusing PostgreSQL on Red Hat Linux, this would be one idea. It could \nalso be used to speed up ext3 and other journaling file systems. \n\n\n\n\n", "msg_date": "Mon, 7 Oct 2002 21:30:05 -0400", "msg_from": "\"Ken Hirsch\" <kahirsch@bellsouth.net>", "msg_from_op": true, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" } ]
[ { "msg_contents": "\n> ISTM aio_write only improves the picture if there's some magic in-kernel\n> processing that makes this same kind of judgment as to when to issue the\n> \"ganged\" write for real, and is able to do it on time because it's in\n> the kernel.  I haven't heard anything to make me think that that feature\n> actually exists.  AFAIK the kernel isn't much more enlightened about\n> physical head positions than we are.\n\nCan the magic be, that kaio directly writes from user space memory to the \ndisk ? Since in your case all transactions A-E want the same buffer written,\nthe memory (not it's content) will also be the same. This would automatically \nwrite the latest possible version of our WAL buffer to disk. \n\nThe problem I can see offhand is how the kaio system can tell which transaction\ncan be safely notified of the write, or whether the programmer is actually responsible\nfor not changing the buffer until notified of completion ?\n\nAndreas\n", "msg_date": "Tue, 8 Oct 2002 11:15:41 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "On Tue, 2002-10-08 at 04:15, Zeugswetter Andreas SB SD wrote:\n> Can the magic be, that kaio directly writes from user space memory to the \n> disk ? Since in your case all transactions A-E want the same buffer written,\n> the memory (not it's content) will also be the same. This would automatically \n> write the latest possible version of our WAL buffer to disk. \n> \n\n*Some* implementations allow for zero-copy aio.  That is a savings.  On\nheavily used systems, it can be a large savings.\n\n> The problem I can see offhand is how the kaio system can tell which transaction\n> can be safely notified of the write, or whether the programmer is actually responsible\n> for not changing the buffer until notified of completion ?\n\nThat's correct.
The programmer can not change the buffer contents until\nnotification has completed for that outstanding aio operation. To do\notherwise results in undefined behavior. Since some systems do allow\nfor zero-copy aio operations, requiring the buffers not be modified,\nonce queued, make a lot of sense. Of course, even on systems that don't\nsupport zero-copy, changing the buffered data prior to write completion\njust seems like a bad idea to me.\n\nHere's a quote from SGI's aio_write man page:\nIf the buffer pointed to by aiocbp->aio_buf or the control block pointed\nto by aiocbp changes or becomes an illegal address prior to asynchronous\nI/O completion then the behavior is undefined. Simultaneous synchronous\noperations using the same aiocbp produce undefined results.\n\nAnd on SunOS we have:\n The aiocbp argument points to an aiocb structure. If the\n buffer pointed to by aiocbp->aio_buf or the control block\n pointed to by aiocbp becomes an illegal address prior to\n asynchronous I/O completion, then the behavior is undefined.\nand\n For any system action that changes the process memory space\n while an asynchronous I/O is outstanding to the address\n range being changed, the result of that action is undefined.\n\n\nGreg", "msg_date": "08 Oct 2002 07:34:52 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Can the magic be, that kaio directly writes from user space memory to the \n> disk ?\n\nThis makes more assumptions about the disk drive's behavior than I think\nare justified...\n\n> Since in your case all transactions A-E want the same buffer written,\n> the memory (not it's content) will also be the same.\n\nBut no, it won't: the successive writes will ask to write different\nsnapshots of the same buffer.\n\n> The problem I can see offhand is how the kaio system can tell which\n> 
transaction can be safely notified of the write,\n\nYup, exactly. Whose snapshot made it down to (stable) disk storage?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 09:45:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "> > Since in your case all transactions A-E want the same buffer written,\n> > the memory (not it's content) will also be the same.\n>\n> But no, it won't: the successive writes will ask to write different\n> snapshots of the same buffer.\n\nSuccessive writes would write different NON-OVERLAPPING sections of the\nsame log buffer. It wouldn't make sense to send three separate copies of\nthe entire block. That could indeed cause problems.\n\nIf a separate log writing process was doing all the writing, it could\npre-gang the writes. However, I'm not sure this is necessary. I'll test the\nsimpler way first.\n\n> > The problem I can see offhand is how the kaio system can tell which\n> > transaction can be safely notified of the write,\n>\n> Yup, exactly. Whose snapshot made it down to (stable) disk storage?\n\nIf you do as above, it can inform the transactions when the blocks get\nwritten to disk since there are no inconsistent writes. 
If transactions A,\nB and C had commits in block 1023, and the aio system writes that block to\nthe disk, it can notify all three that their transaction write is complete\nwhen that block (or partial block) is written to disk.\n\nIt transaction C's write didn't make it into the buffer, I've got to assume\nthe aio system or the disk cache logic can handle realizing that it didn't\nqueue that write and therefore not inform transaction C of a completion.\n\n- Curtis\n\n", "msg_date": "Tue, 8 Oct 2002 10:15:30 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> Successive writes would write different NON-OVERLAPPING sections of the\n> same log buffer. It wouldn't make sense to send three separate copies of\n> the entire block. That could indeed cause problems.\n\nSo you're going to undo the code's present property that all writes are\nblock-sized? Aren't you worried about incurring page-in reads because\nthe kernel can't know that we don't care about data beyond what we've\nwritten so far in the block?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 10:41:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "> \"Curtis Faith\" <curtis@galtair.com> writes:\n> > Successive writes would write different NON-OVERLAPPING sections of the\n> > same log buffer. It wouldn't make sense to send three separate\n> copies of\n> > the entire block. That could indeed cause problems.\n>\n> So you're going to undo the code's present property that all writes are\n> block-sized? 
Aren't you worried about incurring page-in reads because\n> the kernel can't know that we don't care about data beyond what we've\n> written so far in the block?\n\nYes, I'll try undoing the current behavior.\n\nI'm not really worried about doing page-in reads because the disks internal\nbuffers should contain most of the blocks surrounding the end of the log\nfile. If the successive partial writes exceed a block (which they will in\nheavy use) then most of the time this won't be a problem anyway since the\ndisk will gang the full blocks before writing.\n\nIf the inserts are not coming fast enough to fill the log then the disks\ncache should contain the data from the last time that block (or the block\nbefore) was written. Disks have become pretty good at this sort of thing\nsince writing sequentially is a very common scenario.\n\nIt may not work, but one doesn't make significant progress without trying\nthings that might not work.\n\nIf it doesn't work, then I'll make certain that commit log records always\nfill the buffer they are written too, with variable length commit records\nand something to identify the size of the padding used to fill the rest of\nthe block.\n\n", "msg_date": "Tue, 8 Oct 2002 10:57:15 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> I'm not really worried about doing page-in reads because the disks internal\n> buffers should contain most of the blocks surrounding the end of the log\n> file. If the successive partial writes exceed a block (which they will in\n> heavy use) then most of the time this won't be a problem anyway since the\n> disk will gang the full blocks before writing.\n\nYou seem to be willing to make quite a large number of assumptions about\nwhat the disk hardware will do or not do. 
I trust you're going to test\nyour results on a wide range of hardware before claiming they have any\ngeneral validity ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 11:08:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes " }, { "msg_contents": "> \"Curtis Faith\" <curtis@galtair.com> writes:\n> > I'm not really worried about doing page-in reads because the\n> disks internal\n> > buffers should contain most of the blocks surrounding the end\n> of the log\n> > file. If the successive partial writes exceed a block (which\n> they will in\n> > heavy use) then most of the time this won't be a problem\n> anyway since the\n> > disk will gang the full blocks before writing.\n>\n> You seem to be willing to make quite a large number of assumptions about\n> what the disk hardware will do or not do. I trust you're going to test\n> your results on a wide range of hardware before claiming they have any\n> general validity ...\n\nTom, I'm actually an empiricist. I trust nothing that I haven't tested or\nread the code for myself. I've found too many instances of bugs or poor\nimplementations in the O/S to believe without testing.\n\nOn the other hand, one has to make some assumptions in order to devise\nuseful tests.\n\nI'm not necessarily expecting that I'll come up with something that will\nhelp everyone all the time. I'm just hoping that I can come up with\nsomething that will help those using modern hardware, most of the time.\n\nEven if it works, this will probably become one of those flags that need to\nbe tested as part of the performance analysis for any given system. 
Or\nperhaps ideally, I'll come up with a LogDiskTester that simulates log\noutput and determines the best settings to use for a given disk and O/S,\noptimized for size or speed, heavy inserts, etc.\n\n", "msg_date": "Tue, 8 Oct 2002 11:17:28 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Analysis of ganged WAL writes " } ]
[ { "msg_contents": "Hi,\n\nMy friend just notice me with some thing that make using dump files harder \nto use.\n\nSo here we go:\nfirst we make some chaos :)\n\\c template1 postgres\ncreate user foo with password 'bar' createdb nocreateuser;\n\\c template1 foo\ncreate database foodb;\n\\c template1 postgres\nalter user foo with nocreatedb;\n\nthen we... pg_dumpall -s (nowaday for explain we need only schemas :)\n\nand we try to applay this pg_dumpall file.... and suprise :)\n\nwe have something like that:\n\n\\connect template1\n.....\nCREATE USER \"foo\" WITH SYSID 32 PASSWORD 'bar' NOCREATEDB NOCREATEUSER;\n.....\n\\connect template1 \"foo\"\nCREATE DATABASE \"foodb\" WITH TEMPLATE = template0 ENCODING = 'LATIN2';\n\\connect \"foodb\" \"foo\"\n\nI think evryone see why it dont work.\n\nI think that rebuild of dumping procedure to detect such a problems and \nremake dump file like that\n\n\\connect template1\n.....\nCREATE USER \"foo\" WITH SYSID 32 PASSWORD 'bar' CREATEDB NOCREATEUSER;\n.....\n\\connect template1 \"foo\"\nCREATE DATABASE \"foodb\" WITH TEMPLATE = template0 ENCODING = 'LATIN2';\n\\connect template1 \"postgres\"\nALTER USER \"foo\" WITH NOCREATEDB;\n\\connect \"foodb\" \"foo\"\n\nwill solve this problem.\n\nregards\nRobert Partyka\nbobson@wdg.pl\nwww.WDG.pl \n\n", "msg_date": "Tue, 08 Oct 2002 11:57:59 +0200", "msg_from": "Robert Partyka <bobson@ares.fils.us.edu.pl>", "msg_from_op": true, "msg_subject": "pg_dump file question" }, { "msg_contents": "Robert Partyka <bobson@ares.fils.us.edu.pl> writes:\n> \\connect template1\n> .....\n> CREATE USER \"foo\" WITH SYSID 32 PASSWORD 'bar' NOCREATEDB NOCREATEUSER;\n> .....\n> \\connect template1 \"foo\"\n> CREATE DATABASE \"foodb\" WITH TEMPLATE = template0 ENCODING = 'LATIN2';\n> \\connect \"foodb\" \"foo\"\n> \n> I think evryone see why it dont work.\n\nYes -- it's a known problem with 7.2 that pg_dump can create\nself-inconsistent dumps.
In 7.3, this specific case has been fixed:\ndatabases are now created using CREATE DATABASE ... WITH\nOWNER. However, I'm not sure if there are other, similar problems that\nhaven't been fixed yet.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "08 Oct 2002 12:49:10 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump file question" } ]
[ { "msg_contents": "Check out this link, if you need something to laugh at:\nhttp://www.postgresql.org/idocs/index.php?1'\n\nKeeping in mind, that there are bunch of overflows in PostgreSQL(really?),\nit is\nvery dangerous i guess. Right?\n\n\n\n________________________________________________________________________\nThis letter has been delivered unencrypted. We'd like to remind you that\nthe full protection of e-mail correspondence is provided by S-mail\nencryption mechanisms if only both, Sender and Recipient use S-mail.\nRegister at S-mail.com: http://www.s-mail.com\n", "msg_date": "Tue, 08 Oct 2002 09:58:34 +0000", "msg_from": "Sir Mordred The Traitor <mordred@s-mail.com>", "msg_from_op": true, "msg_subject": "Little note to php coders" }, { "msg_contents": "On Tue, 8 Oct 2002, Sir Mordred The Traitor wrote:\n\n> Check out this link, if you need something to laugh at:\n> http://www.postgresql.org/idocs/index.php?1'\n> \n> Keeping in mind, that there are bunch of overflows in PostgreSQL(really?),\n> it is\n> very dangerous i guess. Right?\n\nI'm not sure what list this really fits onto so I've left as hackers.\n\nThe old argument about data validation and whose job it is. However, is there a\nreason why all CGI parameters aren't scanned and rejected if they contain\nany punctuation. I was going to say if they contain anything non alphanumeric\nbut then I'm not sure about internationalisation and that test.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Tue, 8 Oct 2002 11:11:17 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Little note to php coders" }, { "msg_contents": "On Tue, 8 Oct 2002, Sir Mordred The Traitor wrote:\n\n> Check out this link, if you need something to laugh at:\n> http://www.postgresql.org/idocs/index.php?1'\n>\n> Keeping in mind, that there are bunch of overflows in PostgreSQL(really?),\n> it is\n> very dangerous i guess.
Right?\n\nDon't see what you're complaining about. I get teh 7.2.1 admin guide.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 8 Oct 2002 06:34:39 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Little note to php coders" }, { "msg_contents": "This is one of the reasons I usually recommend running with magic quotes\non, it provides a bit of insurance for those spots where your data\nvalidation is not up to snuff.\n\nRobert Treat\n\nOn Tue, 2002-10-08 at 06:11, Nigel J. Andrews wrote:\n> On Tue, 8 Oct 2002, Sir Mordred The Traitor wrote:\n> \n> > Check out this link, if you need something to laugh at:\n> > http://www.postgresql.org/idocs/index.php?1'\n> > \n> > Keeping in mind, that there are bunch of overflows in PostgreSQL(really?),\n> > it is\n> > very dangerous i guess. Right?\n> \n> I'm not sure what list this really fits onto so I've left as hackers.\n> \n> The old argument about data validation and whose job it is. However, is there a\n> reason why all CGI parameters aren't scanned and rejected if they contain\n> any punctuation. I was going to say if they contain anything non alphanumeric\n> but then I'm not sure about internationalisation and that test.\n> \n> \n> -- \n> Nigel J. 
Andrews\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n\n", "msg_date": "08 Oct 2002 09:27:40 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Little note to php coders" } ]
[ { "msg_contents": "What I understood from the Administrator's guide is:\n\n - Yes, PostgreSQL provides hot backup: it's the pg_dump utility. It'h hot because users can still be connected and work whil pg_dump is running ( though they will be slowed down). ( See Administrator's guide ch9)\n\n - No, PostgreSQL does NOT provide a way to restore a database up to the last commited transaction, with a reapply of the WAL, as Oracle or SQL Server ( and others, I guess) do. That would be a VERY good feature. See Administrator's guide ch11\n\nSo, with Pg, if you backup your db every night with pg_dump, and your server crashes during the day, you will loose up to one day of work.\n\nAm I true?\n\nErwan\n\n\n-------------------------------------------------------------------------------\nErwan DUROSELLE // SEAFRANCE DSI \nResponsable Bases de Données // Databases Manager\nTel: +33 (0)1 55 31 59 70 // Fax: +33 (0)1 55 31 85 28\nemail: eduroselle@seafrance.fr\n-------------------------------------------------------------------------------\n\n\n>>> Neil Conway <neilc@samurai.com> 07/10/2002 19:48 >>>\n\"Sandeep Chadha\" <sandeep@newnetco.com> writes:\n> Postgresql has been lacking this all along. I've installed postgres\n> 7.3b2 and still don't see any archive's flushed to any other\n> place. 
Please let me know how is hot backup procedure implemented in\n> current 7.3 beta(2) release.\n\nAFAIK no such hot backup feature has been implemented for 7.3 -- you\nappear to have been misinformed.\n\nThat said, I agree that would be a good feature to have.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n", "msg_date": "Tue, 08 Oct 2002 12:35:22 +0200", "msg_from": "\"Erwan DUROSELLE\" <EDuroselle@seafrance.fr>", "msg_from_op": true, "msg_subject": "=?ISO-8859-1?Q?R=E9p.=20:=20Re:=20=20[HACKERS]=20Hot=20?=\n\t=?ISO-8859-1?Q?Backup?=" }, { "msg_contents": "On Tue, Oct 08, 2002 at 12:35:22PM +0200, Erwan DUROSELLE wrote:\n> What I understood from the Administrator's guide is:\n> \n> - Yes, PostgreSQL provides hot backup: it's the pg_dump utility. It'h\n> hot because users can still be connected and work whil pg_dump is running\n> ( though they will be slowed down). ( See Administrator's guide ch9)\n\nCorrect.\n\n> - No, PostgreSQL does NOT provide a way to restore a database up to the\n> last commited transaction, with a reapply of the WAL, as Oracle or SQL\n> Server ( and others, I guess) do. That would be a VERY good feature. See\n> Administrator's guide ch11\n\nUmm, I thought the whole point of WAL was that if the database crashed, the\nWAL would provide the info to replay to the last committed transaction.\n\nhttp://www.postgresql.org/idocs/index.php?wal.html\n\n... 
because we know that in the event of a crash we will be able to recover\nthe database using the log: ...\n\nThese docs seem to corrobrate this.\n\n> So, with Pg, if you backup your db every night with pg_dump, and your\n> server crashes during the day, you will loose up to one day of work.\n\nI've never lost any data with postgres, even if it's crashed, even without\nWAL.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Tue, 8 Oct 2002 20:45:13 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: =?iso-8859-1?B?Uulw?=" }, { "msg_contents": "Martijn!\n\nTuesday, October 08, 2002, 3:45:13 PM, you wrote:\n\n>> - No, PostgreSQL does NOT provide a way to restore a database up to the\n>> last commited transaction, with a reapply of the WAL, as Oracle or SQL\n>> Server ( and others, I guess) do. That would be a VERY good feature. See\n>> Administrator's guide ch11\n\nMvO> Umm, I thought the whole point of WAL was that if the database crashed, the\nMvO> WAL would provide the info to replay to the last committed transaction.\n\nMvO> http://www.postgresql.org/idocs/index.php?wal.html\n\nMvO> ... 
because we know that in the event of a crash we will be able to recover\nMvO> the database using the log: ...\n\nMvO> These docs seem to corrobrate this.\n\n>> So, with Pg, if you backup your db every night with pg_dump, and your\n>> server crashes during the day, you will loose up to one day of work.\n\nMvO> I've never lost any data with postgres, even if it's crashed, even without\nMvO> WAL.\n\nSuppose you made your nightly backup, and then after a day of work\nthe building where your server is located disappears in flames..\n\nThat's it - you lost one day of work (of course, if your dumps where\nstored outside that building otherwise you lost everything)..\n\nThere is a need in \"incremental\" backup, which backs up only those\ntransactions which has been fulfilled after last \"full dump\" or last\n\"incremental dump\". These backups should be done quite painlessly -\njust copy some part of WAL, and should be small enough (compared to\nfull dump), so they can be done each hour or even more frequently..\n\nI hope sometime PostgreSQL will support that. :-)\n\nSincerely Yours,\nTimur\nmailto:itvthor@sdf.lonestar.org\n\n", "msg_date": "Tue, 8 Oct 2002 16:00:12 +0500", "msg_from": "\"Timur V. Irmatov\" <itvthor@sdf.lonestar.org>", "msg_from_op": false, "msg_subject": "=?Windows-1251?B?UmVbMl06IFtHRU5FUkFMXSBS6XA=?=" } ]
[ { "msg_contents": "Nice. That little, cute admin :-). \nThis is already fixed, and where is 'thanks' i wonder?\nI've been talking about sql injection.\n\nHow about that in http://www.postgresql.org/mirrors/index.php:\n-------\nWarning: PostgreSQL query failed: ERROR: invalid INET value 'r'\nin /usr/local/www/www/mirrors/index.php on line 263\nDatabase update failed, contact the webmaster.\n\ninsert into mirrorsites(mirrorhostid,ipaddr,portnum,...) values(..)\n------\nInsert statement is shortened.\n\n\n\n\n\n\n\n________________________________________________________________________\nThis letter has been delivered unencrypted. We'd like to remind you that\nthe full protection of e-mail correspondence is provided by S-mail\nencryption mechanisms if only both, Sender and Recipient use S-mail.\nRegister at S-mail.com: http://www.s-mail.com\n", "msg_date": "Tue, 08 Oct 2002 10:57:46 +0000", "msg_from": "Sir Mordred The Traitor <mordred@s-mail.com>", "msg_from_op": true, "msg_subject": "Re: Little note to php coders" } ]
[ { "msg_contents": "\nMvO> http://www.postgresql.org/idocs/index.php?wal.html \n- The URL you refer to is the ch11 I was refering to. It seems that this chapter is not as easily understandable as it should...\nIt says that with WAL, \"pg is able to garantee consistency in the case of a crash\".\nOK, but I think is about /consistency/.\nFor what I understand, it just says that in the case of a core dump of a server process (improbable) or a power cut (probable) or an unwanted kill -9 (may happen), Pg will not have any corrupted table or index. \n\nCool, but not enough.\n\nAs Timur pointed out, I was refering to a disk crash or total loss of a server.\nIn this case, you loose up to 1 day of data.\n\nMvO> I've never lost any data with postgres, even if it's crashed, even without\nMvO> WAL.\nYour're a lucky guy, but Pg may not be the weakest part of your information system.\nIn my short DBA life ( 3 years, Oracle & MS SQL ), I have already seen twice the case when a WHOLE raid5 array was broken and all the data lost.\nOne time, it was the controller which wrote ch... on the disks, the other time, a power failure/peak/else crashed all of the 8 disks of the array.\nBoth times, the incremental backup method reduced the data loss to almost nothing.\n\n> There is a need in \"incremental\" backup, which backs up only those\n> transactions which has been fulfilled after last \"full dump\" or last\n> \"incremental dump\". These backups should be done quite painlessly -\n> just copy some part of WAL, and should be small enough (compared to\n> full dump), so they can be done each hour or even more frequently..\n> \n> I hope sometime PostgreSQL will support that. 
:-)\n\nSo do I.\nI think this would be on top of my \"missing features\" list.\n\nAs someone said, Replication may be a way to reduce the risks.\n\nE.D.\n\n-------------------------------------------------------------------------------\nErwan DUROSELLE // SEAFRANCE DSI \nResponsable Bases de Données // Databases Manager\nTel: +33 (0)1 55 31 59 70 // Fax: +33 (0)1 55 31 85 28\nemail: eduroselle@seafrance.fr\n-------------------------------------------------------------------------------\n\n\n>>> \"Timur V. Irmatov\" <itvthor@sdf.lonestar.org> 08/10/2002 13:00 >>>\nMartijn!\n\nTuesday, October 08, 2002, 3:45:13 PM, you wrote:\n\n>> - No, PostgreSQL does NOT provide a way to restore a database up to the\n>> last commited transaction, with a reapply of the WAL, as Oracle or SQL\n>> Server ( and others, I guess) do. That would be a VERY good feature. See\n>> Administrator's guide ch11\n\nMvO> Umm, I thought the whole point of WAL was that if the database crashed, the\nMvO> WAL would provide the info to replay to the last committed transaction.\n\n\nMvO> ... because we know that in the event of a crash we will be able to recover\nMvO> the database using the log: ...\n\nMvO> These docs seem to corrobrate this.\n\n>> So, with Pg, if you backup your db every night with pg_dump, and your\n>> server crashes during the day, you will loose up to one day of work.\n\nMvO> I've never lost any data with postgres, even if it's crashed, even without\nMvO> WAL.\n\nSuppose you made your nightly backup, and then after a day of work\nthe building where your server is located disappears in flames..\n\nThat's it - you lost one day of work (of course, if your dumps where\nstored outside that building otherwise you lost everything)..\n\nThere is a need in \"incremental\" backup, which backs up only those\ntransactions which has been fulfilled after last \"full dump\" or last\n\"incremental dump\". 
These backups should be done quite painlessly -\njust copy some part of WAL, and should be small enough (compared to\nfull dump), so they can be done each hour or even more frequently..\n\nI hope sometime PostgreSQL will support that. :-)\n\nSincerely Yours,\nTimur\nmailto:itvthor@sdf.lonestar.org \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Tue, 08 Oct 2002 14:17:47 +0200", "msg_from": "\"Erwan DUROSELLE\" <EDuroselle@seafrance.fr>", "msg_from_op": true, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "On 8 Oct 2002 at 14:17, Erwan DUROSELLE wrote:\n\n> \n> MvO> http://www.postgresql.org/idocs/index.php?wal.html \n> - The URL you refer to is the ch11 I was refering to. It seems that this chapter is not as easily understandable as it should...\n> It says that with WAL, \"pg is able to garantee consistency in the case of a crash\".\n> OK, but I think is about /consistency/.\n> For what I understand, it just says that in the case of a core dump of a server process (improbable) or a power cut (probable) or an unwanted kill -9 (may happen), Pg will not have any corrupted table or index. \n> \n> Cool, but not enough.\n> \n> As Timur pointed out, I was refering to a disk crash or total loss of a server.\n> In this case, you loose up to 1 day of data.\n> > There is a need in \"incremental\" backup, which backs up only those\n> > transactions which has been fulfilled after last \"full dump\" or last\n> > \"incremental dump\". These backups should be done quite painlessly -\n> > just copy some part of WAL, and should be small enough (compared to\n> > full dump), so they can be done each hour or even more frequently..\n> > \n> > I hope sometime PostgreSQL will support that. 
:-)\n\nWell, there are replication solutions which rsyncs WAL files after they are \nrotated so two database instances are upto sync with each other at a difference \nof one WAL file. If you are interested I can post the pdf.\n\nI guess that takes care of scenario you plan to avoid..\n\nBye\n Shridhar\n\n--\nTruthful, adj.:\tDumb and illiterate.\t\t-- Ambrose Bierce, \"The Devil's \nDictionary\"\n\n", "msg_date": "Tue, 08 Oct 2002 18:28:01 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "On Tue, 2002-10-08 at 08:58, Shridhar Daithankar wrote:\n> On 8 Oct 2002 at 14:17, Erwan DUROSELLE wrote:\n> > MvO> http://www.postgresql.org/idocs/index.php?wal.html \n> > - The URL you refer to is the ch11 I was refering to. It seems that this chapter is not as easily understandable as it should...\n> > It says that with WAL, \"pg is able to garantee consistency in the case of a crash\".\n> > OK, but I think is about /consistency/.\n> > For what I understand, it just says that in the case of a core dump of a server process (improbable) or a power cut (probable) or an unwanted kill -9 (may happen), Pg will not have any corrupted table or index. \n> > \n> > Cool, but not enough.\n> > \n> > As Timur pointed out, I was refering to a disk crash or total loss of a server.\n> > In this case, you loose up to 1 day of data.\n\nIs it me or do doomsdays scenarios sometimes seem a little silly? I'd\nlike to ask just where are you storing your \"incremental backups\" with\nOracle/m$ sql ?? If it's on the same drive, then when you drive craps\nout you've lost the incremental backups as well. Are you putting them\non a different drive (you can do that with the WAL) you'd still have the\nproblem that if the building went up in smoke you'd lose that\nincremental backup. 
Unless you are doing \"incremental backups\" to a\ncomputer in another physical location, you still fail all of your\nscenarios.\n\n> > > There is a need in \"incremental\" backup, which backs up only those\n> > > transactions which has been fulfilled after last \"full dump\" or last\n> > > \"incremental dump\". These backups should be done quite painlessly -\n> > > just copy some part of WAL, and should be small enough (compared to\n> > > full dump), so they can be done each hour or even more frequently..\n> > > \n> > > I hope sometime PostgreSQL will support that. :-)\n> \n> Well, there are replication solutions which rsyncs WAL files after they are \n> rotated so two database instances are upto sync with each other at a difference \n> of one WAL file. If you are interested I can post the pdf.\n> \n> I guess that takes care of scenario you plan to avoid..\n> \n\nThis type of scenario sounds as good as the above mentioned methods for\noracle/m$ server. Could you post your pdf? Seems like it might be worth\nadding to the techdocs site.\n\nRobert Treat\n\n\n\n\n", "msg_date": "08 Oct 2002 09:40:44 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "On 8 Oct 2002 at 9:40, Robert Treat wrote:\n> Is it me or do doomsdays scenarios sometimes seem a little silly? I'd\n> like to ask just where are you storing your \"incremental backups\" with\n> Oracle/m$ sql ?? If it's on the same drive, then when you drive craps\n> out you've lost the incremental backups as well. Are you putting them\n> on a different drive (you can do that with the WAL) you'd still have the\n> problem that if the building went up in smoke you'd lose that\n> incremental backup. Unless you are doing \"incremental backups\" to a\n> computer in another physical location, you still fail all of your\n> scenarios.\n\nWell, all I can say is having a sync'ed and replicated database ssytem is good. 
\nEither from load sharing point of view or from fail over point.\n\nForget backup. If you are upto restoring from backup, doesn't matter most of \nthe times whether you are restoring from dump or cycling thr. thousands of WAL \nfiles..\n\n> This type of scenario sounds as good as the above mentioned methods for\n> oracle/m$ server. Could you post your pdf? Seems like it might be worth\n> adding to the techdocs site.\n\nWell, I found it googling around. Se if this helps...I have posted this before \ndunno which list. This crosposting habit of PG lists made me to lose tracks of \nwhere things originated and where they ended.. Not that it's bad..\n\nHTH\nBye\n Shridhar\n\n--\nlike:\tWhen being alive at the same time is a wonderful coincidence.", "msg_date": "Tue, 08 Oct 2002 19:21:55 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "Hi Erwan,\n\nErwan DUROSELLE wrote:\n<snip>\n> As someone said, Replication may be a way to reduce the risks.\n\nYes, it can, but it all depends on how much the data is worth, what kind\nof load happens, etc.\n\nSomething as simple as a master->slave replication setup will do,\nbecause the master would generally be the beefy box doing processing,\nand the slave database server only receives changes, etc.\n\nUseful for lots of circumstances, although not the only solution.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> E.D.\n> \n> -------------------------------------------------------------------------------\n> Erwan DUROSELLE // SEAFRANCE DSI\n> Responsable Bases de Donn�es // Databases Manager\n> Tel: +33 (0)1 55 31 59 70 // Fax: +33 (0)1 55 31 85 28\n> email: eduroselle@seafrance.fr\n> -------------------------------------------------------------------------------\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 09 Oct 2002 00:04:40 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "On Tue, Oct 08, 2002 at 09:40:44AM -0400, Robert Treat wrote:\n\n> Is it me or do doomsdays scenarios sometimes seem a little silly? I'd\n\nNot if your contract requires five-nines reliablility and no more\nthan 180 minutes of downtime _ever_. Is five-nines realistic? For\nmost purposes, probably not, according to recent pronouncements (see,\ne.g. <http://www.bcr.com/bcrmag/2002/05/p22.asp>). But it's in\nlots of contracts anyway.\n\n> like to ask just where are you storing your \"incremental backups\" with\n> Oracle/m$ sql ?? If it's on the same drive, then when you drive craps\n\nThe more or less standard way of doing this is to stream the\nPITR-required stuff to another device on another controller -- lots\nof people stream to tape. People have been doing this for ages,\npartly because disks used to be (a) expensive and (b) unreliable. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 22 Oct 2002 10:34:42 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "I think you missed the part of the thread where the nuclear bomb hit the\ndata center. hmm... maybe it wasn't a nuclear bomb, but it was getting\nthere. :-)\n\nBTW - I believe we'll have real PITR in 7.4, about 6 months away. \n\nRobert Treat\n\nOn Tue, 2002-10-22 at 10:34, Andrew Sullivan wrote:\n> On Tue, Oct 08, 2002 at 09:40:44AM -0400, Robert Treat wrote:\n> \n> > Is it me or do doomsdays scenarios sometimes seem a little silly? 
I'd\n> \n> Not if your contract requires five-nines reliablility and no more\n> than 180 minutes of downtime _ever_. Is five-nines realistic? For\n> most purposes, probably not, according to recent pronouncements (see,\n> e.g. <http://www.bcr.com/bcrmag/2002/05/p22.asp>). But it's in\n> lots of contracts anyway.\n> \n> > like to ask just where are you storing your \"incremental backups\" with\n> > Oracle/m$ sql ?? If it's on the same drive, then when you drive craps\n> \n> The more or less standard way of doing this is to stream the\n> PITR-required stuff to another device on another controller -- lots\n> of people stream to tape. People have been doing this for ages,\n> partly because disks used to be (a) expensive and (b) unreliable. \n> \n\n\n\n", "msg_date": "24 Oct 2002 10:42:00 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "On Thu, Oct 24, 2002 at 10:42:00AM -0400, Robert Treat wrote:\n> I think you missed the part of the thread where the nuclear bomb hit the\n> data center. hmm... maybe it wasn't a nuclear bomb, but it was getting\n> there. :-)\n\nNo, I didn't miss it. Have a look at the Internet Society bid to run\n.org -- it's available for public consumption on ICANN's site. One may\nbelive that, if people are launching nuclear attacks, suicide\nbombings, and anthrax releases, the disposition of some set of data\none looks after is unlikely to be of tremendous importance. But\nlawyers and insurers don't think that way, and if you really want\nPostgreSQL to be taken seriously in the \"enterprise market\", you have\nto please lawyers and insurers. \n\nHaving undertaken the exercise, I really can say that it is a little\nstrange to think about what would happen to data I am in charge of in\ncase a fairly large US centre were completely blown off the map. 
But\nwith a little careful planning, you actually _can_ think about that,\nand provide strong assurances that things won't get lost. But it\ndoesn't pay to call such questions \"silly\", because they are\nquestions that people will demand answers to before they entrust you\nwith their millions of dollars of data. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 24 Oct 2002 16:11:28 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "I must agree Andrew. Having worked in the Financial sector these kinds\nof questions are actually put forward with genuine concern, and when\npeople are paying the \"big bucks\" to here the answers, nothing is ever\n\"yeah, right\", it's always answered with a costed solution to meet the\nneeds.\nGiven the events of Sept 11th, the serious consideration of a building\nbeing destroyed is actually a potential reality. However, rather than\nbeing a terrorist attack, it could easily be a natural disaster, like an\nearthquake or a tornado. Admittedly nuclear strikes are remote ;-) Well,\nI'd like to think that, seeing I live in New Zealand.\n\nMost of the upper management don't have a clue as to what can be done on\na lower level to protect that data that is being gathered. In protecting\ndata, having a reliable OS and Database application is only part of the\nsolution. I've worked on Enterprise solutions that contain massive RAID\n5 volumes with hard mirroring occuring across a WAN to a remote site,\nphysically located in another building, JUST, to cover the what if\nscenarios, in addition, there was log shipping occuring every 30mins to\nremote servers in other centres and then nightly backups of those logs. \n\nI tend to take the view that doing database work is best taken from a\nholistic approach. 
I feel happy once I have an overall level of\nsatisfaction that not only will the database and OS perform to\nexpectation, but that the \"data\", the one thing I am really concerned\nabout is always recoverable.\n\nBasically, backup is available to N degrees of complexity, what kills it\nis how much you're willing to spend and at what level you feel satisfied\nthat you could put your hand over your heart and say, \"Yes, this data is\n100% recoverable from this point in time, regardless of the following\ncircumstances...\". \n\nMy 5c.\n\nHadley\n\n\nOn Fri, 2002-10-25 at 09:11, Andrew Sullivan wrote:\n> On Thu, Oct 24, 2002 at 10:42:00AM -0400, Robert Treat wrote:\n> > I think you missed the part of the thread where the nuclear bomb hit the\n> > data center. hmm... maybe it wasn't a nuclear bomb, but it was getting\n> > there. :-)\n> \n> No, I didn't miss it. Have a look at the Internet Society bid to run\n> .org -- it's available for public consumption on ICANN's site. One may\n> belive that, if people are launching nuclear attacks, suicide\n> bombings, and anthrax releases, the disposition of some set of data\n> one looks after is unlikely to be of tremendous importance. But\n> lawyers and insurers don't think that way, and if you really want\n> PostgreSQL to be taken seriously in the \"enterprise market\", you have\n> to please lawyers and insurers. \n> \n> Having undertaken the exercise, I really can say that it is a little\n> strange to think about what would happen to data I am in charge of in\n> case a fairly large US centre were completely blown off the map. But\n> with a little careful planning, you actually _can_ think about that,\n> and provide strong assurances that things won't get lost. But it\n> doesn't pay to call such questions \"silly\", because they are\n> questions that people will demand answers to before they entrust you\n> with their millions of dollars of data. 
\n> \n> A\n> \n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M2P 2A8\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n-- \nHadley Willan > Systems Development > Deeper Design Limited. \nhadley@deeper.co.nz > www.deeperdesign.com > +64 (21) 28 41 463\n\n\n", "msg_date": "25 Oct 2002 09:39:12 +1300", "msg_from": "Hadley Willan <hadley.willan@deeper.co.nz>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "The world rejoiced as andrew@libertyrms.info (Andrew Sullivan) wrote:\n> Having undertaken the exercise, I really can say that it is a little\n> strange to think about what would happen to data I am in charge of in\n> case a fairly large US centre were completely blown off the map. But\n> with a little careful planning, you actually _can_ think about that,\n> and provide strong assurances that things won't get lost. But it\n> doesn't pay to call such questions \"silly\", because they are\n> questions that people will demand answers to before they entrust you\n> with their millions of dollars of data. \n\nI was associated with one data center that has the whole\n\"barbed-wire-fences, 40-foot-underground-bunker, retina-scanning\"\nthing; they apparently /did/ do analysis based on the site being a\npotential target for nuclear attack.\n\nRealistically, two scenarios are much more realistic:\n\n a) The site resides in a significant tornado zone where towns\n occasionally get scraped off the map;\n\n b) The site isn't far from a small but busy airport, and they did\n consciously consider the possibility of aircraft crashing into the\n building. Presumably by accident, not by design; the company owns\n quite a number of jet aircraft, so that vulnerabilities involving\n misuse of aircraft would rapidly \"fly\" to mind... 
(Painfully\n and vastly moreso since 9/11, of course :-(.)\n\nWhen doing risk analysis, it is certainly necessary to consider these\nsorts of (admittedly paranoid) scenarios. \n\nIt's a bit fun, in a way; you get to look for some pretty odd-ball\nsituations; the \"server room being overrun by Mongol Hordes.\" That\nparticular one isn't too likely, of course :-).\n-- \n(reverse (concatenate 'string \"moc.enworbbc@\" \"sirhc\"))\nhttp://www.ntlug.org/~cbbrowne/advocacy.html\n\"I've discovered that P=NP, but the proof is too long to fit within\nthe confines of this signature...\"\n", "msg_date": "Thu, 24 Oct 2002 17:12:14 -0400", "msg_from": "cbbrowne@cbbrowne.com", "msg_from_op": false, "msg_subject": "Re: Hot Backup" }, { "msg_contents": "On Thu, 2002-10-24 at 16:11, Andrew Sullivan wrote:\n> On Thu, Oct 24, 2002 at 10:42:00AM -0400, Robert Treat wrote:\n> > I think you missed the part of the thread where the nuclear bomb hit the\n> > data center. hmm... maybe it wasn't a nuclear bomb, but it was getting\n> > there. :-)\n> \n> No, I didn't miss it. Have a look at the Internet Society bid to run\n> .org -- it's available for public consumption on ICANN's site. One may\n> belive that, if people are launching nuclear attacks, suicide\n> bombings, and anthrax releases, the disposition of some set of data\n> one looks after is unlikely to be of tremendous importance. But\n> lawyers and insurers don't think that way, and if you really want\n> PostgreSQL to be taken seriously in the \"enterprise market\", you have\n> to please lawyers and insurers. \n> \n> Having undertaken the exercise, I really can say that it is a little\n> strange to think about what would happen to data I am in charge of in\n> case a fairly large US centre were completely blown off the map. But\n> with a little careful planning, you actually _can_ think about that,\n> and provide strong assurances that things won't get lost. 
But it\n> doesn't pay to call such questions \"silly\", because they are\n> questions that people will demand answers to before they entrust you\n> with their millions of dollars of data. \n> \n\nIf someone tries to argue that PostgreSQL isn't viable as a database\nsolution because it doesn't have PITR, but their PITR solution is\nstoring all that data on the same machine, well, that's silly. I'm not\nsaying PITR isn't a good thing, but I can always come up with some\nfar-fetched way to show you why your backup schema isn't sufficient, the\npoint is to determine how much data loss is acceptable and then plan\naccordingly. (And yes, I have had to plan the what if the data center\ngets blown up scenario before, but that doesn't mean I require my\ndatabase can withstand global thermal-nuclear war on every project I do)\n\nRobert Treat\n\n\n", "msg_date": "24 Oct 2002 18:40:09 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Hot Backup" } ]
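The idea that recurs throughout the thread above — copy each rotated WAL segment somewhere else, frequently, and never touch the segment still being written — can be sketched in a few lines. Everything here is a stand-in: the segment names are faked, and the local "archive" directory plays the role of the remote machine that rsync/scp would reach in a real setup.

```python
import os, shutil, tempfile

# Sketch of the "ship rotated WAL segments off-box" idea from the thread.
# pg_xlog stands in for $PGDATA/pg_xlog; archive stands in for the off-site copy.
pg_xlog = tempfile.mkdtemp()
archive = tempfile.mkdtemp()

segments = ["000000010000000000000001",
            "000000010000000000000002",
            "000000010000000000000003"]
for name in segments:
    open(os.path.join(pg_xlog, name), "w").close()
current = segments[-1]          # the segment the server is still appending to

for name in sorted(os.listdir(pg_xlog)):
    if name == current:
        continue                # never ship the in-use segment
    shutil.copy2(os.path.join(pg_xlog, name), os.path.join(archive, name))

print("archived:", sorted(os.listdir(archive)))
```

In the 7.x series this kind of thing was entirely do-it-yourself and the copied segments could not be replayed mid-stream; later releases added a built-in hook for it (archive_command) together with true point-in-time recovery.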
[ { "msg_contents": "\n> > Can the magic be, that kaio directly writes from user space memory to the \n> > disk ?\n> \n> This makes more assumptions about the disk drive's behavior than I think\n> are justified...\n\nNo, no assumption about the drive, only about the kaio implementation: namely, that\nthe kaio implementation reads the user memory at the latest possible time (for\nthat implementation).\n\n> > Since in your case all transactions A-E want the same buffer written,\n> > the memory (not its content) will also be the same.\n> \n> But no, it won't: the successive writes will ask to write different\n> snapshots of the same buffer.\n\nThat is what I meant, yes. I should have said \"memory location\".\n\n> > The problem I can see offhand is how the kaio system can tell which\n> > transaction can be safely notified of the write,\n> \n> Yup, exactly. Whose snapshot made it down to (stable) disk storage?\n\nThat is something the kaio implementation could probably know in our scenario, \nwhere IIRC we only append new records inside the page (we don't modify anything \nup front). The worst that can happen is that kaio is too pessimistic \nabout what is already on disk (memory already written, but the aio call not yet done).\n\nAndreas\n", "msg_date": "Tue, 8 Oct 2002 18:06:29 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Analysis of ganged WAL writes " } ]
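The exchange above turns on exactly *when* the buffer contents are read relative to the write request. A toy model (plain Python, nothing PostgreSQL-specific) of why a late-reading asynchronous write makes "whose snapshot hit disk?" ambiguous — and why being pessimistic about it is still safe for an append-only page:

```python
# Toy model of the ambiguity discussed in the thread: an asynchronous flush
# that reads the shared WAL buffer only when the I/O actually happens writes
# out records queued *after* the flush was requested.
wal_buffer = []      # shared, still-growing in-memory WAL page
requested = []       # how much each submitter asked to have flushed

def append_record(xid):
    wal_buffer.append("commit " + xid)

def submit_async_flush():
    # the submitter can only be sure about what existed at submit time
    requested.append(len(wal_buffer))

def perform_flush():
    # a "late-reading" implementation writes whatever is in the buffer *now*
    return list(wal_buffer)

append_record("A")
submit_async_flush()       # A wants its commit record on stable storage
append_record("B")         # B and C append before the I/O is performed
append_record("C")
on_disk = perform_flush()

print("A asked for", requested[0], "record(s);", len(on_disk), "reached disk")
```

Notifying A is safe either way — its record is a prefix of whatever was written. Being unsure about B and C, even though their records in fact made it out, is precisely the "too pessimistic" worst case Andreas describes.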
[ { "msg_contents": "Hi,\n\n First of all, sorry for bothering you all with my email about \nthis subject again.\n\n I am a Spanish student and I am following a course about Genetic Algorithms. My\nteacher suggested that I search for information on how things are optimized\nin a database. Then I thought about PostgreSQL. I have downloaded tons of \nPDF papers.\n\n Mainly, I have to do a small assignment showing what can be achieved with\nPostgreSQL using genetic query optimization.\n\n So far, I have found a lot of information, but at a very high level, and \nwhat I am looking for is much simpler: I just need some insight into what can \nbe done with the query optimizer and a few examples to start with, so I can work out \na new set of examples or solve a slightly bigger problem.\n\n Hope you guys can help me out a little bit, with PDFs, papers or references.\n\n Many thanks in advance\n\n Miguel\n\n\n", "msg_date": "Tue, 8 Oct 2002 16:22:48 MET", "msg_from": "iafmgc@unileon.es", "msg_from_op": true, "msg_subject": "genetic query optimization" } ]
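For the kind of simple starting point being asked for above, the core of the GEQO idea fits in a few lines: treat a join order as a permutation of the tables, score it with a cost function, and evolve a population of permutations. Everything below is an invented toy — the cost function in particular has nothing to do with PostgreSQL's real estimates (the actual GEQO asks the standard planner to cost each candidate order, and uses edge-recombination crossover rather than the plain swap mutation shown here).

```python
import random

# Toy genetic search over join orders, in the spirit of PostgreSQL's GEQO.
TABLES = list(range(6))

def cost(order):
    # pretend adjacent joins are cheaper between "nearby" tables
    return sum(abs(a - b) for a, b in zip(order, order[1:]))

def mutate(order):
    # swap two positions, keeping the order a valid permutation
    child = order[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def toy_geqo(pop_size=20, generations=200, seed=0):
    random.seed(seed)
    pop = [random.sample(TABLES, len(TABLES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        pop = pop[: pop_size // 2]   # selection: keep the cheaper half
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return min(pop, key=cost)

best = toy_geqo()
print("best join order:", best, "cost:", cost(best))
```

In PostgreSQL itself this machinery only kicks in once a query joins more tables than geqo_threshold, and the geqo_* settings control the population size and number of generations.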
[ { "msg_contents": "\nA lot of different things going on but my perl program (whose backend crashed)\nwas doing a lot of insert into table as select * from another table for a\nlot of different tables. I see triggers referenced here and it should be\nnoted that for one of the tables the triggers were first disabled (update\npg_class) and re-enabled after the inserts are done (or it takes forever).\n\nThe pgsql log shows:\n...\n2002-10-08 15:48:38 [18033] DEBUG: recycled transaction log file\n00000052000000B1\n2002-10-08 15:49:24 [28612] DEBUG: server process (pid 16003) was\nterminated by signal 11\n2002-10-08 15:49:24 [28612] DEBUG: terminating any other active server\nprocesses\n2002-10-08 18:49:24 [28616] NOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend\n died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am\n going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n...\n\nA core file was found in <datadir>/base/326602604\nand a backtrace shows:\n(gdb) bt\n#0 DeferredTriggerSaveEvent (relinfo=0x83335f0, event=0, oldtup=0x0,\n newtup=0x8348150) at trigger.c:2056\n#1 0x080b9d0c in ExecARInsertTriggers (estate=0x8333778,\nrelinfo=0x83335f0,\n trigtuple=0x8348150) at trigger.c:952\n#2 0x080c0f23 in ExecAppend (slot=0x8333660, tupleid=0x0,\nestate=0x8333778)\n at execMain.c:1280\n#3 0x080c0dcd in ExecutePlan (estate=0x8333778, plan=0x83336f0,\n operation=CMD_INSERT, numberTuples=0, direction=ForwardScanDirection,\n destfunc=0x8334278) at execMain.c:1119\n#4 0x080c026c in ExecutorRun (queryDesc=0x826fd88, estate=0x8333778,\nfeature=3,\n count=0) at execMain.c:233\n#5 0x0810b2d5 in ProcessQuery (parsetree=0x826c500, plan=0x83336f0,\ndest=Remote,\n completionTag=0xbfffec10 \"\") at pquery.c:259\n#6 0x08109c83 in pg_exec_query_string (\n query_string=0x826c168 \"insert into jobsequences select * 
from\nrev_000_jobsequences\", dest=Remote, parse_context=0x8242cd8) at\npostgres.c:811\n#7 0x0810abee in PostgresMain (argc=4, argv=0xbfffee40,\n username=0x8202d59 \"laurette\") at postgres.c:1929\n#8 0x080f24fe in DoBackend (port=0x8202c28) at postmaster.c:2243\n#9 0x080f1e9a in BackendStartup (port=0x8202c28) at postmaster.c:1874\n#10 0x080f10e9 in ServerLoop () at postmaster.c:995\n#11 0x080f0c56 in PostmasterMain (argc=1, argv=0x81eb398) at\npostmaster.c:771\n#12 0x080d172b in main (argc=1, argv=0xbffff7d4) at main.c:206\n#13 0x401e7177 in __libc_start_main (main=0x80d15a8 <main>, argc=1,\n ubp_av=0xbffff7d4, init=0x80676ac <_init>, fini=0x81554f0 <_fini>,\n rtld_fini=0x4000e184 <_dl_fini>, stack_end=0xbffff7cc)\n at ../sysdeps/generic/libc-start.c:129\n\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n------------------------------\nIt's 10 o'clock...\nDo you know where your bus is?\n\n", "msg_date": "Tue, 8 Oct 2002 16:34:02 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "pgsql 7.2.3 crash" }, { "msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> A core file was found in <datadir>/base/326602604\n> and a backtrace shows:\n> (gdb) bt\n> #0 DeferredTriggerSaveEvent (relinfo=0x83335f0, event=0, oldtup=0x0,\n> newtup=0x8348150) at trigger.c:2056\n\nHm. Line 2056 is this:\n\n\tfor (i = 0; i < ntriggers; i++)\n\t{\n\t\tTrigger *trigger = &trigdesc->triggers[tgindx[i]];\n\n->\t\tnew_event->dte_item[i].dti_tgoid = trigger->tgoid;\n\nIt seems there must be something wrong with the trigdesc data structure\nfor that table, but what? 
Can you poke around in the corefile with gdb\nprint commands and determine what's wrong with the trigdesc?\n\n> I see triggers referenced here and it should be\n> noted that for one of the tables the triggers were first disabled (update\n> pg_class) and re-enabled after the inserts are done (or it takes\n> forever).\n\nDid that happen while this backend was running?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Oct 2002 10:14:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql 7.2.3 crash " }, { "msg_contents": "On Wed, 9 Oct 2002, Tom Lane wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> > A core file was found in <datadir>/base/326602604\n> > and a backtrace shows:\n> > (gdb) bt\n> > #0 DeferredTriggerSaveEvent (relinfo=0x83335f0, event=0, oldtup=0x0,\n> > newtup=0x8348150) at trigger.c:2056\n> \n> Hm. Line 2056 is this:\n> \n> \tfor (i = 0; i < ntriggers; i++)\n> \t{\n> \t\tTrigger *trigger = &trigdesc->triggers[tgindx[i]];\n> \n> ->\t\tnew_event->dte_item[i].dti_tgoid = trigger->tgoid;\n> \n> It seems there must be something wrong with the trigdesc data structure\n> for that table, but what? 
Can you poke around in the corefile with gdb\n> print commands and determine what's wrong with the trigdesc?\n\nHere's my poking/printing around with gdb:\n(gdb) print trigger\n$1 = (Trigger *) 0x1272c9a0\n(gdb) print *trigger\nCannot access memory at address 0x1272c9a0\n(gdb) print trigger->tgoid\nCannot access memory at address 0x1272c9a0\n(gdb) print trigdesc\n$2 = (TriggerDesc *) 0x4c86b2d8\n(gdb) print &trigdesc\n$3 = (TriggerDesc **) 0xbfffea18 \n(gdb) print *trigdesc \n$4 = {n_before_statement = {5378, 22310, 37184, 2085}, n_before_row =\n{51128, 19585,\n 62320, 19589}, n_after_row = {45784, 19590, 52748, 2084},\nn_after_statement = {\n 0, 0, 0, 0}, tg_before_statement = {0x4c86b2d8, 0x8259910, 0x0, 0x0},\n tg_before_row = {0xa0, 0x350000, 0x1f, 0x4006}, tg_after_row =\n{0x8242920,\n 0x4c860cb0, 0x4c86eec0, 0x0}, tg_after_statement = {0x0, 0x0, 0x0,\n0x0},\n triggers = 0x4c86e7a0, numtriggers = 8}\n\n> \n> > I see triggers referenced here and it should be\n> > noted that for one of the tables the triggers were first disabled (update\n> > pg_class) and re-enabled after the inserts are done (or it takes\n> > forever).\n> \n> Did that happen while this backend was running?\n\nYes. 
I had run this perl program about 4-5 times in a row (which includes\nthe sequence, disable triggers, insert rows, enable triggers) and then it\ncrashed on one of the runs.\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n------------------------------\nIt's 10 o'clock...\nDo you know where your bus is?\n\n", "msg_date": "Wed, 9 Oct 2002 13:18:51 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: pgsql 7.2.3 crash " }, { "msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> I see triggers referenced here and it should be\n> noted that for one of the tables the triggers were first disabled (update\n> pg_class) and re-enabled after the inserts are done (or it takes\n> forever).\n>> \n>> Did that happen while this backend was running?\n\n> Yes. I had run this perl program about 4-5 times in a row (which includes\n> the sequence, disable triggers, insert rows, enable triggers) and then it\n> crashed on one of the runs.\n\nHm. The stack trace shows that this backend crashed while executing the\ncommand\n\tinsert into jobsequences select * from rev_000_jobsequences\nIs it possible that you disabled and re-enabled triggers on jobsequences\n*while this command was running* ?\n\nThe gdb info makes it look like the triggers code is using a stale\ntrigger description structure. The pointer that's being used is cached\nin the ResultRelInfo struct (ri_TrigDesc) during query startup. If\nsome external operation changed the trigger state while the query is\nrunning, trouble would ensue.\n\nThis looks like something we ought to fix in any case, but I'm unsure\nwhether it explains your crash. 
Do you think that that's what could\nhave happened?\n\n\nHackers: we might reasonably fix this by doing a deep copy of the\nrelcache's trigger info during initResultRelInfo(); or we could fix it\nby getting rid of ri_TrigDesc and re-fetching from the relcache every\ntime. The former would imply that trigger state would remain unchanged\nthroughout a query, the latter would try to track currently-committed\ntrigger behavior. Either way has got pitfalls I think.\n\nThe fact that there's a problem at all is because people are using\ndirect poking of the system catalogs instead of some kind of ALTER TABLE\ncommand to disable/enable triggers; an ALTER command would presumably\ngain exclusive lock on the table and thereby delay until active queries\nfinish. But that technique is out there (even in pg_dump files :-() and\nso we'd best try to make the system proof against it.\n\nAny thoughts on which way to go?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Oct 2002 21:13:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql 7.2.3 crash " }, { "msg_contents": "Tom Lane wrote:\n> Hackers: we might reasonably fix this by doing a deep copy of the\n> relcache's trigger info during initResultRelInfo(); or we could fix it\n> by getting rid of ri_TrigDesc and re-fetching from the relcache every\n> time. The former would imply that trigger state would remain unchanged\n> throughout a query, the latter would try to track currently-committed\n> trigger behavior. Either way has got pitfalls I think.\n> \n> The fact that there's a problem at all is because people are using\n> direct poking of the system catalogs instead of some kind of ALTER TABLE\n> command to disable/enable triggers; an ALTER command would presumably\n> gain exclusive lock on the table and thereby delay until active queries\n> finish. 
But that technique is out there (even in pg_dump files :-() and\n> so we'd best try to make the system proof against it.\n> \n> Any thoughts on which way to go?\n\nI'd say:\n\n1. go with the former\n2. we definitely should also have an ALTER command to allow disable/enable of\n triggers\n3. along with the ALTER, document that directly messing with the system\n catalogs is highly discouraged\n\nJoe\n\n\n", "msg_date": "Sat, 12 Oct 2002 22:31:31 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: pgsql 7.2.3 crash" }, { "msg_contents": "On Sat, 12 Oct 2002, Joe Conway wrote:\n\n> Tom Lane wrote:\n> > Hackers: we might reasonably fix this by doing a deep copy of the\n> > relcache's trigger info during initResultRelInfo(); or we could fix it\n> > by getting rid of ri_TrigDesc and re-fetching from the relcache every\n> > time. The former would imply that trigger state would remain unchanged\n> > throughout a query, the latter would try to track currently-committed\n> > trigger behavior. Either way has got pitfalls I think.\n> > \n> > The fact that there's a problem at all is because people are using\n> > direct poking of the system catalogs instead of some kind of ALTER TABLE\n> > command to disable/enable triggers; an ALTER command would presumably\n> > gain exclusive lock on the table and thereby delay until active queries\n> > finish. But that technique is out there (even in pg_dump files :-() and\n> > so we'd best try to make the system proof against it.\n> > \n> > Any thoughts on which way to go?\n> \n> I'd say:\n> \n> 1. go with the former\n\nI agree.\n\n> 2. we definitely should also have an ALTER command to allow disable/enable of\n> triggers\n\nI thought this was worked on for 7.3? I remember speaking to someone\n(?) at OSCON because I had been working on 'ENABLE TRIGGER <trigname>' and\nits complement on the plane.
Much of the work seemed to have been in CVS\nalready.\n\nGavin\n\n", "msg_date": "Sun, 13 Oct 2002 15:52:30 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: pgsql 7.2.3 crash" }, { "msg_contents": "Gavin Sherry wrote:\n> On Sat, 12 Oct 2002, Joe Conway wrote:\n> \n> > Tom Lane wrote:\n> > > Hackers: we might reasonably fix this by doing a deep copy of the\n> > > relcache's trigger info during initResultRelInfo(); or we could fix it\n> > > by getting rid of ri_TrigDesc and re-fetching from the relcache every\n> > > time. The former would imply that trigger state would remain unchanged\n> > > throughout a query, the latter would try to track currently-committed\n> > > trigger behavior. Either way has got pitfalls I think.\n> > > \n> > > The fact that there's a problem at all is because people are using\n> > > direct poking of the system catalogs instead of some kind of ALTER TABLE\n> > > command to disable/enable triggers; an ALTER command would presumably\n> > > gain exclusive lock on the table and thereby delay until active queries\n> > > finish. But that technique is out there (even in pg_dump files :-() and\n> > > so we'd best try to make the system proof against it.\n> > > \n> > > Any thoughts on which way to go?\n> > \n> > I'd say:\n> > \n> > 1. go with the former\n> \n> I agree.\n> \n> > 2. we definitely should also have an ALTER command to allow disable/enable of\n> > triggers\n> \n> I thought this was worked on for 7.3? I remember speaking to someone\n> (?) at OSCON because I had been working on 'ENABLE TRIGGER <trigname>' and\n> its complement on the plane.
Much of the work seemed to have been in CVS\n> already.\n\nIt is in TODO:\n\n\t* Allow triggers to be disabled [trigger]\n\n From the TODO.detail archives, it seems it got stuck on an\nimplementation issue:\n\n\thttp://candle.pha.pa.us/mhonarc/todo.detail/trigger/msg00001.html\n\nThe patch didn't prevent deferred constraint triggers from being fired.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 13 Oct 2002 21:39:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql 7.2.3 crash" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> On Sat, 12 Oct 2002, Joe Conway wrote:\n>> Tom Lane wrote:\n>>> Hackers: we might reasonably fix this by doing a deep copy of the\n>>> relcache's trigger info during initResultRelInfo(); or we could fix it\n>>> by getting rid of ri_TrigDesc and re-fetching from the relcache every\n>>> time. The former would imply that trigger state would remain unchanged\n>>> throughout a query, the latter would try to track currently-committed\n>>> trigger behavior. Either way has got pitfalls I think.\n\n>>> Any thoughts on which way to go?\n\n>> I'd say:\n>> 1. go with the former\n\n> I agree.\n\nThat's my leaning too, after further reflection. Will make it so.\n\n>> 2. we definitely should also have an ALTER command to allow\n>> disable/enable of triggers\n\n> I thought this was worked on for 7.3?\n\nUnless I missed it, it's not in current sources.\n\nI was wondering whether an ALTER TABLE command is really the right way\nto approach this. If we had an ALTER-type command, presumably the\nimplication is that its effects would be global to all backends.
But\nthe uses that I've seen for suspending trigger invocations would be\nhappier with a local, temporary setting that only affects the current\nbackend. Any thoughts about that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 00:04:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Disabling triggers (was Re: pgsql 7.2.3 crash)" }, { "msg_contents": "On Mon, 14 Oct 2002, Tom Lane wrote:\n\n> I was wondering whether an ALTER TABLE command is really the right way\n> to approach this. If we had an ALTER-type command, presumably the\n> implication is that its effects would be global to all backends. But\n> the uses that I've seen for suspending trigger invocations would be\n> happier with a local, temporary setting that only affects the current\n> backend. Any thoughts about that?\n\nI may be missing something here, but the only circumstance where i could\nsee such being useful would be a load of a database ... other than that,\nhow would overriding triggers be considered a good thing?\n\n\n", "msg_date": "Mon, 14 Oct 2002 01:08:31 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Disabling triggers (was Re: pgsql 7.2.3 crash)" }, { "msg_contents": "Tom Lane wrote:\n> I was wondering whether an ALTER TABLE command is really the right way\n> to approach this. If we had an ALTER-type command, presumably the\n> implication is that its effects would be global to all backends. But\n> the uses that I've seen for suspending trigger invocations would be\n> happier with a local, temporary setting that only affects the current\n> backend. Any thoughts about that?\n\nI think SET would be the proper place, but I don't see how to make it\ntable-specific.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 00:13:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Disabling triggers (was Re: pgsql 7.2.3 crash)" }, { "msg_contents": "Tom Lane wrote:\n> I was wondering whether an ALTER TABLE command is really the right way\n> to approach this. If we had an ALTER-type command, presumably the\n> implication is that its effects would be global to all backends. But\n> the uses that I've seen for suspending trigger invocations would be\n> happier with a local, temporary setting that only affects the current\n> backend. Any thoughts about that?\n> \n\nHmmm. Well the most common uses I've run across for disabling triggers in the \n Oracle Apps world are:\n\n1) bulk loading of data\n2) temporarily turning off \"workflow\" procedures\n\nThe first case would benefit from being able to disable the trigger locally, \nwithout affecting other backends. Of course, I don't know how common it is to \nbulk load data while others are hitting the same table.\n\nThe second case is usually something like an insert into the employee table \nfires off an email to IT to create a login and security to make a badge. \nCommonly we turn off workflows (by disabling their related triggers) in our \ndevelopment and test databases so someone doesn't disable the CEO's login when \nwe fire him as part of our testing! I think in this scenario it is better to \nbe able to disable the trigger globally ;-)\n\nJoe\n\n\n\n\n", "msg_date": "Sun, 13 Oct 2002 21:15:08 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Disabling triggers (was Re: pgsql 7.2.3 crash)" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> On Mon, 14 Oct 2002, Tom Lane wrote:\n>> I was wondering whether an ALTER TABLE command is really the right way\n>> to approach this.
If we had an ALTER-type command, presumably the\n>> implication is that its effects would be global to all backends. But\n>> the uses that I've seen for suspending trigger invocations would be\n>> happier with a local, temporary setting that only affects the current\n>> backend. Any thoughts about that?\n\n> I may be missing something here, but the only circumstance where i could\n> see such being useful would be a load of a database ... other than that,\n> how would overriding triggers be considered a good thing?\n\nWell, exactly: it seems like something you'd want to constrain as\ntightly as possible. So some kind of local, SET-like operation seems\nsafer to me than a global, ALTER-TABLE-like operation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 00:20:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Disabling triggers (was Re: pgsql 7.2.3 crash) " }, { "msg_contents": "Yeah I think that could have happened since I was running it several times and had\ncancelled it (ctrl-c) it a couple of those times. Could be the backend of one \ncancelled run hadn't finished what it was doing and if that was re-enabling\ntriggers it could have walked on it.\n\nThanks.\n\nL.\nOn Sat, 12 Oct 2002, Tom Lane wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> > I see triggers referenced here and it should be\n> > noted that for one of the tables the triggers were first disabled (update\n> > pg_class) and re-enabled after the inserts are done (or it takes\n> > forever).\n> >> \n> >> Did that happen while this backend was running?\n> \n> > Yes. I had run this perl program about 4-5 times in a row (which includes\n> > the sequence, disable triggers, insert rows, enable triggers) and then it\n> > crashed on one of the runs.\n> \n> Hm.
The stack trace shows that this backend crashed while executing the\n> command\n> \tinsert into jobsequences select * from rev_000_jobsequences\n> Is it possible that you disabled and re-enabled triggers on jobsequences\n> *while this command was running* ?\n> \n> The gdb info makes it look like the triggers code is using a stale\n> trigger description structure. The pointer that's being used is cached\n> in the ResultRelInfo struct (ri_TrigDesc) during query startup. If\n> some external operation changed the trigger state while the query is\n> running, trouble would ensue.\n> \n> This looks like something we ought to fix in any case, but I'm unsure\n> whether it explains your crash. Do you think that that's what could\n> have happened?\n> \n> \n> Hackers: we might reasonably fix this by doing a deep copy of the\n> relcache's trigger info during initResultRelInfo(); or we could fix it\n> by getting rid of ri_TrigDesc and re-fetching from the relcache every\n> time. The former would imply that trigger state would remain unchanged\n> throughout a query, the latter would try to track currently-committed\n> trigger behavior. Either way has got pitfalls I think.\n> \n> The fact that there's a problem at all is because people are using\n> direct poking of the system catalogs instead of some kind of ALTER TABLE\n> command to disable/enable triggers; an ALTER command would presumably\n> gain exclusive lock on the table and thereby delay until active queries\n> finish.
But that technique is out there (even in pg_dump files :-() and\n> so we'd best try to make the system proof against it.\n> \n> Any thoughts on which way to go?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n------------------------------\nIt's 10 o'clock...\nDo you know where your bus is?\n\n", "msg_date": "Mon, 14 Oct 2002 09:35:10 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: pgsql 7.2.3 crash " }, { "msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n> Yeah I think that could have happened since I was running it several times and had\n> cancelled it (ctrl-c) it a couple of those times.
Could be the backend of one \n> cancelled run hadn't finished what it was doing and if that was re-enabling\n> triggers it could have walked on it.\n\nSounds like we've got the explanation, then.\n\nI've just committed fixes into 7.3 to prevent this scenario in future.\nThe patch is probably too large to risk back-patching into 7.2.* though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 12:52:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql 7.2.3 crash " }, { "msg_contents": "Great. I am working my way toward 7.3 anyway...\n\nThanks!\n\nL.\nOn Mon, 14 Oct 2002, Tom Lane wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> > Yeah I think that could have happened since I was running it several times and had\n> > cancelled it (ctrl-c) it a couple of those times. Could be the backend of one \n> > cancelled run hadn't finished what it was doing and if that was re-enabling\n> > triggers it could have walked on it.\n> \n> Sounds like we've got the explanation, then.\n> \n> I've just committed fixes into 7.3 to prevent this scenario in future.\n> The patch is probably too large to risk back-patching into 7.2.* though.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n------------------------------\nIt's 10 o'clock...\nDo you know where your bus is?\n\n", "msg_date": "Mon, 14 Oct 2002 09:57:14 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: pgsql 7.2.3 crash " }, { "msg_contents": "On Mon, 14 Oct 2002, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > On Sat, 12 Oct 2002, Joe Conway wrote:\n> >> Tom Lane wrote:\n> >>> Hackers: we might reasonably fix this by doing a deep copy of the\n> >>> relcache's trigger info during initResultRelInfo(); or we could fix it\n> >>> by getting rid of ri_TrigDesc and re-fetching from the relcache every\n> >>> time. The former would imply that trigger state would remain unchanged\n> >>> throughout a query, the latter would try to track currently-committed\n> >>> trigger behavior. Either way has got pitfalls I think.\n> \n> >>> Any thoughts on which way to go?\n> \n> >> I'd say:\n> >> 1. go with the former\n> \n> > I agree.\n> \n> That's my leaning too, after further reflection. Will make it so.\n> \n> >> 2. we definitely should also have an ALTER command to allow\n> >> disable/enable of triggers\n> \n> > I thought this was worked on for 7.3?\n> \n> Unless I missed it, it's not in current sources.\n\nHere is an email I sent to pgsql-patches.\n\n--- BEGIN\n\n---------- Forwarded message ----------\nDate: Tue, 13 Aug 2002 15:38:50 +1000 (EST)\nFrom: Gavin Sherry <swm@linuxworld.com.au>\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nCc: Neil Conway <nconway@klamath.dyndns.org>, pgsql-patches@postgresql.org\nSubject: Re: [PATCHES] Fix disabled triggers with deferred constraints\n\nOn Tue, 13 Aug 2002, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > ...The spec is a large one and I didn't look at all references to\n> > triggers since there are hundreds -- but I don't believe that there is any\n> > precedent for an implementation of DISABLE TRIGGER.\n>\n> Thanks for the dig. I was hoping we could get some guidance from the\n> spec, but it looks like not. How about other implementations --- does\n> Oracle support disabled triggers? DB2? etc?\n\nOracle 8 (and I presume 9) allows you to disable/enable triggers through\nalter table and alter trigger. My 8.1.7 documentation is silent on the\ncases you mention below and I do not have an oracle installation handy to\ntest.
Anyone?\n\n>\n> > FWIW, i think that in the case of deferred triggers they should all be\n> > added to the queue and whether they are executed or not should be\n> > evaluated inside DeferredTriggerExecute() with:\n> > if(LocTriggerData.tg_trigger->tgenabled == false)\n> > return;\n>\n> So check the state at execution, not when the triggering event occurs.\n> I don't have any strong reason to object to that, but I have a gut\n> feeling that it still needs to be thought about...\n>\n> Let's see, I guess there are several possible changes of state for a\n> deferred trigger between the triggering event and the end of\n> transaction:\n>\n> * Trigger deleted. Surely the trigger shouldn't be executed, but should\n> we raise an error or just silently ignore it? (I suspect right now we\n> crash :-()\n>\n> * Trigger created. In some ideal world we might think that such a\n> trigger should be fired, but in reality that ain't gonna happen; we're\n> not going to record every possible event on the speculation that some\n> trigger for it might be created later in the transaction.\n\nIt doesn't need to be an ideal world. We're only talking about deferred\ntriggers after all. Why couldn't CreateTrigger() just have a look through\ndeftrig_events, check for its relid and if it's in there, call\ndeferredTriggerAddEvent().\n\n> * Trigger disabled. Your proposal is to not fire it. Okay, comports\n> with the deleted case, if we make that behavior be silently-ignore.\n>\n> * Trigger enabled. Your proposal is to fire it. Seems not to comport\n> with the creation case --- does that bother anyone?\n>\n> * Trigger changed from not-deferred to deferred. If we already fired it\n> for the event, we surely shouldn't fire it again. I believe the code\n> gets this case right.\n\nAgreed.\n\n> * Trigger changed from deferred to not-deferred. As Neil was pointing\n> out recently, this really should cause the trigger to be fired for the\n> pending event immediately, but we don't get that right at the moment.\n> (I suppose a stricter interpretation would be to raise an error because\n> we can't do anything that really comports with the intended behavior\n> of either case.)\n\nI think this should generate an error as it doesn't sit well with the\nspec IMHO.\n\nGavin\n\n--- END\n\nThis is why I thought ALTER TABLE was being worked on.\n\n> \n> I was wondering whether an ALTER TABLE command is really the right way\n> to approach this. If we had an ALTER-type command, presumably the\n> implication is that its effects would be global to all backends. But\n> the uses that I've seen for suspending trigger invocations would be\n> happier with a local, temporary setting that only affects the current\n> backend. Any thoughts about that?\n\nOracle supports DISABLE TRIGGER and ALTER TABLE DISABLE ALL TRIGGERS. I\ncannot find anything in my version 9 manual about whether the effect is\nlocal or global. As you say, I think the syntax suggests global.
(I do not\nhave an oracle system to test on).\n\nThere is no trigger disablement in DB2 7.2, which is the most recent\ndocumentation I have.\n\nPersonally, I think we should one up oracle. How about:\n\nDISABLE TRIGGER <name> [LOCALLY|GLOBALLY];\nALTER TABLE DISABLE [ALL] TRIGGERS [LOCALLY|GLOBALLY];\n\nand their respective complements.\n\nThis should be able to be implemented through the current invalidation\nsystem. We simply skip registering inval messages which are local.\n\nGavin\n\n\n\n\n\n\n", "msg_date": "Wed, 16 Oct 2002 00:44:43 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Disabling triggers (was Re: pgsql 7.2.3 crash)" }, { "msg_contents": "On Mon, Oct 14, 2002 at 12:04:14AM -0400, Tom Lane wrote:\n\n> implication is that its effects would be global to all backends. But\n> the uses that I've seen for suspending trigger invocations would be\n> happier with a local, temporary setting that only affects the current\n> backend.
Any thoughts about that?\n\nNone except that it would indeed be a big help.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 22 Oct 2002 10:51:48 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Disabling triggers (was Re: pgsql 7.2.3 crash)" }, { "msg_contents": "Hi,\n\nOn Mon, 2002-12-09 at 10:24, SEGUERRA FRANCIS TED ARANAS wrote:\n\n> how do i port from mysql to postgresql?..\n\nhttp://techdocs.postgresql.org/#convertfrom\n\nBest regards,\n.\n\n-- \nDevrim GUNDUZ \nTR.NET Sistem Destek Uzmani\n\nTel : (312) 295 93 18 Fax : (312) 295 94 94 Tel : (216) 542 90 00\n\n", "msg_date": "09 Dec 2002 10:21:24 +0200", "msg_from": "Devrim GUNDUZ <devrim@tr.net>", "msg_from_op": false, "msg_subject": "Re: Porting from MySQL to PostgreSQL (was: pgsql 7.2.3" }, { "msg_contents": "how do i port from mysql to postgresql?...\n\nthanks bruce,\nfrancis\n\n-- \nov3rr|d3r\n\n", "msg_date": "Mon, 9 Dec 2002 16:24:14 +0800 (PHT)", "msg_from": "SEGUERRA FRANCIS TED ARANAS <u98-0995@hindang.msuiit.edu.ph>", "msg_from_op": false, "msg_subject": "Re: pgsql 7.2.3 crash" }, { "msg_contents": "Joe Conway wrote:\n> The second case is usually something like an insert into the employee table \n> fires off an email to IT to create a login and security to make a badge. \n> Commonly we turn off workflows (by disabling their related triggers) in our \n> development and test databases so someone doesn't disable the CEO's login \n> when we fire him as part of our testing! I think in this scenario it is \n> better to be able to disable the trigger globally ;-)\n\nI think in this scenario it's probably better to not fire the CEO,\ngratifying as it may be!
:-)\n\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Mon, 9 Dec 2002 01:57:04 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Disabling triggers (was Re: pgsql 7.2.3 crash)" }, { "msg_contents": "thanks\n\n\n\n\n\n\nOn 9 Dec 2002, Devrim GUNDUZ wrote:\n\n> Hi,\n> \n> On Mon, 2002-12-09 at 10:24, SEGUERRA FRANCIS TED ARANAS wrote:\n> \n> > how do i port from mysql to postgresql?..\n> \n> http://techdocs.postgresql.org/#convertfrom\n> \n> Best regards,\n> .\n> \n> \n\n-- \nov3rr|d3r\n\n", "msg_date": "Tue, 10 Dec 2002 02:03:36 +0800 (PHT)", "msg_from": "SEGUERRA FRANCIS TED ARANAS <u98-0995@hindang.msuiit.edu.ph>", "msg_from_op": false, "msg_subject": "Re: Porting from MySQL to PostgreSQL (was: pgsql 7.2.3" } ]
[ { "msg_contents": "Hi everyone,\n\nThe Italian translation of the PostgreSQL \"Advocacy and Marketing\" site,\ndone by Stefano Reksten <sreksten@sdb.it> is now complete and ready for\npublic use:\n\nhttp://advocacy.postgresql.org?lang=it\n\nThanks heaps Stefano, you've put in a lot of effort and it's really\ngoing to help. :-)\n\nIn addition to this, Stefano has also volunteered to be an Italian\nlanguage contact for the PostgreSQL Advocacy and Marketing team. With\nluck we'll gain good PostgreSQL representatives for *all* of the major\nlanguages and get some nifty stuff happening.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 09 Oct 2002 12:19:34 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Italian version of the PostgreSQL \"Advocacy and Marketing\" site is\n\tready" } ]
[ { "msg_contents": "I'd say yes replication can solve lot of issues, but is there a way to do replication in postgres(active-active or active-passive)\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\nSent: Tuesday, October 08, 2002 8:27 PM\nTo: shridhar_daithankar@persistent.co.in\nCc: pgsql-hackers@postgresql.org; pgsql-general\nSubject: Re: [GENERAL] [HACKERS] Hot Backup\n\n\nShridhar Daithankar wrote:\n> On 7 Oct 2002 at 13:48, Neil Conway wrote:\n> \n> > \"Sandeep Chadha\" <sandeep@newnetco.com> writes:\n> > > Postgresql has been lacking this all along. I've installed postgres\n> > > 7.3b2 and still don't see any archive's flushed to any other\n> > > place. Please let me know how is hot backup procedure implemented in\n> > > current 7.3 beta(2) release.\n> > AFAIK no such hot backup feature has been implemented for 7.3 -- you\n> > appear to have been misinformed.\n> \n> Is replication an answer to hot backup?\n\nWe already allow hot backups using pg_dump. If you mean point-in-time\nrecovery, we have a patch for that ready for 7.4.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Wed, 9 Oct 2002 09:42:56 -0400", "msg_from": "\"Sandeep Chadha\" <sandeep@newnetco.com>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Hot Backup" }, { "msg_contents": "On Wed, Oct 09, 2002 at 09:42:56AM -0400, Sandeep Chadha wrote:\n> I'd say yes replication can solve lot of issues, but is there a way\n> to do replication in postgres(active-active or active-passive)\n\nYes. Check out the rserv code in contrib/, the (?) dbmirror code in\ncontrib/, or contact PostgreSQL, Inc for a commercial version of the\nrserv code.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 22 Oct 2002 10:29:12 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hot Backup" } ]
[ { "msg_contents": "\nI've been working on kludging a working\nfor update barrier style lock (*) for reads\nusing HeapTupleSatisfiesDirty to test\naccessibility to make the foreign keys\nwork better. I'm fairly close to getting\na testable kludge for the fk/noaction cases\nfor people to check real sequences against\n(since I'm using simple examples as I think\nof it).\n\nAt some point I'm going to want to do\nsomething that's less of a kludge which\nmight hopefully also let me remove the whole\nhack in tqual.c (right now the hack's gotten\nworse since I use the value to specify what\nkind of check to do). In addition, I'm not\n100% sure how to proceed on the\nnon-noaction/restrict cases, since I'd kind\nof want to do a dirty read to find candidate\nrows for the update/delete which gets\ninto having heap_delete fail for example\nsince the row is invisible. For the lock\nabove I made a new \"for ...\" specifier for the\nstatement to separate the behavior, but I'm\nnot sure something like that is really a good\nidea in practice and I'm a little worried\nabout changing the logic in heap_delete (etc)\nfor invisible rows in any case.\n\nSo, I'm looking for suggestions on the best\nway to proceed or comments that I'm going\nabout this entirely the wrong way... :)\n\n\n(*) - It blocks on the transaction which\nhas a real lock on the row, but does not\nitself get a persistent lock on it.\n\n", "msg_date": "Wed, 9 Oct 2002 07:16:12 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": true, "msg_subject": "Request for suggestions" }, { "msg_contents": "\n\nI wasn't particularly clear (sorry, wrote the message\n1/2 right before bed, 1/2 right after getting up) so\nI'm going to followup with details and hope that\nI'm more awake.\n\nA little background just in case there are people\nthat haven't looked.\n\nRight now, foreign key checks always default to using\nHeapTupleSatisfiesNow to check for the validity of\nrows and uses for update to do the locking. I believe\nthat we can use something like the lock suggested\nby Alex Hayard which does not actually lock the row\nbut only waits for concurrent modification that\nactually has a lock to finish, except that to do\nso would make the constraint fail, unless checks\nfor changes to the primary key actually could see\nuncommitted changes to the foreign key table. Unless\nthe old row being worked on was made by this transaction\nin which case you shouldn't need to do a dirty check.\n\n\n\nTo that end, I've put together some modifications\nfor testing on my local system (since I'm not 100%\nsure that the above is true for all cases) where\nthe primary key triggers actually use\nHeapTupleSatisfiesDirty using the\nReferentialIntegritySnapshotOverride hack (making\nit contain three values, none, use now, use dirty)\nand added a for foreign key specifier to selects\nwhich has the semantics of Alex's lock. The code\nin heap_mark4fk is called in effectively the same place\nas heap_mark4update in the execution code. Basically\nif (...) { heap_mark4update() } else { heap_mark4fk() }.\n\nHowever, the heap_mark4update code (which I cribbed\nthe mark4fk code from) doesn't like getting rows which\nHeapTupleSatisfiesUpdate says are invisible (it throws an\ninvalid tid error IIRC).\nRight now, I'm waiting for the transaction that made\nthe row to complete and returning HeapTupleUpdated if\nit rolled back and HeapTupleMayBeUpdated if it didn't,\nbut I know that's wrong.\n\nI think the logic needs to be something like:\n If the row is invisible,\n If the row has xmax==0, wait for xmin to complete\n If the transaction rolled back, ignore the row.\n Otherwise, check to see if someone else has locked it.\n If so, go back to the HeapTupleSatisfiesUpdate test\n Otherwise, work with the row as it was.\n Otherwise,\n If xmax==xmin, we want to ignore the row\n Otherwise, -- can this case even occur? --\n Wait on xmax per normal rules of heap_mark4update\nbut I'm very fuzzy on this.\n\nIn addition, at some point I'm going to have to modify\nthe actual referential actions (as opposed to no action)\nto do a similar check, which means I'm going to want\na delete or update statement which needs to wait\non uncommitted transactions to modify the rows. It looks\nlike heap_delete and heap_update also will error\non rows that HeapTupleSatisfiesUpdate says are invisible.\nFor heap_mark4fk it was reasonably safe to change the\nresult==HeapTupleInvisible case since it was new code\nthat I was adding, but I'm a bit leery about doing\nsomething similar to heap_delete or heap_update.\nIs the coding for result==HeapTupleInvisible in those\nfunctions meant as a defensive measure that shouldn't\noccur?\n\n", "msg_date": "Wed, 9 Oct 2002 08:17:28 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": true, "msg_subject": "(Followup) Request for suggestions" } ]
[ { "msg_contents": "\nJust think, that maybe a postgresql php coder (or admin if you like it),\nemail me, and give me *.php sources. Seems like most of his scripts written \nin a very insecure and lame style.\n\nBest regards.\n\n________________________________________________________________________\nThis letter has been delivered unencrypted. We'd like to remind you that\nthe full protection of e-mail correspondence is provided by S-mail\nencryption mechanisms if only both, Sender and Recipient use S-mail.\nRegister at S-mail.com: http://www.s-mail.com\n", "msg_date": "Wed, 09 Oct 2002 15:16:24 +0000", "msg_from": "Sir Mordred The Traitor <mordred@s-mail.com>", "msg_from_op": true, "msg_subject": "Just a thought" }, { "msg_contents": "On Wed, 9 Oct 2002, Sir Mordred The Traitor wrote:\n\n>\n> Just think, that maybe a postgresql php coder (or admin if you like it),\n> email me, and give me *.php sources. Seems like most of his scripts written\n> in a very insecure and lame style.\n\nProbably no worse than your writing style.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 9 Oct 2002 11:21:19 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Just a thought" } ]
[ { "msg_contents": "Sure not. I even don't argue that.\nBut i dont like that a postgresql.org could be just that easily owned.\n>On Wed, 9 Oct 2002, Sir Mordred The Traitor wrote:\n>\n>>\n>> Just think, that maybe a postgresql php coder (or admin if you like it),\n>> email me, and give me *.php sources. Seems like most of his scripts\nwritten\n>> in a very insecure and lame style.\n>\n>Probably no worse than your writing style.\n>\n>Vince.\n>--\n>==========================================================================\n>Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> http://www.camping-usa.com http://www.cloudninegifts.com\n> http://www.meanstreamradio.com http://www.unknown-artists.com\n>==========================================================================\n>\n>\n>\n>\n>\n\n\n\n\n________________________________________________________________________\nThis letter has been delivered unencrypted. We'd like to remind you that\nthe full protection of e-mail correspondence is provided by S-mail\nencryption mechanisms if only both, Sender and Recipient use S-mail.\nRegister at S-mail.com: http://www.s-mail.com\n", "msg_date": "Wed, 09 Oct 2002 15:24:45 +0000", "msg_from": "Sir Mordred The Traitor <mordred@s-mail.com>", "msg_from_op": true, "msg_subject": "Re: Just a thought" } ]
[ { "msg_contents": "I'd have to agree on most of what you said. I still think most crashes occur due to data corruption which can only be recovered by using a good backup. \n\nAnyways my problem is I have a 5 gig database. I run a cron job every hour which runs pg_dump which takes over 30 minutes to run and degrades the db performance. I was hoping for something which can solve my problem and then I don't have to take backup every hour. Is there a plan on implementing incremental backup technique for pg_dump or is it going to be the same for next one or two releases.\n\nThanks much for your time \nSandeep. \n\n-----Original Message-----\nFrom: scott.marlowe [mailto:scott.marlowe@ihs.com]\nSent: Wednesday, October 09, 2002 12:19 PM\nTo: Sandeep Chadha\nCc: Tom Lane; pgsql-hackers@postgresql.org; pgsql-general\nSubject: [GENERAL] Point in Time Recovery WAS: Hot Backup \n\n\nHi Sandeep. What you were calling Hot Backup is really called Point in \nTime Recovery (PITR). Hot Backup means making a complete backup of the \ndatabase while it is running, something Postgresql has supported for a \nvery long time.\n\nOn Mon, 7 Oct 2002, Sandeep Chadha wrote:\n\n> Hello to all the Doers of Postgres!!!\n> \n> Last time I went through forums, people spoke highly about 7.3 and its \n> capability to do hot backups. My problem is if the database goes down \n> and I lose my main data store, then I will lose all transactions back \n> to the time I did the pg_dump.\n\nLet's make it clear that this kind of failure is EXTREMELY rare on real \ndatabase servers since they almost ALL run their data sets on RAID arrays.
\nWhile it is possible to lose >1 drive at the same time and lose all your \ndatabase, it is probably more likely to have a bad memory chip corrupt \nyour data silently, or a bad query delete data it shouldn't.\n\nThat said, there IS work ongoing to provide this facility for Postgresql, \nbut I would much rather have work done on making large complex queries run \nfaster, or fix the little issues with foreign keys that cause deadlocks.\n\n> Other databases (i e Oracle) solves this by retaining their archive \n> logs in some physically separate storage. So, when you lose your data, \n> you can restore the data from back-up, and then apply your archive log, \n> and avoid losing any committed transactions. \n> \n> > Postgresql has been lacking this all along. I've installed postgres \n> 7.3b2 and still don't see any archive's flushed to any other place. \n> Please let me know how is hot backup procedure implemented in current \n> 7.3 beta(2) release.\n\nAgain, you'll get better response to your questions if you call it \"point \nin time recovery\" or pitr. Hot backup is the wrong word, and something \nPostgresql DOES have.\n\nIt also supports WALs, which stands for Write ahead logs. These files \nstore what the database is about to do before it does it.
If you're running in a RAID array, then the loss \nof your datastore should be a very remote possibility.\n\n", "msg_date": "Wed, 9 Oct 2002 12:46:18 -0400", "msg_from": "\"Sandeep Chadha\" <sandeep@newnetco.com>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Point in Time Recovery WAS: Hot Backup " }, { "msg_contents": "On Wed, 2002-10-09 at 12:46, Sandeep Chadha wrote:\n> I'd have agree on most of what you said. I still think most crashes occur due to data corruption which can only be recovered by using a good backup. \n> \n> Anyways my problem is I have a 5 gig database. I run a cron job every hour which runs pg_dump which takes over 30 minutes to run and degrades the db performance. I was hoping for something which can solve my problem and then I don't have to take backup every hour. Is there a plan on implementing incremental backup technique for pg_dump or Is it going to be same for next one or two releases.\n> \n> Thanks much For you time \n\nOh, if that's your problem then use asynchronous replication instead. It\ndoesn't remove the slow time, but will distribute the slowness across\nevery transaction rather than all at once (via creation of replication\nlogs). Things won't degrade much during the snapshot transfer itself,\nas there isn't very much work involved (differences only).\n\nNow periodically backup the secondary box. Needs diskspace, but not\nvery much power otherwise.\n\n#!/bin/sh\nwhile(true)\ndo\n\tasynchreplicate.sh\n\tpg_dumpall > \"`date`.bak\"\ndone\n\n\n\n> -----Original Message-----\n> From: scott.marlowe [mailto:scott.marlowe@ihs.com]\n> Sent: Wednesday, October 09, 2002 12:19 PM\n> To: Sandeep Chadha\n> Cc: Tom Lane; pgsql-hackers@postgresql.org; pgsql-general\n> Subject: [GENERAL] Point in Time Recovery WAS: Hot Backup \n> \n> \n> Hi Sandeep. What you were calling Hot Backup is really called Point in \n> Time Recovery (PITR).
Hot Backup means making a complete backup of the \n> database while it is running, something Postgresql has supported for a \n> very long time.\n> \n> On Mon, 7 Oct 2002, Sandeep Chadha wrote:\n> \n> > Hello to all the Doers of Postgres!!!\n> > \n> > Last time I went through forums, people spoke highly about 7.3 and its \n> > capability to do hot backups. My problem is if the database goes down \n> > and I lose my main data store, then I will lose all transactions back \n> > to the time I did the pg_dump.\n> \n> Let's make it clear that this kind of failure is EXTREMELY rare on real \n> database servers since they almost ALL run their data sets on RAID arrays. \n> While it is possible to lost >1 drive at the same time and all your \n> database, it is probably more likely to have a bad memory chip corrupt \n> your data silently, or a bad query delete data it shouldn't.\n> \n> That said, there IS work ongoing to provide this facility for Postgresql, \n> but I would much rather have work done on making large complex queries run \n> faster, or fix the little issues with foreign keys cause deadlocks.\n> \n> > Other databases (i e Oracle) solves this by retaining their archive \n> > logs in some physically separate storage. So, when you lose your data, \n> > you can restore the data from back-up, and then apply your archive log, \n> > and avoid losing any committed transactions. \n> > \n> > > Postgresql has been lacking this all along. I've installed postgres \n> > 7.3b2 and still don't see any archive's flushed to any other place. \n> > Please let me know how is hot backup procedure implemented in current \n> > 7.3 beta(2) release.\n> \n> Again, you'll get better response to your questions if you call it \"point \n> in time recovery\" or pitr. Hot backup is the wrong word, and something \n> Postgresql DOES have.\n> \n> It also supports WALs, which stands for Write ahead logs. These files \n> store what the database is about to do before it does it.
Should the \n> database crash with transactions pending, the server will come back up and \n> process the pending transactions that are in the WAL files, ensuring the \n> integrity of your database.\n> \n> Point in Time recovery is very nice, but it's the last step in many to \n> ensure a stable, coherent database, and will probably be in 7.4 or \n> somewhere around there. If you're running in a RAID array, then the loss \n> of your datastore should be a very remote possibility.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n-- \n Rod Taylor\n\n", "msg_date": "09 Oct 2002 13:11:48 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Point in Time Recovery WAS: Hot Backup" }, { "msg_contents": "Rod Taylor wrote:\n> \n<snip>\n> Oh, if thats your problem then use asynchronous replication instead.\n\nFor specific info, the contrib/rserv package does master->slave\nasynchronous replication as Rod is suggesting. From memory it was\nhaving troubles working with PostgreSQL 7.2.x, but someone recently\nsubmitted patches that make it work.\n\nThere's a HOW-TO guide that a community member wrote on setting up rserv\nwith PostgreSQL 7.0.3, although it should be practically identical for\nPostgreSQL 7.2.x (when rserv is patched to make it work).\n\nhttp://techdocs.postgresql.org/techdocs/settinguprserv.php\n\nThat could be the basis for your async replication solution.\n\nHope that helps.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> It doesn't remove the slow time, but will distribute the slowness across\n> every transaction rather than all at once (via creation of replication\n> logs).
Things won't degrade much during the snapshot transfer itself,\n> as there isn't very much work involved (differences only).\n> \n> Now periodically backup the secondary box. Needs diskspace, but not\n> very much power otherwise.\n> \n> #!/bin/sh\n> while(true)\n> do\n> asynchreplicate.sh\n> pg_dumpall > \"`date`.bak\"\n> done\n> \n> > -----Original Message-----\n> > From: scott.marlowe [mailto:scott.marlowe@ihs.com]\n> > Sent: Wednesday, October 09, 2002 12:19 PM\n> > To: Sandeep Chadha\n> > Cc: Tom Lane; pgsql-hackers@postgresql.org; pgsql-general\n> > Subject: [GENERAL] Point in Time Recovery WAS: Hot Backup\n> >\n> >\n> > Hi Sandeep. What you were calling Hot Backup is really called Point in\n> > Time Recovery (PITR). Hot Backup means making a complete backup of the\n> > database while it is running, something Postgresql has supported for a\n> > very long time.\n> >\n> > On Mon, 7 Oct 2002, Sandeep Chadha wrote:\n> >\n> > > Hello to all the Doers of Postgres!!!\n> > >\n> > > Last time I went through forums, people spoke highly about 7.3 and its\n> > > capability to do hot backups. My problem is if the database goes down\n> > > and I lose my main data store, then I will lose all transactions back\n> > > to the time I did the pg_dump.\n> >\n> > Let's make it clear that this kind of failure is EXTREMELY rare on real\n> > database servers since they almost ALL run their data sets on RAID arrays.\n> > While it is possible to lost >1 drive at the same time and all your\n> > database, it is probably more likely to have a bad memory chip corrupt\n> > your data silently, or a bad query delete data it shouldn't.\n> >\n> > That said, there IS work ongoing to provide this facility for Postgresql,\n> > but I would much rather have work done on making large complex queries run\n> > faster, or fix the little issues with foreign keys cause deadlocks.\n> >\n> > > Other databases (i e Oracle) solves this by retaining their archive\n> > > logs in some physically separate storage.
So, when you lose your data,\n> > > you can restore the data from back-up, and then apply your archive log,\n> > > and avoid losing any committed transactions.\n> > >\n> > > > Postgresql has been lacking this all along. I've installed postgres\n> > > 7.3b2 and still don't see any archive's flushed to any other place.\n> > > Please let me know how is hot backup procedure implemented in current\n> > > 7.3 beta(2) release.\n> >\n> > Again, you'll get better response to your questions if you call it \"point\n> > in time recovery\" or pitr. Hot backup is the wrong word, and something\n> > Postgresql DOES have.\n> >\n> > It also supports WALs, which stands for Write ahead logs. These files\n> > store what the database is about to do before it does it. Should the\n> > database crash with transactions pending, the server will come back up and\n> > process the pending transactions that are in the WAL files, ensuring the\n> > integrity of your database.\n> >\n> > Point in Time recovery is very nice, but it's the last step in many to\n> > ensure a stable, coherent database, and will probably be in 7.4 or\n> > somewhere around there. If you're running in a RAID array, then the loss\n> > of your datastore should be a very remote possibility.\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> --\n> Rod Taylor\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit.
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 10 Oct 2002 04:04:34 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Point in Time Recovery WAS: Hot Backup" }, { "msg_contents": "On Wed, 2002-10-09 at 14:04, Justin Clift wrote:\n> Rod Taylor wrote:\n> > \n> <snip>\n> > Oh, if thats your problem then use asynchronous replication instead.\n> \n> For specific info, the contrib/rserv package does master->slave\n\nThanks. I was having a heck of a time remembering what it was called or\neven where the DBA found it.\n\n-- \n Rod Taylor\n\n", "msg_date": "09 Oct 2002 14:13:26 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Point in Time Recovery WAS: Hot Backup" } ]
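The looping dump script quoted in this thread names its output with a bare `date`, which yields filenames containing spaces. As a small illustrative sketch (not from the thread — the prefix and timestamp format are this example's assumptions), a sortable, space-free naming scheme looks like:

```python
# Sketch of a timestamped-dump filename, as an alternative to `date`.bak.
# pg_dumpall is deliberately not invoked here; in real use the printed
# command would be run by cron or a wrapper script.
from datetime import datetime

def backup_filename(prefix="pgbackup", now=None):
    """Return a sortable, space-free name like pgbackup-20021009-124618.sql."""
    stamp = (now or datetime.now()).strftime("%Y%m%d-%H%M%S")
    return f"{prefix}-{stamp}.sql"

print('pg_dumpall > "%s"' % backup_filename())
```

Sortable names also make it easy to prune the oldest dumps with a plain lexicographic sort.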
[ { "msg_contents": "Now that we have relaxed the restriction on functions/languages, should\nwe make sure we have GRANTS for all of them, including /contrib, or\nremove them all.\n\nNot knowing what we will do for 7.4, it seems we should make sure they\nall have GRANTs. \n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 13:37:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "GRANT on functions/languages" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Now that we have relaxed the restriction on functions/languages, should\n> we make sure we have GRANTS for all of them, including /contrib, or\n> remove them all.\n\nI feel no strong need to do anything. The contrib entries that have\nexplicit GRANTs are okay and pretty future-proof; but the ones that\ndon't have 'em are not broken. We have other problems to contend with\nthat deserve more attention than this one...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Oct 2002 00:47:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GRANT on functions/languages " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Now that we have relaxed the restriction on functions/languages, should\n> > we make sure we have GRANTS for all of them, including /contrib, or\n> > remove them all.\n> \n> I feel no strong need to do anything. The contrib entries that have\n> explicit GRANTs are okay and pretty future-proof; but the ones that\n> don't have 'em are not broken. We have other problems to contend with\n> that deserve more attention than this one...\n\nI was looking for consistency so we could have things ready if we\ntighten up in 7.4.
I believe someone volunteered to fix them all so I\nfigured we should do that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 00:56:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: GRANT on functions/languages" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was looking for consistency so we could have things ready if we\n> tighten up in 7.4. I believe someone volunteered to fix them all so I\n> figured we should do that.\n\nSomeone did volunteer, but I haven't seen results. My point is that\nit's not important as things stand. If anyone wants to go through the\ncontrib functions and introduce some consistency, it would be much more\nuseful to check their strictness and volatility attributes ...
not to\n> mention converting them all to v1 call convention for portability's sake\n> ...\n\nIn my mind, the big question is whether we need things to be done in a\ncertain way now so we can tighten function/language permissions in 7.4\nif we want to. For 7.3, we had to relax because of upgrade problems,\nbut if we want to tighten for 7.4, we have to plan that now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 10:05:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: GRANT on functions/languages" } ]
[ { "msg_contents": "Hello, i've got this query that's really slow...\nFigure this:\n\ntestdb=> select now() ; select gid from bs where gid not in ( select x\nfrom z2test ); select now();\n now\n-------------------------------\n 2002-10-09 22:37:21.234627+02\n(1 row)\n\n gid\n----------\n <lotsa rows>\n(524 rows)\n now\n-------------------------------\n 2002-10-09 23:20:53.227844+02\n(1 row)\n\nThat's 45 minutes i don't wanna spend in there...\nI got indexes:\n\ntestdb=> \\d bs_gid_idx\n Index \"bs_gid_idx\"\n Column | Type\n--------+-----------------------\n gid | character varying(16)\n online | smallint\nbtree\n\ntestdb=> \\d z2test_x_idx;\n Index \"z2test_x_idx\"\n Column | Type\n--------+-----------------------\n x | character varying(16)\nbtree\n\nRowcounts are:\n\ntestdb=> select count(*) from bs ; select count(*) from z2test ;\n count\n-------\n 25376\n(1 row)\n\n count\n-------\n 19329\n(1 row)\n\nThe bs table has many other columns besides the gid one, the z2test\ntable only has the x column.\n\nHow can i speed this query up?\nIt never scans by the indexes.\nI know it's a lot of iterations anyway i do it, but this is too damn\nslow.\n\nI can't profile anything at this box, because it's in production state,\nbut if you really want me to, i'll do it tomorrow on another box.\n\nMagnus\n\n--\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n", "msg_date": "Wed, 9 Oct 2002 23:34:16 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Damn slow query" }, { "msg_contents": "\nOn Wed, 9 Oct 2002, Magnus Naeslund(f) wrote:\n\n> Hello, i've got this query that's really slow...\n> Figure this:\n>\n> testdb=> select now() ; select gid from bs where gid not in ( select x\n> from z2test ); select now();\n\nPer FAQ suggestion, try something like\nselect gid from bs where not exists (select * from z2test where\n
z2test.x=bs.gid);\nto see if it is faster.\n\n\n", "msg_date": "Wed, 9 Oct 2002 15:08:33 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Damn slow query" }, { "msg_contents": "Magnus Naeslund(f) wrote:\n> Hello, i've got this query that's really slow...\n> Figure this:\n> \n> testdb=> select now() ; select gid from bs where gid not in ( select x\n> from z2test ); select now();\n\n\"IN (subselect)\" is notoriously slow (in fact it is an FAQ). Can you rewrite \nthis as:\n\nselect b.gid from bs b where not exists (select 1 from z2test z where z.x = \nb.gid);\n\nor possibly:\n\nselect b.gid from bs b left join z2test z on z.x = b.gid where z.x IS NULL;\n\nHTH,\n\nJoe\n\n", "msg_date": "Wed, 09 Oct 2002 15:08:47 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Damn slow query" }, { "msg_contents": "Joe Conway <mail@joeconway.com> wrote:\n> \"IN (subselect)\" is notoriously slow (in fact it is an FAQ).
Can you\n> rewrite this as:\n>\n\n...\n\nStephan Szabo <sszabo@megazone23.bigpanda.com> wrote:\n> Per FAQ suggestion, try something like\n\n...\n\nThanks a lot, below are the results on your suggestions, quite a\ndramatic difference (but this is another box, faster, and running 7.3b2\nso the 45 minutes doesn't hold here, but it took more than 10 minutes\nbefore i stopped the original query).\n\nIs this a todo item, or should every user figure this out (yeah i know\ni should have read the FAQ when it went so totally bad).\nThe NOT IN it seems quite natural here, but then again, i don't think of\nthe db as you do :)\n\nmag=> \\timing\nTiming is on.\nmag=> explain analyze select count(gid) from bs where not exists (\nselect * from z2test where z2test.x=bs.gid );\n Aggregate (cost=129182.18..129182.18 rows=1 width=9) (actual\ntime=590.90..590.90 rows=1 loops=1)\n -> Seq Scan on bs (cost=0.00..129150.46 rows=12688 width=9) (actual\ntime=42.57..590.46 rows=524 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using z2temp_x_idx on z2test (cost=0.00..5.07\nrows=1 width=9) (actual time=0.02..0.02 rows=1 loops=25376)\n Index Cond: (x = $0)\n Total runtime: 591.01 msec\n\nTime: 592.25 ms\n\nmag=> EXPLAIN analyze select count(b.gid) from bs b left join z2test z\non z.x = b.gid where z.x IS NULL;\n Aggregate (cost=1703.65..1703.65 rows=1 width=18) (actual\ntime=370.31..370.31 rows=1 loops=1)\n -> Hash Join (cost=346.61..1640.21 rows=25376 width=18) (actual\ntime=75.45..369.91 rows=524 loops=1)\n Hash Cond: (\"outer\".gid = \"inner\".x)\n Filter: (\"inner\".x IS NULL)\n -> Seq Scan on bs b (cost=0.00..595.76 rows=25376 width=9)\n(actual time=0.01..34.20 rows=25376 loops=1)\n -> Hash (cost=298.29..298.29 rows=19329 width=9) (actual\ntime=43.82..43.82 rows=0 loops=1)\n -> Seq Scan on z2test z (cost=0.00..298.29 rows=19329\nwidth=9) (actual time=0.02..22.69 rows=19329 loops=1)\n Total runtime: 370.42 msec\n\nTime: 371.90 ms\nmag=>\n\n\nMagnus\n\n\n", "msg_date": "Thu,
10 Oct 2002 00:30:08 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Re: Damn slow query" }, { "msg_contents": "Magnus Naeslund(f) wrote:\n> Joe Conway <mail@joeconway.com> wrote:\n> > \"IN (subselect)\" is notoriously slow (in fact it is an FAQ). Can you\n> > rewrite this as:\n> >\n> \n> ...\n> \n> Stephan Szabo <sszabo@megazone23.bigpanda.com> wrote:\n> > Per FAQ suggestion, try something like\n> \n> ...\n> \n> Thanks alot, below are the results on your suggestions, quite an\n> dramatic differance (but this is another box, faster, and running 7.3b2\n> so the 45 minutes doesn't hold here, but it took more than 10 minutes\n> before i stopped the original query).\n> \n> Is this an todo item, or should every user figure this out (yeah i know\n> i should have read the FAQ when it went so totally bad).\n> The NOT IN it seems quite natural here, but then again, i don't think as\n> the db as you do :)\n\nWe already have a TODO item:\n\n\t* Allow Subplans to use efficient joins(hash, merge) with upper variable\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 18:33:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damn slow query" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n>\n> We already have a TODO item:\n>\n> * Allow Subplans to use efficient joins(hash, merge) with upper\n> variable\n\nCool.\nOne thing to note here is that the JOIN query that Joe suggested is both\nfaster than the subselect thing (no surprise) but also doesn't care if\nz2test has an index on it or not.\nThe subselect query started taking a huge amount of time again if i\ndropped the z2test_x_idx ...\n\nSo if the todo could somehow figure out that that subselect should be a\nJOIN instead of a NOT EXISTS query, that would be great, because the\nindex on z2test isn't that super-obvious (i think, because i know the\ndata is tiny).\n\nMagnus\n\n--\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n", "msg_date": "Thu, 10 Oct 2002 00:39:44 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Re: Damn slow query" }, { "msg_contents": "Magnus Naeslund(f) wrote:\n> One thing to note here is that the JOIN query that Joe suggested is both\n> faster than the subselect thing (no suprise) but also don't care if\n> z2test has an index on it or not.\n\nIt's worth noting though that JOIN is not always the fastest method. I've \nfound situations where NOT EXISTS was significantly faster than the LEFT JOIN \nmethod (although both are usually orders of magnitude faster than NOT IN).\n\nJoe\n\n", "msg_date": "Wed, 09 Oct 2002 15:45:41 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Damn slow query" } ]
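The three formulations compared in this thread can be checked for result equivalence with a self-contained sketch. SQLite is used here purely for convenience (its planner behaves nothing like PostgreSQL's, so only the row results carry over, not the timings):

```python
# Demonstrates that NOT IN, NOT EXISTS, and LEFT JOIN ... IS NULL return
# the same anti-join rows -- provided the subquery column has no NULLs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bs (gid TEXT);
    CREATE TABLE z2test (x TEXT);
    INSERT INTO bs VALUES ('a'), ('b'), ('c'), ('d');
    INSERT INTO z2test VALUES ('b'), ('d');
""")

not_in = conn.execute(
    "SELECT gid FROM bs WHERE gid NOT IN (SELECT x FROM z2test) ORDER BY gid"
).fetchall()
not_exists = conn.execute(
    "SELECT gid FROM bs WHERE NOT EXISTS "
    "(SELECT 1 FROM z2test WHERE z2test.x = bs.gid) ORDER BY gid"
).fetchall()
left_join = conn.execute(
    "SELECT b.gid FROM bs b LEFT JOIN z2test z ON z.x = b.gid "
    "WHERE z.x IS NULL ORDER BY b.gid"
).fetchall()

# All three yield ('a',) and ('c',).  Caveat: if z2test.x contained a NULL,
# NOT IN would return no rows at all (three-valued logic), while the
# NOT EXISTS and LEFT JOIN forms would still return the anti-join rows.
print(not_in, not_exists, left_join)
```

This NULL caveat is one reason the NOT EXISTS / LEFT JOIN rewrites are not blind drop-in replacements and should be chosen with the data in mind.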
[ { "msg_contents": "dear hacker,\r\nhello.\r\nI want to know how to build a function of my own which returns rows of resultset, not just a row.\r\nCan anybody help me?\r\nThank you in advance.\r\n\r\nJinqiang Han\r\n", "msg_date": "Thu, 10 Oct 2002 12:11:55 +0800", "msg_from": "=?GB2312?Q?=BA=AB=BD=FC=C7=BF?= <jqhan@db.pku.edu.cn>", "msg_from_op": true, "msg_subject": "inquiry about multi-row resultset in functions" }, { "msg_contents": "??? wrote:\n > dear hacker, hello. I want to know how to build a function of my own which\n > returns rows of resultset, not just a row.\n > Can anybody help me? Thank you in advance.\n >\n\nIt is possible, but not very user friendly if you are using PostgreSQL 7.2.x\nor before. See contrib/dblink/dblink.c for an example of how to write a C\nfunction to do this. It is also possible in SQL language functions, but very\ninefficient and difficult to use. Search the mail archives for examples. It is\nnot possible at all with PL/pgSQL in 7.2.x (or earlier).\n\nIn 7.3, which is currently in beta testing, creating a function returning a\nresultset (also known as table functions) is much easier. Table functions can\nbe created using C, SQL, or PL/pgSQL languages. See:\nhttp://developer.postgresql.org/docs/postgres/xfunc-tablefunctions.html\nhttp://developer.postgresql.org/docs/postgres/xfunc-sql.html\nhttp://developer.postgresql.org/docs/postgres/xfunc-c.html\nhttp://developer.postgresql.org/docs/postgres/plpgsql-control-structures.html\n\nHTH,\n\nJoe\n\n\n\n\n", "msg_date": "Wed, 09 Oct 2002 21:27:28 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: inquiry about multi-row resultset in functions" } ]
[ { "msg_contents": "Hi,\n\nI just learned that bison 1.50 was released on Oct. 5th and it indeed\ncompiles ecpg just nicely on my machine. Could we please install this on\nour main machine and merge the ecpg.big branch back into main?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 10 Oct 2002 09:05:26 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Bison 1.50 was released" }, { "msg_contents": "Can we please hold off until bison 1.50 becomes a de facto standard? It will be a\nmatter of weeks before distros offer this as an upgrade package let\nalone months before distros offer this as a standard. Seems like these\nchanges are ideal for a release after next (7.5/7.6) as enough time will\nhave gone by for it to be much more commonly found. By not jumping on the\nwagon now, it will also allow more time for bugs in the wild to be\ncaught and fixed before we force it onto the masses.\n\nGreg\n\n\nOn Thu, 2002-10-10 at 02:05, Michael Meskes wrote:\n> Hi,\n> \n> I just learned that bison 1.50 was released on Oct. 5th and it indeed\n> compiles ecpg just nicely on my machine. Could we please install this on\n> our main machine and merge the ecpg.big branch back into main?\n> \n> Michael\n> -- \n> Michael Meskes\n> Michael@Fam-Meskes.De\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux!
Use PostgreSQL!\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster", "msg_date": "10 Oct 2002 08:44:40 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Bison 1.50 was released" }, { "msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> Can we please hold off until bison 1.50 becomes a defacto?\n\nWe don't have a whole lot of choice, unless you prefer releasing a\nbroken or crippled ecpg with 7.3.\n\nIn practice this only affects people who pull sources from CVS, anyway.\nIf you use a tarball then you'll get prebuilt bison output.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Oct 2002 10:18:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bison 1.50 was released " }, { "msg_contents": "Greg Copeland wrote:\n-- Start of PGP signed section.\n> Can we please hold off until bison 1.50 becomes a defacto? It will be a\n> matter of weeks before distros offer this as an upgrade package let\n> alone months before distros offer this as a standard. Seems like these\n> changes are ideal for a release after next (7.5/7.6) as enough time will\n> of gone by for it to be much more commonly found. By not jumping on the\n> wagon now, it will also allow more time for bugs in the wild to be\n> caught and fixed before we force it onto the masses.\n\nNo, we can't wait. A fully functional ecpg has exceeded the bison table\nsizes, so we need a new version now. Remember we ship pre-bison'ed\nfiles in the tarball, so only developers will need a new bison.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 10:28:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bison 1.50 was released" }, { "msg_contents": "Oh, that's right. I had forgotten that it wasn't for general PostgreSQL\nuse. Since it's a ecpg deal only, I guess I remove my objection.\n\n\nGreg\n\n\nOn Thu, 2002-10-10 at 09:18, Tom Lane wrote:\n> Greg Copeland <greg@CopelandConsulting.Net> writes:\n> > Can we please hold off until bison 1.50 becomes a defacto?\n> \n> We don't have a whole lot of choice, unless you prefer releasing a\n> broken or crippled ecpg with 7.3.\n> \n> In practice this only affects people who pull sources from CVS, anyway.\n> If you use a tarball then you'll get prebuilt bison output.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html", "msg_date": "10 Oct 2002 09:39:02 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Bison 1.50 was released" } ]
[ { "msg_contents": " \r\nHello,\r\nwhich global variable is the path of postgresql binary? such as /usr/local/pgsql/bin\r\nI only find pg_pathbname stand for postgres full pathname.\r\nthank you in advance.\r\nJinqiang Han\r\n", "msg_date": "Thu, 10 Oct 2002 20:12:50 +0800", "msg_from": "=?GB2312?Q?=BA=AB=BD=FC=C7=BF?= <jqhan@db.pku.edu.cn>", "msg_from_op": true, "msg_subject": "help" } ]
[ { "msg_contents": "\nHere are the open items. There are only a few major ones left. I would\nlike to get those done so we can shoot for a final release perhaps at\nthe end of October.\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://momjian.postgresql.org/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSchema handling - ready? interfaces? client apps?\nDrop column handling - ready for all clients, apps?\nFix BeOS, QNX4 ports\nGet bison upgrade on postgresql.org for ecpg only (Marc)\nFix vacuum btree bug (Tom)\nFix client apps for autocommit = off\nReturn last command of INSTEAD rule for tuple count/oid/tag for rules, SPI\nAdd schema dump option to pg_dump\nAdd/remove GRANT EXECUTE to all /contrib functions?\nFix pg_dump to handle 64-bit off_t offsets for custom format\n\nOn Going\n--------\nSecurity audit\n\n\nDocumentation Changes\n---------------------\nMove documation to gborg for moved projects\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 10:04:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Open items" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Schema handling - ready? interfaces? client apps?\n> Drop column handling - ready for all clients, apps?\n\nHow will delaying 7.3 remedy either of these issues?\n\n> Get bison upgrade on postgresql.org for ecpg only (Marc)\n\nNow that bison 1.50 is out, this should probably be \"upgrade bison on\npostgresql.org, merge ecpg branch\".\n\n> Return last command of INSTEAD rule for tuple count/oid/tag for\n> rules, SPI\n\nCan someone summarize what the consensus on this issue is? 
I can take\na look at implementing it, I just lost track of what the final\nconclusion was...\n\n> Add schema dump option to pg_dump\n\nWhy do we need this for 7.3?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "10 Oct 2002 13:34:14 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Open items" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Schema handling - ready? interfaces? client apps?\n> > Drop column handling - ready for all clients, apps?\n> \n> How will delaying 7.3 remedy either of these issues?\n> \n> > Get bison upgrade on postgresql.org for ecpg only (Marc)\n> \n> Now that bison 1.50 is out, this should probably be \"upgrade bison on\n> postgresql.org, merge ecpg branch\".\n> \n> > Return last command of INSTEAD rule for tuple count/oid/tag for\n> > rules, SPI\n> \n> Can someone summarize what the consensus on this issue is? I can take\n> a look at implementing it, I just lost track of what the final\n> conclusion was...\n\nConclusion was to return last INSTEAD rule with the same tag as the\noriginal query.\n\n> > Add schema dump option to pg_dump\n> \n> Why do we need this for 7.3?\n\nIt is probably something we should have in the release where we are\nadding schemas, but it is not required.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 14:41:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Open items" } ]
[ { "msg_contents": "\nI'm selecting a huge ResultSet from our database- about one million rows,\nwith one of the fields being varchar(500). I get an out of memory error from\njava.\n\nIf the whole ResultSet gets stashed in memory, this isn't really surprising,\nbut I'm wondering why this happens (if it does), rather than a subset around\nthe current record being cached and other rows being retrieved as needed.\n\nIf it turns out that there are good reasons for it to all be in memory, then\nmy question is whether there is a better approach that people typically use\nin this situation. For now, I'm simply breaking up the select into smaller\nchunks, but that approach won't be satisfactory in the long run.\n\nThanks\n\n-Nick\n\n--------------------------------------------------------------------------\nNick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax 1.765.962.9788\nRay Ontko & Co. Software Consulting Services http://www.ontko.com/\n\n", "msg_date": "Thu, 10 Oct 2002 10:24:49 -0500", "msg_from": "\"Nick Fankhauser\" <nickf@ontko.com>", "msg_from_op": true, "msg_subject": "Out of memory error on huge resultset" }, { "msg_contents": "\nNick,\n\nUse a cursor, the current driver doesn't support caching, the backend\ngives you everything you ask for, you can't just say you want a limited\nset.\n\nSo if you use cursors you can fetch a subset\n\nDave\nOn Thu, 2002-10-10 at 11:24, Nick Fankhauser wrote:\n> \n> I'm selecting a huge ResultSet from our database- about one million rows,\n> with one of the fields being varchar(500). 
I get an out of memory error from\n> java.\n> \n> If the whole ResultSet gets stashed in memory, this isn't really surprising,\n> but I'm wondering why this happens (if it does), rather than a subset around\n> the current record being cached and other rows being retrieved as needed.\n> \n> If it turns out that there are good reasons for it to all be in memory, then\n> my question is whether there is a better approach that people typically use\n> in this situation. For now, I'm simply breaking up the select into smaller\n> chunks, but that approach won't be satisfactory in the long run.\n> \n> Thanks\n> \n> -Nick\n> \n> --------------------------------------------------------------------------\n> Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax 1.765.962.9788\n> Ray Ontko & Co. Software Consulting Services http://www.ontko.com/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n\n", "msg_date": "10 Oct 2002 11:35:48 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Nick,\n\nThis has been discussed before on this list many times. But the short \nanswer is that that is how the postgres server handles queries. If you \nissue a query the server will return the entire result. (try the same \nquery in psql and you will have the same problem). To work around this \nyou can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE \nsql commands for postgres).\n\nthanks,\n--Barry\n\n\nNick Fankhauser wrote:\n> I'm selecting a huge ResultSet from our database- about one million rows,\n> with one of the fields being varchar(500). 
I get an out of memory error from\n> java.\n> \n> If the whole ResultSet gets stashed in memory, this isn't really surprising,\n> but I'm wondering why this happens (if it does), rather than a subset around\n> the current record being cached and other rows being retrieved as needed.\n> \n> If it turns out that there are good reasons for it to all be in memory, then\n> my question is whether there is a better approach that people typically use\n> in this situation. For now, I'm simply breaking up the select into smaller\n> chunks, but that approach won't be satisfactory in the long run.\n> \n> Thanks\n> \n> -Nick\n> \n> --------------------------------------------------------------------------\n> Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax 1.765.962.9788\n> Ray Ontko & Co. Software Consulting Services http://www.ontko.com/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n\n", "msg_date": "Thu, 10 Oct 2002 09:40:19 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Barry,\n Is it true ?\nI create table with one column varchar(500) and enter 1 milion rows with \nlength 10-20 character.JDBC query 'select * from a' get error 'out of \nmemory', but psql not.\nI insert 8 milion rows and psql work fine yet (slow, but work)\n\nIn C library is 'execute query' without fetch - in jdbc execute fetch all rows \nand this is problem - I think that executequery must prepare query and fetch \n(ResultSet.next or ...) must fetch only fetchSize rows.\nI am not sure, but I think that is problem with jdbc, not postgresql \nHackers ? 
\nDoes psql fetch all rows and if not how many ?\nCan I change fetch size in psql ?\nCURSOR , FETCH and MOVE isn't solution.\nIf I use jdbc in third-party IDE, I can't force this solution\n\nregards\n\nOn Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> Nick,\n>\n> This has been discussed before on this list many times. But the short\n> answer is that that is how the postgres server handles queries. If you\n> issue a query the server will return the entire result. (try the same\n> query in psql and you will have the same problem). To work around this\n> you can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE\n> sql commands for postgres).\n>\n> thanks,\n> --Barry\n>\n> Nick Fankhauser wrote:\n> > I'm selecting a huge ResultSet from our database- about one million rows,\n> > with one of the fields being varchar(500). I get an out of memory error\n> > from java.\n> >\n> > If the whole ResultSet gets stashed in memory, this isn't really\n> > surprising, but I'm wondering why this happens (if it does), rather than\n> > a subset around the current record being cached and other rows being\n> > retrieved as needed.\n> >\n> > If it turns out that there are good reasons for it to all be in memory,\n> > then my question is whether there is a better approach that people\n> > typically use in this situation. For now, I'm simply breaking up the\n> > select into smaller chunks, but that approach won't be satisfactory in\n> > the long run.\n> >\n> > Thanks\n> >\n> > -Nick\n> >\n> > -------------------------------------------------------------------------\n> >- Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax\n> > 1.765.962.9788 Ray Ontko & Co. 
Software Consulting Services \n> > http://www.ontko.com/\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Fri, 11 Oct 2002 14:27:40 +0200", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "\nCheck out:\n\n http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n\nMySQL AB compares MySQL with PostgreSQL.\n\nQuoted from one page\n> Because we couldn't get vacuum() to work reliable with PostgreSQL 7.1.1,\n> we haven't been able to generate a --fast version of the benchmarks yet\n> (where we would have done a vacuum() at critical places in the benchmark\n> to get better performance for PostgreSQL). We will do a new run of the\n> benchmarks as soon as the PostgreSQL developers can point out what we\n> have done wrong or have fixed vacuum() so that it works again.\n\nand from another.\n\n> Drawbacks with PostgreSQL compared to MySQL Server:\n>\n> VACUUM makes PostgreSQL hard to use in a 24/7 environment.\n\nThey also state that they have more sophisticated ALTER TABLE...\n\nOnly usable feature in their ALTER TABLE that doesn't (yet) exist in\nPostgreSQL was changing column order (ok, the order by in table creation\ncould be nice), and that's still almost purely cosmetic. Anyway, I could\nhave used that command yesterday. Could this be added to pgsql.\n\nMySQL supports data compression between front and back ends. 
This could be\neasily implemented, or is it already supported?\n\nI think all the other statements were misleading in the sense, that they\ncompared their newest product with PostgreSQL 7.1.1.\n\nThere's also following line:\n\n> PostgreSQL currently offers the following advantages over MySQL Server:\n\nAfter which there's only one empty line.\n\n> Note that because we know the MySQL road map, we have included in the\n> following table the version when MySQL Server should support this\n> feature. Unfortunately we couldn't do this for\n> previous comparisons, because we don't know the PostgreSQL roadmap.\n\nThey could be provided one... ;-)\n\n> Upgrading MySQL Server is painless. When you are upgrading MySQL Server,\n> you don't need to dump/restore your data, as you have to do with most\n> PostgreSQL upgrades.\n\nOk... this is true, but not so hard - yesterday I installed 7.3b2 onto my\nlinux box.\n\nOf course PostgreSQL isn't yet as fast as it could be. ;)\n\n-- \nAntti Haapala\n\n", "msg_date": "Fri, 11 Oct 2002 16:20:22 +0300 (EEST)", "msg_from": "Antti Haapala <antti.haapala@iki.fi>", "msg_from_op": false, "msg_subject": "MySQL vs PostgreSQL." }, { "msg_contents": "On Fri, 2002-10-11 at 09:20, Antti Haapala wrote:\n> \n> Check out:\n> \n> http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n> \n> MySQL AB compares MySQL with PostgreSQL.\n\nI wouldn't look too far into these at all. 
I've tried to get\n' \" as identifier quote (ANSI SQL) ' corrected on the crash-me pages for\nus a couple of times (they say we don't support it for some reason).\n\nI've not looked, but I thought 7.1 supported rename table as well.\n\nAnyway, max table row length was wrong with 7.1 wrong too unless I'm\nconfused as to what a blob is (is text and varchar a blob -- what about\nyour own 10Mb fixed length datatype -- how about a huge array of\nintegers if the previous are considered blobs?)\n\n-- \n Rod Taylor\n\n", "msg_date": "11 Oct 2002 09:36:52 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." }, { "msg_contents": "On 11 Oct 2002 at 16:20, Antti Haapala wrote:\n\n> Check out:\n> http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n\nWell, I guess there are many threads on this. You can dig around archives..\n> > Upgrading MySQL Server is painless. When you are upgrading MySQL Server,\n> > you don't need to dump/restore your data, as you have to do with most\n> > PostgreSQL upgrades.\n> \n> Ok... this is true, but not so hard - yesterday I installed 7.3b2 onto my\n> linux box.\n\nWell, that remains as a point. Imagine a 100GB database on a 150GB disk array. \nHow do you dump and reload? In place conversion of data is an absolute \nnecessary feature and it's already on TODO.\n\n \n> Of course PostgreSQL isn't yet as fast as it could be. ;)\n\nCheck few posts I have made in last three weeks. You will find that postgresql \nis fast enough to surpass mysql in what are considered as mysql strongholds. Of \ncourse it's not a handy win but for sure, postgresql is not slow.\n\nAnd for vacuum thing, I have written a autovacuum daemon that can automatically \nvacuum databases depending upon their activity. Check it at \ngborg.postgresql.org. (I can't imagine this as an advertisement of myself but \nlooks like the one)\n\nLet thread be rested. 
Postgresql certaily needs some maketing hand but refuting \nclaims in that article is not the best way to start it. I guess most hackers \nwould agree with this..\n\n\nBye\n Shridhar\n\n--\nCat, n.:\tLapwarmer with built-in buzzer.\n\n", "msg_date": "Fri, 11 Oct 2002 19:08:23 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." }, { "msg_contents": "No disadvantage, in fact that is what we would like to do. \n\n\nsetFetchSize(size) turns on cursor support, otherwise fetch normally\n\nDave\n\nOn Fri, 2002-10-11 at 10:30, Aaron Mulder wrote:\n> \tWhat would be the disadvantage of making the JDBC driver use a \n> cursor under the covers (always)? Is it significantly slower or more \n> resource-intensive than fetching all the data at once? Certainly it seems \n> like it would save memory in some cases.\n> \n> Aaron\n> \n> On 10 Oct 2002, Dave Cramer wrote:\n> > Nick,\n> > \n> > Use a cursor, the current driver doesn't support caching, the backend\n> > gives you everything you ask for, you can't just say you want a limited\n> > set.\n> > \n> > So if you use cursors you can fetch a subset\n> > \n> > Dave\n> > On Thu, 2002-10-10 at 11:24, Nick Fankhauser wrote:\n> > > \n> > > I'm selecting a huge ResultSet from our database- about one million rows,\n> > > with one of the fields being varchar(500). I get an out of memory error from\n> > > java.\n> > > \n> > > If the whole ResultSet gets stashed in memory, this isn't really surprising,\n> > > but I'm wondering why this happens (if it does), rather than a subset around\n> > > the current record being cached and other rows being retrieved as needed.\n> > > \n> > > If it turns out that there are good reasons for it to all be in memory, then\n> > > my question is whether there is a better approach that people typically use\n> > > in this situation. 
For now, I'm simply breaking up the select into smaller\n> > > chunks, but that approach won't be satisfactory in the long run.\n> > > \n> > > Thanks\n> > > \n> > > -Nick\n> > > \n> > > --------------------------------------------------------------------------\n> > > Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax 1.765.962.9788\n> > > Ray Ontko & Co. Software Consulting Services http://www.ontko.com/\n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > > \n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > \n> > > \n> > \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n\n", "msg_date": "11 Oct 2002 10:30:30 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "\tWhat would be the disadvantage of making the JDBC driver use a \ncursor under the covers (always)? Is it significantly slower or more \nresource-intensive than fetching all the data at once? Certainly it seems \nlike it would save memory in some cases.\n\nAaron\n\nOn 10 Oct 2002, Dave Cramer wrote:\n> Nick,\n> \n> Use a cursor, the current driver doesn't support caching, the backend\n> gives you everything you ask for, you can't just say you want a limited\n> set.\n> \n> So if you use cursors you can fetch a subset\n> \n> Dave\n> On Thu, 2002-10-10 at 11:24, Nick Fankhauser wrote:\n> > \n> > I'm selecting a huge ResultSet from our database- about one million rows,\n> > with one of the fields being varchar(500). 
I get an out of memory error from\n> > java.\n> > \n> > If the whole ResultSet gets stashed in memory, this isn't really surprising,\n> > but I'm wondering why this happens (if it does), rather than a subset around\n> > the current record being cached and other rows being retrieved as needed.\n> > \n> > If it turns out that there are good reasons for it to all be in memory, then\n> > my question is whether there is a better approach that people typically use\n> > in this situation. For now, I'm simply breaking up the select into smaller\n> > chunks, but that approach won't be satisfactory in the long run.\n> > \n> > Thanks\n> > \n> > -Nick\n> > \n> > --------------------------------------------------------------------------\n> > Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax 1.765.962.9788\n> > Ray Ontko & Co. Software Consulting Services http://www.ontko.com/\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> > \n> > \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Fri, 11 Oct 2002 10:30:49 -0400 (EDT)", "msg_from": "Aaron Mulder <ammulder@alumni.princeton.edu>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "On Fri, 2002-10-11 at 08:20, Antti Haapala wrote:\n> Quoted from one page\n> > Because we couldn't get vacuum() to work reliable with PostgreSQL 7.1.1,\n\nI have little respect for the MySQL advocacy guys. They purposely\nspread misinformation. They always compare their leading edge alpha\nsoftware against Postgres' year+ old stable versions. In some cases,\nI've seen them compare their alpha (4.x) software against 7.0. 
Very sad\nthat these people can't even attempt to be honest.\n\nIn the case above, since they are comparing 4.x, they should be\ncomparing it to 7.x at least. It's also very sad that their testers\ndon't seem to even understand something as simple as cron. If they\ncan't understand something as simple as cron, I fear any conclusions\nthey may arrive at throughout their testing (destined to be\nincorrect/invalid).\n\n> MySQL supports data compression between front and back ends. This could be\n> easily implemented, or is it already supported?\n\nMammoth has such a feature...or at least it's been in development for a\nwhile. If I understood them correctly, it will be donated back to core\nsometime in the 7.5 or 7.7 series. Last I heard, their results were\nabsolutely wonderful.\n\n> \n> I think all the other statements were misleading in the sense, that they\n> compared their newest product with PostgreSQL 7.1.1.\n\nYa, historically, they go out of their way to ensure unfair\ncomparisons. I have no respect for them.\n\n> \n> They could be provided one... ;-)\n\nIn other words, they need a list of features that they can one day hope\nto add to MySQL.\n\n> \n> > Upgrading MySQL Server is painless. When you are upgrading MySQL Server,\n> > you don't need to dump/restore your data, as you have to do with most\n> > PostgreSQL upgrades.\n> \n> Ok... this is true, but not so hard - yesterday I installed 7.3b2 onto my\n> linux box.\n> \n> Of course PostgreSQL isn't yet as fast as it could be. ;)\n> \n\nI consider this par for the course. This is something I've had to do\nwith Sybase, Oracle and MSSQL.\n\nGreg", "msg_date": "11 Oct 2002 09:42:12 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." 
}, { "msg_contents": "Dave Cramer <Dave@micro-automation.net> wrote:\n\n> No disadvantage, in fact that is what we would like to do.\n>\n>\n> setFetchSize(size) turns on cursor support, otherwise fetch normally\n>\n> Dave\n>\n> On Fri, 2002-10-11 at 10:30, Aaron Mulder wrote:\n> > What would be the disadvantage of making the JDBC driver use a\n> > cursor under the covers (always)? Is it significantly slower or more\n> > resource-intensive than fetching all the data at once? Certainly it\nseems\n> > like it would save memory in some cases.\n> >\n> > Aaron\n\nWell, using a cursor based result set *always* is not going to work. Cursors\nwill not be held over a commit, whereas a buffer result set will. So the\nsetFetchSize..\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Fri, 11 Oct 2002 16:44:36 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Michael,\n\nYou are correct, commit will effectively close the cursor. \n\nThis is the only way to deal with large result sets however.\n\nDave\nOn Fri, 2002-10-11 at 10:44, Michael Paesold wrote:\n> Dave Cramer <Dave@micro-automation.net> wrote:\n> \n> > No disadvantage, in fact that is what we would like to do.\n> >\n> >\n> > setFetchSize(size) turns on cursor support, otherwise fetch normally\n> >\n> > Dave\n> >\n> > On Fri, 2002-10-11 at 10:30, Aaron Mulder wrote:\n> > > What would be the disadvantage of making the JDBC driver use a\n> > > cursor under the covers (always)? Is it significantly slower or more\n> > > resource-intensive than fetching all the data at once? Certainly it\n> seems\n> > > like it would save memory in some cases.\n> > >\n> > > Aaron\n> \n> Well, using a cursor based result set *always* is not going to work. Cursors\n> will not be held over a commit, whereas a buffer result set will. 
So the\n> setFetchSize..\n> \n> Regards,\n> Michael Paesold\n> \n> \n\n\n", "msg_date": "11 Oct 2002 10:48:41 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Dave Cramer <Dave@micro-automation.net> writes:\n\n> No disadvantage, in fact that is what we would like to do.\n> \n> \n> setFetchSize(size) turns on cursor support, otherwise fetch\n> normally\n\nI love PostgreSQL's default behaviour: it's great to know\nthat a server side cursor resource is probably not hanging around for\nsimple querys.\n\n\n\nNic\n\n", "msg_date": "11 Oct 2002 15:52:18 +0100", "msg_from": "nferrier@tapsellferrier.co.uk", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Just so you know this isn't implemented yet. My reference to\nsetFetchSize below was just a suggestion as to how to implement it\n\nDave\nOn Fri, 2002-10-11 at 10:52, nferrier@tapsellferrier.co.uk wrote:\n> Dave Cramer <Dave@micro-automation.net> writes:\n> \n> > No disadvantage, in fact that is what we would like to do.\n> > \n> > \n> > setFetchSize(size) turns on cursor support, otherwise fetch\n> > normally\n> \n> I love PostgreSQL's default behaviour: it's great to know\n> that a server side cursor resource is probably not hanging around for\n> simple querys.\n> \n> \n> \n> Nic\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n", "msg_date": "11 Oct 2002 10:52:34 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Nic,\n\nThat would be great! 
\n\nDave\nOn Fri, 2002-10-11 at 10:57, nferrier@tapsellferrier.co.uk wrote:\n> Dave Cramer <Dave@micro-automation.net> writes:\n> \n> > Just so you know this isn't implemented yet. My reference to\n> > setFetchSize below was just a suggestion as to how to implement it\n> \n> I'll do it if you like.\n> \n> \n> Nic\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n\n", "msg_date": "11 Oct 2002 10:55:54 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Rod Taylor wrote:\n> \n> On Fri, 2002-10-11 at 09:20, Antti Haapala wrote:\n> >\n> > Check out:\n> >\n> > http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n> >\n> > MySQL AB compares MySQL with PostgreSQL.\n> \n> I wouldn't look too far into these at all. I've tried to get\n> ' \" as identifier quote (ANSI SQL) ' corrected on the crash-me pages for\n> us a couple of times (they say we don't support it for some reason).\n\nIt's once again the typical MySQL propaganda. As usual they compare a\nfuture version of MySQL against an old release of PostgreSQL. And they\njust compare on buzzword level.\nDo their foreign keys have referential actions and deferrability? Is log\nbased master slave replication all there can be?\n\nAnd surely do we have something that compares to *their* roadmap. That\nthey cannot find it is because it's named HISTORY.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Fri, 11 Oct 2002 10:57:15 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." }, { "msg_contents": "Dave Cramer <Dave@micro-automation.net> writes:\n\n> Just so you know this isn't implemented yet. My reference to\n> setFetchSize below was just a suggestion as to how to implement it\n\nI'll do it if you like.\n\n\nNic\n\n", "msg_date": "11 Oct 2002 15:57:49 +0100", "msg_from": "nferrier@tapsellferrier.co.uk", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "At 08:27 AM 10/11/2002, snpe wrote:\n>Barry,\n> Is it true ?\n>I create table with one column varchar(500) and enter 1 milion rows with\n>length 10-20 character.JDBC query 'select * from a' get error 'out of\n>memory', but psql not.\n>I insert 8 milion rows and psql work fine yet (slow, but work)\n\nThe way the code works in JDBC is, in my opinion, a little poor but \npossibly mandated by JDBC design specs.\n\nIt reads the entire result set from the database backend and caches it in a \nhorrible Vector (which should really be a List and which should at least \nmake an attempt to get the # of rows ahead of time to avoid all the \nresizing problems).\n\nThen, it doles it out from memory as you go through the ResultSet with the \nnext() method.\n\nI would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE \nWHOLE THING - through the result set as each row is returned from the \nbackend, thus ensuring that you never use much more memory than one line. \nEVEN IF you have to keep the connection locked.\n\nThe latter is what I expected it to do. The former is what it does. So, it \nnecessitates you creating EVERY SELECT query which you think has more than \na few rows (or which you think COULD have more than a few rows, \"few\" being \ndefined by our VM memory limits) into a cursor based query. 
Really klugy. I \nintend to write a class to do that for every SELECT query for me automatically.\n\nCheers,\n\nDoug\n\n\n>In C library is 'execute query' without fetch - in jdbc execute fetch all \n>rows\n>and this is problem - I think that executequery must prepare query and fetch\n>(ResultSet.next or ...) must fetch only fetchSize rows.\n>I am not sure, but I think that is problem with jdbc, not postgresql\n>Hackers ?\n>Does psql fetch all rows and if not how many ?\n>Can I change fetch size in psql ?\n>CURSOR , FETCH and MOVE isn't solution.\n>If I use jdbc in third-party IDE, I can't force this solution\n>\n>regards\n>\n>On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > Nick,\n> >\n> > This has been discussed before on this list many times. But the short\n> > answer is that that is how the postgres server handles queries. If you\n> > issue a query the server will return the entire result. (try the same\n> > query in psql and you will have the same problem). To work around this\n> > you can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE\n> > sql commands for postgres).\n> >\n> > thanks,\n> > --Barry\n> >\n> > Nick Fankhauser wrote:\n> > > I'm selecting a huge ResultSet from our database- about one million rows,\n> > > with one of the fields being varchar(500). I get an out of memory error\n> > > from java.\n> > >\n> > > If the whole ResultSet gets stashed in memory, this isn't really\n> > > surprising, but I'm wondering why this happens (if it does), rather than\n> > > a subset around the current record being cached and other rows being\n> > > retrieved as needed.\n> > >\n> > > If it turns out that there are good reasons for it to all be in memory,\n> > > then my question is whether there is a better approach that people\n> > > typically use in this situation. 
For now, I'm simply breaking up the\n> > > select into smaller chunks, but that approach won't be satisfactory in\n> > > the long run.\n> > >\n> > > Thanks\n> > >\n> > > -Nick\n> > >\n> > > -------------------------------------------------------------------------\n> > >- Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax\n> > > 1.765.962.9788 Ray Ontko & Co. Software Consulting Services\n> > > http://www.ontko.com/\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n", "msg_date": "Fri, 11 Oct 2002 11:44:23 -0400", "msg_from": "Doug Fields <dfields-postgres@pexicom.com>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "This really is an artifact of the way that postgres gives us the data. \n\nWhen you query the backend you get *all* of the results in the query,\nand there is no indication of how many results you are going to get. In\nsimple selects it would be possible to get some idea by using\ncount(field), but this wouldn't work nearly enough times to make it\nuseful. So that leaves us with using cursors, which still won't tell you\nhow many rows you are getting back, but at least you won't have the\nmemory problems.\n\nThis approach is far from trivial which is why it hasn't been\nimplemented as of yet, keep in mind that result sets support things like\nmove(n), first(), last(), the last of which will be the trickiest.
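[Editor's sketch] The cursor approach discussed in this thread hides PostgreSQL's DECLARE / FETCH / MOVE commands behind the JDBC ResultSet. A minimal illustration of the SQL such a driver might generate is below; the `CursorSqlBuilder` class and the cursor name are invented for illustration and are not part of the actual driver (note also that in PostgreSQL, DECLARE must run inside a transaction):

```java
// Hypothetical helper that builds the cursor SQL a cursor-backed
// ResultSet implementation might issue. Only string construction is
// shown here; sending the commands to the backend is omitted.
class CursorSqlBuilder {
    private final String cursorName;

    CursorSqlBuilder(String cursorName) {
        this.cursorName = cursorName;
    }

    // Issued once, inside a transaction, instead of running the query directly.
    String declare(String query) {
        return "DECLARE " + cursorName + " CURSOR FOR " + query;
    }

    // next() on a cache miss would pull the next block of fetchSize rows.
    String fetchForward(int fetchSize) {
        return "FETCH FORWARD " + fetchSize + " FROM " + cursorName;
    }

    // Relative positioning (e.g. for absolute(n)) can be built from MOVE.
    String moveForward(int rows) {
        return "MOVE FORWARD " + rows + " IN " + cursorName;
    }
}
```

last() is the hard case the message above points at: with only relative MOVE available, reaching the end cheaply depends on the "move 0" behaviour (or a future "move end") also being discussed in this thread.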
Not\nto mention updateable result sets.\n\nAs it turns out there is a mechanism to get to the end move 0 in\n'cursor', which currently is being considered a bug.\n\nDave\n\nOn Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> At 08:27 AM 10/11/2002, snpe wrote:\n> >Barry,\n> > Is it true ?\n> >I create table with one column varchar(500) and enter 1 milion rows with\n> >length 10-20 character.JDBC query 'select * from a' get error 'out of\n> >memory', but psql not.\n> >I insert 8 milion rows and psql work fine yet (slow, but work)\n> \n> The way the code works in JDBC is, in my opinion, a little poor but \n> possibly mandated by JDBC design specs.\n> \n> It reads the entire result set from the database backend and caches it in a \n> horrible Vector (which should really be a List and which should at least \n> make an attempt to get the # of rows ahead of time to avoid all the \n> resizing problems).\n> \n> Then, it doles it out from memory as you go through the ResultSet with the \n> next() method.\n> \n> I would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE \n> WHOLE THING - through the result set as each row is returned from the \n> backend, thus ensuring that you never use much more memory than one line. \n> EVEN IF you have to keep the connection locked.\n> \n> The latter is what I expected it to do. The former is what it does. So, it \n> necessitates you creating EVERY SELECT query which you think has more than \n> a few rows (or which you think COULD have more than a few rows, \"few\" being \n> defined by our VM memory limits) into a cursor based query. Really klugy. I \n> intend to write a class to do that for every SELECT query for me automatically.\n> \n> Cheers,\n> \n> Doug\n> \n> \n> >In C library is 'execute query' without fetch - in jdbc execute fetch all \n> >rows\n> >and this is problem - I think that executequery must prepare query and fetch\n> >(ResultSet.next or ...) 
must fetch only fetchSize rows.\n> >I am not sure, but I think that is problem with jdbc, not postgresql\n> >Hackers ?\n> >Does psql fetch all rows and if not how many ?\n> >Can I change fetch size in psql ?\n> >CURSOR , FETCH and MOVE isn't solution.\n> >If I use jdbc in third-party IDE, I can't force this solution\n> >\n> >regards\n> >\n> >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > Nick,\n> > >\n> > > This has been discussed before on this list many times. But the short\n> > > answer is that that is how the postgres server handles queries. If you\n> > > issue a query the server will return the entire result. (try the same\n> > > query in psql and you will have the same problem). To work around this\n> > > you can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE\n> > > sql commands for postgres).\n> > >\n> > > thanks,\n> > > --Barry\n> > >\n> > > Nick Fankhauser wrote:\n> > > > I'm selecting a huge ResultSet from our database- about one million rows,\n> > > > with one of the fields being varchar(500). I get an out of memory error\n> > > > from java.\n> > > >\n> > > > If the whole ResultSet gets stashed in memory, this isn't really\n> > > > surprising, but I'm wondering why this happens (if it does), rather than\n> > > > a subset around the current record being cached and other rows being\n> > > > retrieved as needed.\n> > > >\n> > > > If it turns out that there are good reasons for it to all be in memory,\n> > > > then my question is whether there is a better approach that people\n> > > > typically use in this situation. For now, I'm simply breaking up the\n> > > > select into smaller chunks, but that approach won't be satisfactory in\n> > > > the long run.\n> > > >\n> > > > Thanks\n> > > >\n> > > > -Nick\n> > > >\n> > > > -------------------------------------------------------------------------\n> > > >- Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax\n> > > > 1.765.962.9788 Ray Ontko & Co. 
Software Consulting Services\n> > > > http://www.ontko.com/\n> > > >\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n\n\n", "msg_date": "11 Oct 2002 11:58:04 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Currently there is a TODO list item to have move 0 not position to the\nend of the cursor.\n\nMoving to the end of the cursor is useful, can we keep the behaviour and\nchange it to move end, or just leave it the way it is?\n\nDave\n\n\n", "msg_date": "11 Oct 2002 12:04:31 -0400", "msg_from": "Dave Cramer <dave@fastcrypt.com>", "msg_from_op": false, "msg_subject": "move 0 behaviour" }, { "msg_contents": "Hello,\n Does it mean that psql uses cursors ?\n\nregards\nHaris Peco\nOn Friday 11 October 2002 05:58 pm, Dave Cramer wrote:\n> This really is an artifact of the way that postgres gives us the data.\n>\n> When you query the backend you get *all* of the results in the query,\n> and there is no indication of how many results you are going to get. In\n> simple selects it would be possible to get some idea by using\n> count(field), but this wouldn't work nearly enough times to make it\n> useful. 
So that leaves us with using cursors, which still won't tell you\n> how many rows you are getting back, but at least you won't have the\n> memory problems.\n>\n> This approach is far from trivial which is why it hasn't been\n> implemented as of yet, keep in mind that result sets support things like\n> move(n), first(), last(), the last of which will be the trickiest. Not\n> to mention updateable result sets.\n>\n> As it turns out there is a mechanism to get to the end move 0 in\n> 'cursor', which currently is being considered a bug.\n>\n> Dave\n>\n> On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > At 08:27 AM 10/11/2002, snpe wrote:\n> > >Barry,\n> > > Is it true ?\n> > >I create table with one column varchar(500) and enter 1 milion rows with\n> > >length 10-20 character.JDBC query 'select * from a' get error 'out of\n> > >memory', but psql not.\n> > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> >\n> > The way the code works in JDBC is, in my opinion, a little poor but\n> > possibly mandated by JDBC design specs.\n> >\n> > It reads the entire result set from the database backend and caches it in\n> > a horrible Vector (which should really be a List and which should at\n> > least make an attempt to get the # of rows ahead of time to avoid all the\n> > resizing problems).\n> >\n> > Then, it doles it out from memory as you go through the ResultSet with\n> > the next() method.\n> >\n> > I would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE\n> > WHOLE THING - through the result set as each row is returned from the\n> > backend, thus ensuring that you never use much more memory than one line.\n> > EVEN IF you have to keep the connection locked.\n> >\n> > The latter is what I expected it to do. The former is what it does. 
So,\n> > it necessitates you creating EVERY SELECT query which you think has more\n> > than a few rows (or which you think COULD have more than a few rows,\n> > \"few\" being defined by our VM memory limits) into a cursor based query.\n> > Really klugy. I intend to write a class to do that for every SELECT query\n> > for me automatically.\n> >\n> > Cheers,\n> >\n> > Doug\n> >\n> > >In C library is 'execute query' without fetch - in jdbc execute fetch\n> > > all rows\n> > >and this is problem - I think that executequery must prepare query and\n> > > fetch (ResultSet.next or ...) must fetch only fetchSize rows.\n> > >I am not sure, but I think that is problem with jdbc, not postgresql\n> > >Hackers ?\n> > >Does psql fetch all rows and if not how many ?\n> > >Can I change fetch size in psql ?\n> > >CURSOR , FETCH and MOVE isn't solution.\n> > >If I use jdbc in third-party IDE, I can't force this solution\n> > >\n> > >regards\n> > >\n> > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > Nick,\n> > > >\n> > > > This has been discussed before on this list many times. But the\n> > > > short answer is that that is how the postgres server handles queries.\n> > > > If you issue a query the server will return the entire result. (try\n> > > > the same query in psql and you will have the same problem). To work\n> > > > around this you can use explicit cursors (see the DECLARE CURSOR,\n> > > > FETCH, and MOVE sql commands for postgres).\n> > > >\n> > > > thanks,\n> > > > --Barry\n> > > >\n> > > > Nick Fankhauser wrote:\n> > > > > I'm selecting a huge ResultSet from our database- about one million\n> > > > > rows, with one of the fields being varchar(500). 
I get an out of\n> > > > > memory error from java.\n> > > > >\n> > > > > If the whole ResultSet gets stashed in memory, this isn't really\n> > > > > surprising, but I'm wondering why this happens (if it does), rather\n> > > > > than a subset around the current record being cached and other rows\n> > > > > being retrieved as needed.\n> > > > >\n> > > > > If it turns out that there are good reasons for it to all be in\n> > > > > memory, then my question is whether there is a better approach that\n> > > > > people typically use in this situation. For now, I'm simply\n> > > > > breaking up the select into smaller chunks, but that approach won't\n> > > > > be satisfactory in the long run.\n> > > > >\n> > > > > Thanks\n> > > > >\n> > > > > -Nick\n> > > > >\n> > > > > -------------------------------------------------------------------\n> > > > >------ - Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax\n> > > > > 1.765.962.9788 Ray Ontko & Co. Software Consulting Services\n> > > > > http://www.ontko.com/\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > > > > broadcast)--------------------------- TIP 5: Have you checked our\n> > > > > extensive FAQ?\n> > > > >\n> > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > > > ---------------------------(end of\n> > > > broadcast)--------------------------- TIP 6: Have you searched our\n> > > > list archives?\n> > > >\n> > > > http://archives.postgresql.org\n> > >\n> > >---------------------------(end of broadcast)---------------------------\n> > >TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to\n> > > majordomo@postgresql.org)\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and 
unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Fri, 11 Oct 2002 18:48:41 +0200", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "\n\nDoug Fields wrote:\n\n> \n> It reads the entire result set from the database backend and caches it \n> in a horrible Vector (which should really be a List and which should at \n> least make an attempt to get the # of rows ahead of time to avoid all \n> the resizing problems).\n> \n\nThe problem here is that we would then need two completely different \nimplementations for jdbc1 and jdbc2/3 since List is not part of jdk1.1. \n We could build our own List implementation that works on jdk1.1, but I \nam not sure the gain in performance is worth it. If you could do some \ntesting and come back with some numbers of the differences in \nperformance between ResultSets implemented with Vectors and Lists that \nwould probably give us enough information to guage how to proceed on \nthis suggested improvement.\n\n> Then, it doles it out from memory as you go through the ResultSet with \n> the next() method.\n> \n> I would have hoped (but was wrong) that it streamed - WITHOUT LOADING \n> THE WHOLE THING - through the result set as each row is returned from \n> the backend, thus ensuring that you never use much more memory than one \n> line. EVEN IF you have to keep the connection locked.\n> \n\nThis had actually been tried in the past (just getting the records from \nthe server connection as requested), but this behavior violates the spec \nand broke many peoples applications. The problem is that if you don't \nuse cursors, you end up tying up the connection until you finish \nfetching all rows.
So code like the following no longer works:\n\nget result set\nwhile (rs.next()) {\n get some values from the result\n use them to update/insert some other table using a preparedstatement\n}\n\nSince the connection is locked until all the results are fetched, you \ncan't use the connection to perform the update/insert you want to do for \neach itteration of the loop.\n\n> The latter is what I expected it to do. The former is what it does. So, \n> it necessitates you creating EVERY SELECT query which you think has more \n> than a few rows (or which you think COULD have more than a few rows, \n> \"few\" being defined by our VM memory limits) into a cursor based query. \n> Really klugy. I intend to write a class to do that for every SELECT \n> query for me automatically.\n> \n> Cheers,\n> \n> Doug\n> \n\n--Barry\n\n\n", "msg_date": "Fri, 11 Oct 2002 09:55:57 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "\tIt wouldn't be bad to start with a naive implementation of \nlast()... If the only problem we have is that last() doesn't perform \nwell, we're probably making good progress. :)\n\tOn the other hand, I would think the updateable result sets would\nbe the most challenging; does the server provide any analogous features\nwith its cursors?\n\nAaron\n\nOn 11 Oct 2002, Dave Cramer wrote:\n> This really is an artifact of the way that postgres gives us the data. \n> \n> When you query the backend you get *all* of the results in the query,\n> and there is no indication of how many results you are going to get. In\n> simple selects it would be possible to get some idea by using\n> count(field), but this wouldn't work nearly enough times to make it\n> useful. 
So that leaves us with using cursors, which still won't tell you\n> how many rows you are getting back, but at least you won't have the\n> memory problems.\n> \n> This approach is far from trivial which is why it hasn't been\n> implemented as of yet, keep in mind that result sets support things like\n> move(n), first(), last(), the last of which will be the trickiest. Not\n> to mention updateable result sets.\n> \n> As it turns out there is a mechanism to get to the end move 0 in\n> 'cursor', which currently is being considered a bug.\n> \n> Dave\n> \n> On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > At 08:27 AM 10/11/2002, snpe wrote:\n> > >Barry,\n> > > Is it true ?\n> > >I create table with one column varchar(500) and enter 1 milion rows with\n> > >length 10-20 character.JDBC query 'select * from a' get error 'out of\n> > >memory', but psql not.\n> > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > \n> > The way the code works in JDBC is, in my opinion, a little poor but \n> > possibly mandated by JDBC design specs.\n> > \n> > It reads the entire result set from the database backend and caches it in a \n> > horrible Vector (which should really be a List and which should at least \n> > make an attempt to get the # of rows ahead of time to avoid all the \n> > resizing problems).\n> > \n> > Then, it doles it out from memory as you go through the ResultSet with the \n> > next() method.\n> > \n> > I would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE \n> > WHOLE THING - through the result set as each row is returned from the \n> > backend, thus ensuring that you never use much more memory than one line. \n> > EVEN IF you have to keep the connection locked.\n> > \n> > The latter is what I expected it to do. The former is what it does. 
So, it \n> > necessitates you creating EVERY SELECT query which you think has more than \n> > a few rows (or which you think COULD have more than a few rows, \"few\" being \n> > defined by our VM memory limits) into a cursor based query. Really klugy. I \n> > intend to write a class to do that for every SELECT query for me automatically.\n> > \n> > Cheers,\n> > \n> > Doug\n> > \n> > \n> > >In C library is 'execute query' without fetch - in jdbc execute fetch all \n> > >rows\n> > >and this is problem - I think that executequery must prepare query and fetch\n> > >(ResultSet.next or ...) must fetch only fetchSize rows.\n> > >I am not sure, but I think that is problem with jdbc, not postgresql\n> > >Hackers ?\n> > >Does psql fetch all rows and if not how many ?\n> > >Can I change fetch size in psql ?\n> > >CURSOR , FETCH and MOVE isn't solution.\n> > >If I use jdbc in third-party IDE, I can't force this solution\n> > >\n> > >regards\n> > >\n> > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > Nick,\n> > > >\n> > > > This has been discussed before on this list many times. But the short\n> > > > answer is that that is how the postgres server handles queries. If you\n> > > > issue a query the server will return the entire result. (try the same\n> > > > query in psql and you will have the same problem). To work around this\n> > > > you can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE\n> > > > sql commands for postgres).\n> > > >\n> > > > thanks,\n> > > > --Barry\n> > > >\n> > > > Nick Fankhauser wrote:\n> > > > > I'm selecting a huge ResultSet from our database- about one million rows,\n> > > > > with one of the fields being varchar(500). 
I get an out of memory error\n> > > > > from java.\n> > > > >\n> > > > > If the whole ResultSet gets stashed in memory, this isn't really\n> > > > > surprising, but I'm wondering why this happens (if it does), rather than\n> > > > > a subset around the current record being cached and other rows being\n> > > > > retrieved as needed.\n> > > > >\n> > > > > If it turns out that there are good reasons for it to all be in memory,\n> > > > > then my question is whether there is a better approach that people\n> > > > > typically use in this situation. For now, I'm simply breaking up the\n> > > > > select into smaller chunks, but that approach won't be satisfactory in\n> > > > > the long run.\n> > > > >\n> > > > > Thanks\n> > > > >\n> > > > > -Nick\n> > > > >\n> > > > > -------------------------------------------------------------------------\n> > > > >- Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax\n> > > > > 1.765.962.9788 Ray Ontko & Co. Software Consulting Services\n> > > > > http://www.ontko.com/\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of broadcast)---------------------------\n> > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > >\n> > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > >\n> > > > http://archives.postgresql.org\n> > >\n> > >\n> > >---------------------------(end of broadcast)---------------------------\n> > >TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> > \n> > \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: 
subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n", "msg_date": "Fri, 11 Oct 2002 13:05:37 -0400 (EDT)", "msg_from": "Aaron Mulder <ammulder@alumni.princeton.edu>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "\nHi,\n\nI'm jumping in late into this discussion but ...\n\nIn my mind a lot of these features break the model. From an application\nprespective, if I want to do last, I do a count(*) and then I do a fetch\nwith limit; Not quite the same, but all these methods of fetching the\nwhole data locally and manipulating it to a large exten defeat the\npurpose. Let the backend do the work, instead of trying to replicate the\nfunctionality in JDBC.\n\nThat said I do understand that some of these are required by the JDBC 2.0 \nspec.\n\nDror\n\nOn Fri, Oct 11, 2002 at 01:05:37PM -0400, Aaron Mulder wrote:\n> \tIt wouldn't be bad to start with a naive implementation of \n> last()... If the only problem we have is that last() doesn't perform \n> well, we're probably making good progress. :)\n> \tOn the other hand, I would think the updateable result sets would\n> be the most challenging; does the server provide any analogous features\n> with its cursors?\n> \n> Aaron\n> \n> On 11 Oct 2002, Dave Cramer wrote:\n> > This really is an artifact of the way that postgres gives us the data. \n> > \n> > When you query the backend you get *all* of the results in the query,\n> > and there is no indication of how many results you are going to get. In\n> > simple selects it would be possible to get some idea by using\n> > count(field), but this wouldn't work nearly enough times to make it\n> > useful. 
So that leaves us with using cursors, which still won't tell you\n> > how many rows you are getting back, but at least you won't have the\n> > memory problems.\n> > \n> > This approach is far from trivial which is why it hasn't been\n> > implemented as of yet, keep in mind that result sets support things like\n> > move(n), first(), last(), the last of which will be the trickiest. Not\n> > to mention updateable result sets.\n> > \n> > As it turns out there is a mechanism to get to the end move 0 in\n> > 'cursor', which currently is being considered a bug.\n> > \n> > Dave\n> > \n> > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > >Barry,\n> > > > Is it true ?\n> > > >I create table with one column varchar(500) and enter 1 milion rows with\n> > > >length 10-20 character.JDBC query 'select * from a' get error 'out of\n> > > >memory', but psql not.\n> > > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > > \n> > > The way the code works in JDBC is, in my opinion, a little poor but \n> > > possibly mandated by JDBC design specs.\n> > > \n> > > It reads the entire result set from the database backend and caches it in a \n> > > horrible Vector (which should really be a List and which should at least \n> > > make an attempt to get the # of rows ahead of time to avoid all the \n> > > resizing problems).\n> > > \n> > > Then, it doles it out from memory as you go through the ResultSet with the \n> > > next() method.\n> > > \n> > > I would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE \n> > > WHOLE THING - through the result set as each row is returned from the \n> > > backend, thus ensuring that you never use much more memory than one line. \n> > > EVEN IF you have to keep the connection locked.\n> > > \n> > > The latter is what I expected it to do. The former is what it does. 
So, it \n> > > necessitates you creating EVERY SELECT query which you think has more than \n> > > a few rows (or which you think COULD have more than a few rows, \"few\" being \n> > > defined by our VM memory limits) into a cursor based query. Really klugy. I \n> > > intend to write a class to do that for every SELECT query for me automatically.\n> > > \n> > > Cheers,\n> > > \n> > > Doug\n> > > \n> > > \n> > > >In C library is 'execute query' without fetch - in jdbc execute fetch all \n> > > >rows\n> > > >and this is problem - I think that executequery must prepare query and fetch\n> > > >(ResultSet.next or ...) must fetch only fetchSize rows.\n> > > >I am not sure, but I think that is problem with jdbc, not postgresql\n> > > >Hackers ?\n> > > >Does psql fetch all rows and if not how many ?\n> > > >Can I change fetch size in psql ?\n> > > >CURSOR , FETCH and MOVE isn't solution.\n> > > >If I use jdbc in third-party IDE, I can't force this solution\n> > > >\n> > > >regards\n> > > >\n> > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > Nick,\n> > > > >\n> > > > > This has been discussed before on this list many times. But the short\n> > > > > answer is that that is how the postgres server handles queries. If you\n> > > > > issue a query the server will return the entire result. (try the same\n> > > > > query in psql and you will have the same problem). To work around this\n> > > > > you can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE\n> > > > > sql commands for postgres).\n> > > > >\n> > > > > thanks,\n> > > > > --Barry\n> > > > >\n> > > > > Nick Fankhauser wrote:\n> > > > > > I'm selecting a huge ResultSet from our database- about one million rows,\n> > > > > > with one of the fields being varchar(500). 
I get an out of memory error\n> > > > > > from java.\n> > > > > >\n> > > > > > If the whole ResultSet gets stashed in memory, this isn't really\n> > > > > > surprising, but I'm wondering why this happens (if it does), rather than\n> > > > > > a subset around the current record being cached and other rows being\n> > > > > > retrieved as needed.\n> > > > > >\n> > > > > > If it turns out that there are good reasons for it to all be in memory,\n> > > > > > then my question is whether there is a better approach that people\n> > > > > > typically use in this situation. For now, I'm simply breaking up the\n> > > > > > select into smaller chunks, but that approach won't be satisfactory in\n> > > > > > the long run.\n> > > > > >\n> > > > > > Thanks\n> > > > > >\n> > > > > > -Nick\n> > > > > >\n> > > > > > -------------------------------------------------------------------------\n> > > > > >- Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax\n> > > > > > 1.765.962.9788 Ray Ontko & Co. Software Consulting Services\n> > > > > > http://www.ontko.com/\n> > > > > >\n> > > > > >\n> > > > > > ---------------------------(end of broadcast)---------------------------\n> > > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > > >\n> > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > >\n> > > > > ---------------------------(end of broadcast)---------------------------\n> > > > > TIP 6: Have you searched our list archives?\n> > > > >\n> > > > > http://archives.postgresql.org\n> > > >\n> > > >\n> > > >---------------------------(end of broadcast)---------------------------\n> > > >TIP 2: you can get off all lists at once with the unregister command\n> > > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > > \n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > > \n> > > http://archives.postgresql.org\n> > > \n> > > \n> > \n> > \n> 
> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \nDror Matalon\nZapatec Inc\n1700 MLK Way\nBerkeley, CA 94709\nhttp://www.zapatec.com\n", "msg_date": "Fri, 11 Oct 2002 10:12:31 -0700", "msg_from": "Dror Matalon <dror@zapatec.com>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Agreed, but there are selects where count(*) won't work. Even so, what\nwe are talking about here is hiding the implementation of cursors behind\nthe result set. What I would envision is some sort of caching where\nwhen the user sets the fetchsize to 10 for instance we do the select,\nand when they ask for next() we check to see if we have these rows in\nthe cache, and go get them if necessary 10 at a time, possibly keeping\none set of ten behind where we are and one set of 10 ahead of where we\nare. So recalling that resultSets have absolute positioning, as well as\nfirst(), and last() positioning we need the ability to move with the\nminimum number of trips to the backend.\n\nAs it turns out the move command in postgres does support moving to the\nend (move 0 ); at the moment this is considered a bug, and is on the\ntodo list to be removed. I expect we can get some sort of implementation\nwhich allows us to move to the end ( move end )\n\nDave\n\nOn Fri, 2002-10-11 at 13:12, Dror Matalon wrote:\n> \n> Hi,\n> \n> I'm jumping in late into this discussion but ...\n> \n> In my mind a lot of these features break the model.
From an application\n> perspective, if I want to do last, I do a count(*) and then I do a fetch\n> with limit; Not quite the same, but all these methods of fetching the\n> whole data locally and manipulating it to a large extent defeat the\n> purpose. Let the backend do the work, instead of trying to replicate the\n> functionality in JDBC.\n> \n> That said I do understand that some of these are required by the JDBC 2.0 \n> spec.\n> \n> Dror\n> \n> On Fri, Oct 11, 2002 at 01:05:37PM -0400, Aaron Mulder wrote:\n> > \tIt wouldn't be bad to start with a naive implementation of \n> > last()... If the only problem we have is that last() doesn't perform \n> > well, we're probably making good progress. :)\n> > \tOn the other hand, I would think the updateable result sets would\n> > be the most challenging; does the server provide any analogous features\n> > with its cursors?\n> > \n> > Aaron\n> > \n> > On 11 Oct 2002, Dave Cramer wrote:\n> > > This really is an artifact of the way that postgres gives us the data. \n> > > \n> > > When you query the backend you get *all* of the results in the query,\n> > > and there is no indication of how many results you are going to get. In\n> > > simple selects it would be possible to get some idea by using\n> > > count(field), but this wouldn't work nearly enough times to make it\n> > > useful. So that leaves us with using cursors, which still won't tell you\n> > > how many rows you are getting back, but at least you won't have the\n> > > memory problems.\n> > > \n> > > This approach is far from trivial which is why it hasn't been\n> > > implemented as of yet, keep in mind that result sets support things like\n> > > move(n), first(), last(), the last of which will be the trickiest.
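Dror's suggestion above — pair a count(*) with a LIMIT fetch and let the backend do the work — can be sketched as plain SQL-string helpers. This is an editor's illustration, not code from the thread; the table name and helper names are invented.

```java
// Editor's sketch of the count(*) + LIMIT/OFFSET approach described above.
// Pure string building so the SQL shape is visible; "items" is hypothetical.
class OffsetPager {
    // SQL returning the total row count of the base query.
    static String countSql(String baseQuery) {
        return "SELECT count(*) FROM (" + baseQuery + ") AS sub";
    }

    // SQL returning only row n (1-based); requires a deterministic ORDER BY.
    static String absoluteSql(String baseQuery, int n) {
        return baseQuery + " LIMIT 1 OFFSET " + (n - 1);
    }

    public static void main(String[] args) {
        String base = "SELECT id, name FROM items ORDER BY id";
        System.out.println(countSql(base));
        // If count(*) reported 10 rows, "last()" is just row 10:
        System.out.println(absoluteSql(base, 10));
    }
}
```

As Dror concedes, this is "not quite the same" as a scrollable ResultSet: unless the count and the fetch run inside one transaction, they may see different snapshots of the table.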
Not\n> > > to mention updateable result sets.\n> > > \n> > > As it turns out there is a mechanism to get to the end move 0 in\n> > > 'cursor', which currently is being considered a bug.\n> > > \n> > > Dave\n> > > \n> > > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > > >Barry,\n> > > > > Is it true ?\n> > > > >I create table with one column varchar(500) and enter 1 milion rows with\n> > > > >length 10-20 character.JDBC query 'select * from a' get error 'out of\n> > > > >memory', but psql not.\n> > > > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > > > \n> > > > The way the code works in JDBC is, in my opinion, a little poor but \n> > > > possibly mandated by JDBC design specs.\n> > > > \n> > > > It reads the entire result set from the database backend and caches it in a \n> > > > horrible Vector (which should really be a List and which should at least \n> > > > make an attempt to get the # of rows ahead of time to avoid all the \n> > > > resizing problems).\n> > > > \n> > > > Then, it doles it out from memory as you go through the ResultSet with the \n> > > > next() method.\n> > > > \n> > > > I would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE \n> > > > WHOLE THING - through the result set as each row is returned from the \n> > > > backend, thus ensuring that you never use much more memory than one line. \n> > > > EVEN IF you have to keep the connection locked.\n> > > > \n> > > > The latter is what I expected it to do. The former is what it does. So, it \n> > > > necessitates you creating EVERY SELECT query which you think has more than \n> > > > a few rows (or which you think COULD have more than a few rows, \"few\" being \n> > > > defined by our VM memory limits) into a cursor based query. Really klugy. 
I \n> > > > intend to write a class to do that for every SELECT query for me automatically.\n> > > > \n> > > > Cheers,\n> > > > \n> > > > Doug\n> > > > \n> > > > \n> > > > >In C library is 'execute query' without fetch - in jdbc execute fetch all \n> > > > >rows\n> > > > >and this is problem - I think that executequery must prepare query and fetch\n> > > > >(ResultSet.next or ...) must fetch only fetchSize rows.\n> > > > >I am not sure, but I think that is problem with jdbc, not postgresql\n> > > > >Hackers ?\n> > > > >Does psql fetch all rows and if not how many ?\n> > > > >Can I change fetch size in psql ?\n> > > > >CURSOR , FETCH and MOVE isn't solution.\n> > > > >If I use jdbc in third-party IDE, I can't force this solution\n> > > > >\n> > > > >regards\n> > > > >\n> > > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > > Nick,\n> > > > > >\n> > > > > > This has been discussed before on this list many times. But the short\n> > > > > > answer is that that is how the postgres server handles queries. If you\n> > > > > > issue a query the server will return the entire result. (try the same\n> > > > > > query in psql and you will have the same problem). To work around this\n> > > > > > you can use explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE\n> > > > > > sql commands for postgres).\n> > > > > >\n> > > > > > thanks,\n> > > > > > --Barry\n> > > > > >\n> > > > > > Nick Fankhauser wrote:\n> > > > > > > I'm selecting a huge ResultSet from our database- about one million rows,\n> > > > > > > with one of the fields being varchar(500). 
I get an out of memory error\n> > > > > > > from java.\n> > > > > > >\n> > > > > > > If the whole ResultSet gets stashed in memory, this isn't really\n> > > > > > > surprising, but I'm wondering why this happens (if it does), rather than\n> > > > > > > a subset around the current record being cached and other rows being\n> > > > > > > retrieved as needed.\n> > > > > > >\n> > > > > > > If it turns out that there are good reasons for it to all be in memory,\n> > > > > > > then my question is whether there is a better approach that people\n> > > > > > > typically use in this situation. For now, I'm simply breaking up the\n> > > > > > > select into smaller chunks, but that approach won't be satisfactory in\n> > > > > > > the long run.\n> > > > > > >\n> > > > > > > Thanks\n> > > > > > >\n> > > > > > > -Nick\n> > > > > > >\n> > > > > > > -------------------------------------------------------------------------\n> > > > > > >- Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax\n> > > > > > > 1.765.962.9788 Ray Ontko & Co. 
Software Consulting Services\n> > > > > > > http://www.ontko.com/\n> > > > > > >\n> > > > > > >\n> > > > > > > ---------------------------(end of broadcast)---------------------------\n> > > > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > > > >\n> > > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > > >\n> > > > > > ---------------------------(end of broadcast)---------------------------\n> > > > > > TIP 6: Have you searched our list archives?\n> > > > > >\n> > > > > > http://archives.postgresql.org\n> > > > >\n> > > > >\n> > > > >---------------------------(end of broadcast)---------------------------\n> > > > >TIP 2: you can get off all lists at once with the unregister command\n> > > > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > > > \n> > > > \n> > > > \n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 6: Have you searched our list archives?\n> > > > \n> > > > http://archives.postgresql.org\n> > > > \n> > > > \n> > > \n> > > \n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> -- \n> Dror Matalon\n> Zapatec Inc\n> 1700 MLK Way\n> Berkeley, CA 94709\n> http://www.zapatec.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n\n", "msg_date": "11 Oct 2002 13:59:41 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { 
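The explicit-cursor workaround recommended in the messages above (DECLARE CURSOR, FETCH, MOVE) reduces to a fixed statement sequence. The sketch below is an editor's illustration, not code from the thread: it only builds the SQL strings so the shape is visible; the cursor and table names are invented, and a real program would execute the statements over one connection inside a single transaction, looping until a FETCH returns no rows.

```java
import java.util.ArrayList;
import java.util.List;

// Editor's sketch: statement sequence for reading a large query through an
// explicit cursor, fetchSize rows per round trip. Names are hypothetical.
class CursorPlan {
    static List<String> plan(String cursor, String query, int fetchSize, int batches) {
        List<String> sql = new ArrayList<>();
        sql.add("BEGIN");                                   // cursors only live inside a transaction
        sql.add("DECLARE " + cursor + " CURSOR FOR " + query);
        for (int i = 0; i < batches; i++)                   // in practice: loop until FETCH is empty
            sql.add("FETCH FORWARD " + fetchSize + " FROM " + cursor);
        sql.add("CLOSE " + cursor);
        sql.add("COMMIT");
        return sql;
    }

    public static void main(String[] args) {
        for (String s : plan("big", "SELECT * FROM events", 1000, 3))
            System.out.println(s);
    }
}
```

MOVE takes the same direction/count syntax as FETCH but skips rows without returning them, which is what the "move 0" / "move end" positioning discussion above concerns.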
"msg_contents": "No,\n\nIt doesn't have to store them, only display them\n\nDave\nOn Fri, 2002-10-11 at 12:48, snpe wrote:\n> Hello,\n> Does it mean that psql uses cursors ?\n> \n> regards\n> Haris Peco\n> On Friday 11 October 2002 05:58 pm, Dave Cramer wrote:\n> > This really is an artifact of the way that postgres gives us the data.\n> >\n> > When you query the backend you get *all* of the results in the query,\n> > and there is no indication of how many results you are going to get. In\n> > simple selects it would be possible to get some idea by using\n> > count(field), but this wouldn't work nearly enough times to make it\n> > useful. So that leaves us with using cursors, which still won't tell you\n> > how many rows you are getting back, but at least you won't have the\n> > memory problems.\n> >\n> > This approach is far from trivial which is why it hasn't been\n> > implemented as of yet, keep in mind that result sets support things like\n> > move(n), first(), last(), the last of which will be the trickiest. 
Not\n> > to mention updateable result sets.\n> >\n> > As it turns out there is a mechanism to get to the end move 0 in\n> > 'cursor', which currently is being considered a bug.\n> >\n> > Dave\n> >\n> > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > >Barry,\n> > > > Is it true ?\n> > > >I create table with one column varchar(500) and enter 1 milion rows with\n> > > >length 10-20 character.JDBC query 'select * from a' get error 'out of\n> > > >memory', but psql not.\n> > > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > >\n> > > The way the code works in JDBC is, in my opinion, a little poor but\n> > > possibly mandated by JDBC design specs.\n> > >\n> > > It reads the entire result set from the database backend and caches it in\n> > > a horrible Vector (which should really be a List and which should at\n> > > least make an attempt to get the # of rows ahead of time to avoid all the\n> > > resizing problems).\n> > >\n> > > Then, it doles it out from memory as you go through the ResultSet with\n> > > the next() method.\n> > >\n> > > I would have hoped (but was wrong) that it streamed - WITHOUT LOADING THE\n> > > WHOLE THING - through the result set as each row is returned from the\n> > > backend, thus ensuring that you never use much more memory than one line.\n> > > EVEN IF you have to keep the connection locked.\n> > >\n> > > The latter is what I expected it to do. The former is what it does. So,\n> > > it necessitates you creating EVERY SELECT query which you think has more\n> > > than a few rows (or which you think COULD have more than a few rows,\n> > > \"few\" being defined by our VM memory limits) into a cursor based query.\n> > > Really klugy. 
I intend to write a class to do that for every SELECT query\n> > > for me automatically.\n> > >\n> > > Cheers,\n> > >\n> > > Doug\n> > >\n> > > >In C library is 'execute query' without fetch - in jdbc execute fetch\n> > > > all rows\n> > > >and this is problem - I think that executequery must prepare query and\n> > > > fetch (ResultSet.next or ...) must fetch only fetchSize rows.\n> > > >I am not sure, but I think that is problem with jdbc, not postgresql\n> > > >Hackers ?\n> > > >Does psql fetch all rows and if not how many ?\n> > > >Can I change fetch size in psql ?\n> > > >CURSOR , FETCH and MOVE isn't solution.\n> > > >If I use jdbc in third-party IDE, I can't force this solution\n> > > >\n> > > >regards\n> > > >\n> > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > Nick,\n> > > > >\n> > > > > This has been discussed before on this list many times. But the\n> > > > > short answer is that that is how the postgres server handles queries.\n> > > > > If you issue a query the server will return the entire result. (try\n> > > > > the same query in psql and you will have the same problem). To work\n> > > > > around this you can use explicit cursors (see the DECLARE CURSOR,\n> > > > > FETCH, and MOVE sql commands for postgres).\n> > > > >\n> > > > > thanks,\n> > > > > --Barry\n> > > > >\n> > > > > Nick Fankhauser wrote:\n> > > > > > I'm selecting a huge ResultSet from our database- about one million\n> > > > > > rows, with one of the fields being varchar(500). 
I get an out of\n> > > > > > memory error from java.\n> > > > > >\n> > > > > > If the whole ResultSet gets stashed in memory, this isn't really\n> > > > > > surprising, but I'm wondering why this happens (if it does), rather\n> > > > > > than a subset around the current record being cached and other rows\n> > > > > > being retrieved as needed.\n> > > > > >\n> > > > > > If it turns out that there are good reasons for it to all be in\n> > > > > > memory, then my question is whether there is a better approach that\n> > > > > > people typically use in this situation. For now, I'm simply\n> > > > > > breaking up the select into smaller chunks, but that approach won't\n> > > > > > be satisfactory in the long run.\n> > > > > >\n> > > > > > Thanks\n> > > > > >\n> > > > > > -Nick\n> > > > > >\n> > > > > > -------------------------------------------------------------------\n> > > > > >------ - Nick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax\n> > > > > > 1.765.962.9788 Ray Ontko & Co. Software Consulting Services\n> > > > > > http://www.ontko.com/\n> > > > > >\n> > > > > >\n> > > > > > ---------------------------(end of\n> > > > > > broadcast)--------------------------- TIP 5: Have you checked our\n> > > > > > extensive FAQ?\n> > > > > >\n> > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > >\n> > > > > ---------------------------(end of\n> > > > > broadcast)--------------------------- TIP 6: Have you searched our\n> > > > > list archives?\n> > > > >\n> > > > > http://archives.postgresql.org\n> > > >\n> > > >---------------------------(end of broadcast)---------------------------\n> > > >TIP 2: you can get off all lists at once with the unregister command\n> > > > (send \"unregister YourEmailAddressHere\" to\n> > > > majordomo@postgresql.org)\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> >\n> > 
---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n\n", "msg_date": "11 Oct 2002 14:13:05 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "\n>The problem here is that we would then need two completely different \n>implementations for jdbc1 and jdbc2/3 since List is not part of \n>jdk1.1. We could build our own List implementation that works on jdk1.1, \n>but I am not sure the gain in performance is worth it. If you could do \n>some testing and come back with some numbers of the differences in \n>performance between ResultSets implemented with Vectors and Lists that \n>would probably give us enough information to guage how to proceed on this \n>suggested improvement.\n\nIn the past, I have done this sort of thing.\n\nThe \"synchronized\" overhead of a \"synchronized method\" is about 7 times the \noverhead of a regular method call. I did many empirical tests of this on \nJDK 1.3 and 1.4 on Linux (2.2 and 2.4) due to the high performance demands \nof the software my firm uses. Now, that all depends on how many times you \ninvoke those methods and how fast they are otherwise. I'm unwilling to do \nthat for PostgreSQL, but I have to imagine that scrapping JDK 1.1 support \nwould not be a bad thing and may even be a good thing. Anyone still using \nJDK 1.1 is also probably using it in conjunction with other products from \nthat era, so having a modern product compatible with a very out of date \nproduct makes no sense in my estimation.\n\nI don't make policy, though - that seems to be your job generally, Barry. 
:)\n\n>This had actually been tried in the past (just getting the records from \n>the server connection as requested), but this behavior violates the spec \n>and broke many people's applications. The problem is that if you don't use \n>cursors, you end up tying up the connection until you finish fetching all \n>rows. So code like the following no longer works:\n>\n>get result set\n>while (rs.next()) {\n> get some values from the result\n> use them to update/insert some other table using a preparedstatement\n>}\n>\n>Since the connection is locked until all the results are fetched, you \n>can't use the connection to perform the update/insert you want to do for \n>each iteration of the loop.\n\nAgreed on this point. However, nonetheless, and regardless of the fact that \nthis may break the spec and should not be the default behavior, this should \nbe an option, because the current way of the driver working is a horror for \nanyone who has to deal with large result sets (such as I do on a regular \nbasis).\n\nI don't mind keeping a connection locked for ages. I do mind running out of \nVM space for a large result set which is streamed FROM THE DATABASE SERVER. \nResult sets should have the ability to be streamed end to end, IMO - even \nif it's a non-standard extension or an option to the connection when \ncreated or the statement when created.\n\nAgain, I don't make policy, and I'm insufficiently motivated to do it \nmyself. Don't think it invalidates my opinion, but I won't kvetch about it \neither.
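Doug's figure above for synchronized-method overhead can be probed with a throwaway micro-benchmark comparing the driver's Vector against an unsynchronized ArrayList. An editor's sketch: the ratio it prints varies widely by JVM and says nothing about JDK 1.1, so treat the timings as indicative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

// Editor's sketch: fill a synchronized Vector and an unsynchronized ArrayList
// with the same data and compare wall-clock time. Timings are JVM-dependent;
// the point is only that the two are drop-in compatible behind List.
class VectorVsList {
    static long fillNanos(List<Integer> list, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) list.add(i);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        System.out.println("Vector:    " + fillNanos(new Vector<Integer>(), n) / 1_000_000 + " ms");
        System.out.println("ArrayList: " + fillNanos(new ArrayList<Integer>(), n) / 1_000_000 + " ms");
    }
}
```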
I just designed a class which does the same thing by taking a query \nand turning it into a cursor-based query.\n\nCheers,\n\nDoug\n\n", "msg_date": "Fri, 11 Oct 2002 14:13:07 -0400", "msg_from": "Doug Fields <dfields-postgres@pexicom.com>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "AFAIK, it doesn't work unless the driver is running natively on linux.\nIt used to use a native library, but I did see something which looks\nlike a net protocol ?\n\ndave\nOn Fri, 2002-10-11 at 16:33, snpe wrote:\n> There is jxdbcon Postgresql jdbc driver with setFetchSize method.\n> Last version don't wokr with pgsql 7.3 and I don't test more.\n> I will try next day, when I download pgsql 7.2\n> \n> regards\n> Haris Peco\n> On Friday 11 October 2002 07:59 pm, Dave Cramer wrote:\n> > Agreed, but there are selects where count(*) won't work. Even so, what\n> > we are talking about here is hiding the implementation of cursors behind\n> > the result set. What I would envision is some sort of cacheing where\n> > when the user set's the fetchsize to 10 for instance we do the select,\n> > and when they ask for next() we check to see if we have these rows in\n> > the cache, and go get them if necessary 10 at a time, possibly keeping\n> > one set of ten behind where we are and one set of 10 ahead of where we\n> > are. So recalling that resultSets have absolute positioning, as well as\n> > first(), and last() positioning we need the ability to move with the\n> > minimum number of trips to the backend.\n> >\n> > As it turns out the move command in postgres does support moving to the\n> > end (move 0 ); at the moment this is considered a bug, and is on the\n> > todo list to be removed. 
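The windowed cache Dave describes in the quoted message above can be sketched without any JDBC machinery. This is an editor's illustration: the fetchBatch function stands in for a "FETCH n FROM cursor" round trip against the backend, and every name here is invented.

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.function.IntFunction;

// Editor's sketch of the windowed cache described above: next() serves rows
// from a small in-memory batch and refills it on demand, the way a driver
// could issue "FETCH fetchSize FROM cursor" behind a ResultSet.
class WindowedRows {
    private final IntFunction<List<String>> fetchBatch; // size -> rows; empty list when done
    private final int fetchSize;
    private Iterator<String> window = Collections.<String>emptyList().iterator();
    private boolean exhausted = false;
    private String current;

    WindowedRows(IntFunction<List<String>> fetchBatch, int fetchSize) {
        this.fetchBatch = fetchBatch;
        this.fetchSize = fetchSize;
    }

    boolean next() {                                    // mimics ResultSet.next()
        if (!window.hasNext() && !exhausted) {
            List<String> batch = fetchBatch.apply(fetchSize); // one round trip
            exhausted = batch.isEmpty();
            window = batch.iterator();
        }
        if (!window.hasNext()) return false;
        current = window.next();
        return true;
    }

    String getString() { return current; }

    public static void main(String[] args) {
        List<String> backend = new java.util.ArrayList<String>();
        for (int i = 0; i < 25; i++) backend.add("row" + i);
        int[] pos = {0};                                // simulated cursor position
        WindowedRows rs = new WindowedRows(n -> {
            List<String> b = backend.subList(pos[0], Math.min(pos[0] + n, backend.size()));
            pos[0] += b.size();
            return new java.util.ArrayList<String>(b);
        }, 10);
        int count = 0;
        while (rs.next()) count++;                      // fetches of 10, 10, 5, then an empty one
        System.out.println(count + " rows, last = " + rs.getString()); // prints: 25 rows, last = row24
    }
}
```

Keeping an extra batch behind the current position, as Dave suggests, would additionally be needed to support previous()/absolute() without extra MOVE round trips; this sketch covers only forward traversal.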
I expect we can get some sort of implementation\n> > which allows us to move to the end ( move end )\n> >\n> > Dave\n> >\n> > On Fri, 2002-10-11 at 13:12, Dror Matalon wrote:\n> > > Hi,\n> > >\n> > > I'm jumping in late into this discussion but ...\n> > >\n> > > In my mind a lot of these features break the model. From an application\n> > > prespective, if I want to do last, I do a count(*) and then I do a fetch\n> > > with limit; Not quite the same, but all these methods of fetching the\n> > > whole data locally and manipulating it to a large exten defeat the\n> > > purpose. Let the backend do the work, instead of trying to replicate the\n> > > functionality in JDBC.\n> > >\n> > > That said I do understand that some of these are required by the JDBC 2.0\n> > > spec.\n> > >\n> > > Dror\n> > >\n> > > On Fri, Oct 11, 2002 at 01:05:37PM -0400, Aaron Mulder wrote:\n> > > > \tIt wouldn't be bad to start with a naive implementation of\n> > > > last()... If the only problem we have is that last() doesn't perform\n> > > > well, we're probably making good progress. :)\n> > > > \tOn the other hand, I would think the updateable result sets would\n> > > > be the most challenging; does the server provide any analogous features\n> > > > with its cursors?\n> > > >\n> > > > Aaron\n> > > >\n> > > > On 11 Oct 2002, Dave Cramer wrote:\n> > > > > This really is an artifact of the way that postgres gives us the\n> > > > > data.\n> > > > >\n> > > > > When you query the backend you get *all* of the results in the query,\n> > > > > and there is no indication of how many results you are going to get.\n> > > > > In simple selects it would be possible to get some idea by using\n> > > > > count(field), but this wouldn't work nearly enough times to make it\n> > > > > useful. 
So that leaves us with using cursors, which still won't tell\n> > > > > you how many rows you are getting back, but at least you won't have\n> > > > > the memory problems.\n> > > > >\n> > > > > This approach is far from trivial which is why it hasn't been\n> > > > > implemented as of yet, keep in mind that result sets support things\n> > > > > like move(n), first(), last(), the last of which will be the\n> > > > > trickiest. Not to mention updateable result sets.\n> > > > >\n> > > > > As it turns out there is a mechanism to get to the end move 0 in\n> > > > > 'cursor', which currently is being considered a bug.\n> > > > >\n> > > > > Dave\n> > > > >\n> > > > > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > > > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > > > > >Barry,\n> > > > > > > Is it true ?\n> > > > > > >I create table with one column varchar(500) and enter 1 milion\n> > > > > > > rows with length 10-20 character.JDBC query 'select * from a' get\n> > > > > > > error 'out of memory', but psql not.\n> > > > > > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > > > > >\n> > > > > > The way the code works in JDBC is, in my opinion, a little poor but\n> > > > > > possibly mandated by JDBC design specs.\n> > > > > >\n> > > > > > It reads the entire result set from the database backend and caches\n> > > > > > it in a horrible Vector (which should really be a List and which\n> > > > > > should at least make an attempt to get the # of rows ahead of time\n> > > > > > to avoid all the resizing problems).\n> > > > > >\n> > > > > > Then, it doles it out from memory as you go through the ResultSet\n> > > > > > with the next() method.\n> > > > > >\n> > > > > > I would have hoped (but was wrong) that it streamed - WITHOUT\n> > > > > > LOADING THE WHOLE THING - through the result set as each row is\n> > > > > > returned from the backend, thus ensuring that you never use much\n> > > > > > more memory than one line. 
EVEN IF you have to keep the connection\n> > > > > > locked.\n> > > > > >\n> > > > > > The latter is what I expected it to do. The former is what it does.\n> > > > > > So, it necessitates you creating EVERY SELECT query which you think\n> > > > > > has more than a few rows (or which you think COULD have more than a\n> > > > > > few rows, \"few\" being defined by our VM memory limits) into a\n> > > > > > cursor based query. Really klugy. I intend to write a class to do\n> > > > > > that for every SELECT query for me automatically.\n> > > > > >\n> > > > > > Cheers,\n> > > > > >\n> > > > > > Doug\n> > > > > >\n> > > > > > >In C library is 'execute query' without fetch - in jdbc execute\n> > > > > > > fetch all rows\n> > > > > > >and this is problem - I think that executequery must prepare query\n> > > > > > > and fetch (ResultSet.next or ...) must fetch only fetchSize rows.\n> > > > > > >I am not sure, but I think that is problem with jdbc, not\n> > > > > > > postgresql Hackers ?\n> > > > > > >Does psql fetch all rows and if not how many ?\n> > > > > > >Can I change fetch size in psql ?\n> > > > > > >CURSOR , FETCH and MOVE isn't solution.\n> > > > > > >If I use jdbc in third-party IDE, I can't force this solution\n> > > > > > >\n> > > > > > >regards\n> > > > > > >\n> > > > > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > > > > Nick,\n> > > > > > > >\n> > > > > > > > This has been discussed before on this list many times. But\n> > > > > > > > the short answer is that that is how the postgres server\n> > > > > > > > handles queries. If you issue a query the server will return\n> > > > > > > > the entire result. (try the same query in psql and you will\n> > > > > > > > have the same problem). 
To work around this you can use\n> > > > > > > > explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE sql\n> > > > > > > > commands for postgres).\n> > > > > > > >\n> > > > > > > > thanks,\n> > > > > > > > --Barry\n> > > > > > > >\n> > > > > > > > Nick Fankhauser wrote:\n> > > > > > > > > I'm selecting a huge ResultSet from our database- about one\n> > > > > > > > > million rows, with one of the fields being varchar(500). I\n> > > > > > > > > get an out of memory error from java.\n> > > > > > > > >\n> > > > > > > > > If the whole ResultSet gets stashed in memory, this isn't\n> > > > > > > > > really surprising, but I'm wondering why this happens (if it\n> > > > > > > > > does), rather than a subset around the current record being\n> > > > > > > > > cached and other rows being retrieved as needed.\n> > > > > > > > >\n> > > > > > > > > If it turns out that there are good reasons for it to all be\n> > > > > > > > > in memory, then my question is whether there is a better\n> > > > > > > > > approach that people typically use in this situation. For\n> > > > > > > > > now, I'm simply breaking up the select into smaller chunks,\n> > > > > > > > > but that approach won't be satisfactory in the long run.\n> > > > > > > > >\n> > > > > > > > > Thanks\n> > > > > > > > >\n> > > > > > > > > -Nick\n> > > > > > > > >\n> > > > > > > > > -------------------------------------------------------------\n> > > > > > > > >------------ - Nick Fankhauser nickf@ontko.com Phone\n> > > > > > > > > 1.765.935.4283 Fax 1.765.962.9788 Ray Ontko & Co. 
\n> > > > > > > > > Software Consulting Services http://www.ontko.com/\n> > > > > > > > >\n> > > > > > > > >\n> > > > > > > > > ---------------------------(end of\n> > > > > > > > > broadcast)--------------------------- TIP 5: Have you checked\n> > > > > > > > > our extensive FAQ?\n> > > > > > > > >\n> > > > > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > > > > >\n> > > > > > > > ---------------------------(end of\n> > > > > > > > broadcast)--------------------------- TIP 6: Have you searched\n> > > > > > > > our list archives?\n> > > > > > > >\n> > > > > > > > http://archives.postgresql.org\n> > > > > > >\n> > > > > > >---------------------------(end of\n> > > > > > > broadcast)--------------------------- TIP 2: you can get off all\n> > > > > > > lists at once with the unregister command (send \"unregister\n> > > > > > > YourEmailAddressHere\" to majordomo@postgresql.org)\n> > > > > >\n> > > > > > ---------------------------(end of\n> > > > > > broadcast)--------------------------- TIP 6: Have you searched our\n> > > > > > list archives?\n> > > > > >\n> > > > > > http://archives.postgresql.org\n> > > > >\n> > > > > ---------------------------(end of\n> > > > > broadcast)--------------------------- TIP 1: subscribe and\n> > > > > unsubscribe commands go to majordomo@postgresql.org\n> > > >\n> > > > ---------------------------(end of\n> > > > broadcast)--------------------------- TIP 3: if posting/reading through\n> > > > Usenet, please send an appropriate subscribe-nomail command to\n> > > > majordomo@postgresql.org so that your message can get through to the\n> > > > mailing list cleanly\n> > >\n> > > --\n> > > Dror Matalon\n> > > Zapatec Inc\n> > > 1700 MLK Way\n> > > Berkeley, CA 94709\n> > > http://www.zapatec.com\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> > ---------------------------(end of 
broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n\n", "msg_date": "11 Oct 2002 16:26:25 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "There is jxdbcon Postgresql jdbc driver with setFetchSize method.\nLast version don't wokr with pgsql 7.3 and I don't test more.\nI will try next day, when I download pgsql 7.2\n\nregards\nHaris Peco\nOn Friday 11 October 2002 07:59 pm, Dave Cramer wrote:\n> Agreed, but there are selects where count(*) won't work. Even so, what\n> we are talking about here is hiding the implementation of cursors behind\n> the result set. What I would envision is some sort of cacheing where\n> when the user set's the fetchsize to 10 for instance we do the select,\n> and when they ask for next() we check to see if we have these rows in\n> the cache, and go get them if necessary 10 at a time, possibly keeping\n> one set of ten behind where we are and one set of 10 ahead of where we\n> are. So recalling that resultSets have absolute positioning, as well as\n> first(), and last() positioning we need the ability to move with the\n> minimum number of trips to the backend.\n>\n> As it turns out the move command in postgres does support moving to the\n> end (move 0 ); at the moment this is considered a bug, and is on the\n> todo list to be removed. 
I expect we can get some sort of implementation\n> which allows us to move to the end ( move end )\n>\n> Dave\n>\n> On Fri, 2002-10-11 at 13:12, Dror Matalon wrote:\n> > Hi,\n> >\n> > I'm jumping in late into this discussion but ...\n> >\n> > In my mind a lot of these features break the model. From an application\n> > prespective, if I want to do last, I do a count(*) and then I do a fetch\n> > with limit; Not quite the same, but all these methods of fetching the\n> > whole data locally and manipulating it to a large exten defeat the\n> > purpose. Let the backend do the work, instead of trying to replicate the\n> > functionality in JDBC.\n> >\n> > That said I do understand that some of these are required by the JDBC 2.0\n> > spec.\n> >\n> > Dror\n> >\n> > On Fri, Oct 11, 2002 at 01:05:37PM -0400, Aaron Mulder wrote:\n> > > \tIt wouldn't be bad to start with a naive implementation of\n> > > last()... If the only problem we have is that last() doesn't perform\n> > > well, we're probably making good progress. :)\n> > > \tOn the other hand, I would think the updateable result sets would\n> > > be the most challenging; does the server provide any analogous features\n> > > with its cursors?\n> > >\n> > > Aaron\n> > >\n> > > On 11 Oct 2002, Dave Cramer wrote:\n> > > > This really is an artifact of the way that postgres gives us the\n> > > > data.\n> > > >\n> > > > When you query the backend you get *all* of the results in the query,\n> > > > and there is no indication of how many results you are going to get.\n> > > > In simple selects it would be possible to get some idea by using\n> > > > count(field), but this wouldn't work nearly enough times to make it\n> > > > useful. 
So that leaves us with using cursors, which still won't tell\n> > > > you how many rows you are getting back, but at least you won't have\n> > > > the memory problems.\n> > > >\n> > > > This approach is far from trivial which is why it hasn't been\n> > > > implemented as of yet, keep in mind that result sets support things\n> > > > like move(n), first(), last(), the last of which will be the\n> > > > trickiest. Not to mention updateable result sets.\n> > > >\n> > > > As it turns out there is a mechanism to get to the end move 0 in\n> > > > 'cursor', which currently is being considered a bug.\n> > > >\n> > > > Dave\n> > > >\n> > > > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > > > >Barry,\n> > > > > > Is it true ?\n> > > > > >I create table with one column varchar(500) and enter 1 milion\n> > > > > > rows with length 10-20 character.JDBC query 'select * from a' get\n> > > > > > error 'out of memory', but psql not.\n> > > > > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > > > >\n> > > > > The way the code works in JDBC is, in my opinion, a little poor but\n> > > > > possibly mandated by JDBC design specs.\n> > > > >\n> > > > > It reads the entire result set from the database backend and caches\n> > > > > it in a horrible Vector (which should really be a List and which\n> > > > > should at least make an attempt to get the # of rows ahead of time\n> > > > > to avoid all the resizing problems).\n> > > > >\n> > > > > Then, it doles it out from memory as you go through the ResultSet\n> > > > > with the next() method.\n> > > > >\n> > > > > I would have hoped (but was wrong) that it streamed - WITHOUT\n> > > > > LOADING THE WHOLE THING - through the result set as each row is\n> > > > > returned from the backend, thus ensuring that you never use much\n> > > > > more memory than one line. 
EVEN IF you have to keep the connection\n> > > > > locked.\n> > > > >\n> > > > > The latter is what I expected it to do. The former is what it does.\n> > > > > So, it necessitates you creating EVERY SELECT query which you think\n> > > > > has more than a few rows (or which you think COULD have more than a\n> > > > > few rows, \"few\" being defined by our VM memory limits) into a\n> > > > > cursor based query. Really klugy. I intend to write a class to do\n> > > > > that for every SELECT query for me automatically.\n> > > > >\n> > > > > Cheers,\n> > > > >\n> > > > > Doug\n> > > > >\n> > > > > >In C library is 'execute query' without fetch - in jdbc execute\n> > > > > > fetch all rows\n> > > > > >and this is problem - I think that executequery must prepare query\n> > > > > > and fetch (ResultSet.next or ...) must fetch only fetchSize rows.\n> > > > > >I am not sure, but I think that is problem with jdbc, not\n> > > > > > postgresql Hackers ?\n> > > > > >Does psql fetch all rows and if not how many ?\n> > > > > >Can I change fetch size in psql ?\n> > > > > >CURSOR , FETCH and MOVE isn't solution.\n> > > > > >If I use jdbc in third-party IDE, I can't force this solution\n> > > > > >\n> > > > > >regards\n> > > > > >\n> > > > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > > > Nick,\n> > > > > > >\n> > > > > > > This has been discussed before on this list many times. But\n> > > > > > > the short answer is that that is how the postgres server\n> > > > > > > handles queries. If you issue a query the server will return\n> > > > > > > the entire result. (try the same query in psql and you will\n> > > > > > > have the same problem). 
To work around this you can use\n> > > > > > > explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE sql\n> > > > > > > commands for postgres).\n> > > > > > >\n> > > > > > > thanks,\n> > > > > > > --Barry\n> > > > > > >\n> > > > > > > Nick Fankhauser wrote:\n> > > > > > > > I'm selecting a huge ResultSet from our database- about one\n> > > > > > > > million rows, with one of the fields being varchar(500). I\n> > > > > > > > get an out of memory error from java.\n> > > > > > > >\n> > > > > > > > If the whole ResultSet gets stashed in memory, this isn't\n> > > > > > > > really surprising, but I'm wondering why this happens (if it\n> > > > > > > > does), rather than a subset around the current record being\n> > > > > > > > cached and other rows being retrieved as needed.\n> > > > > > > >\n> > > > > > > > If it turns out that there are good reasons for it to all be\n> > > > > > > > in memory, then my question is whether there is a better\n> > > > > > > > approach that people typically use in this situation. For\n> > > > > > > > now, I'm simply breaking up the select into smaller chunks,\n> > > > > > > > but that approach won't be satisfactory in the long run.\n> > > > > > > >\n> > > > > > > > Thanks\n> > > > > > > >\n> > > > > > > > -Nick\n> > > > > > > >\n> > > > > > > > -------------------------------------------------------------\n> > > > > > > >------------ - Nick Fankhauser nickf@ontko.com Phone\n> > > > > > > > 1.765.935.4283 Fax 1.765.962.9788 Ray Ontko & Co. 
\n> > > > > > > > Software Consulting Services http://www.ontko.com/\n> > > > > > > >\n> > > > > > > > ---------------------------(end of\n> > > > > > > > broadcast)--------------------------- TIP 5: Have you checked\n> > > > > > > > our extensive FAQ?\n> > > > > > > >\n> > > > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> > --\n> > Dror Matalon\n> > Zapatec Inc\n> > 1700 MLK Way\n> > Berkeley, CA 94709\n> > http://www.zapatec.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister 
command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Fri, 11 Oct 2002 22:33:24 +0200", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Looking at their code, default fetch size is 1000?\n\nAnyways, I think there is sufficient interest in this that we should\nhave something running soon here\n\nDave\nOn Fri, 2002-10-11 at 17:02, snpe wrote:\n> I am tried with jxdbcon - it don't work with large table, too.\n> 'out of memory' is when executeQuery()\n> \n> regards\n> Haris Peco\n> On Friday 11 October 2002 10:33 pm, snpe wrote:\n> > There is jxdbcon Postgresql jdbc driver with setFetchSize method.\n> > Last version don't wokr with pgsql 7.3 and I don't test more.\n> > I will try next day, when I download pgsql 7.2\n> >\n> > regards\n> > Haris Peco\n> >\n> > On Friday 11 October 2002 07:59 pm, Dave Cramer wrote:\n> > > Agreed, but there are selects where count(*) won't work. Even so, what\n> > > we are talking about here is hiding the implementation of cursors behind\n> > > the result set. What I would envision is some sort of cacheing where\n> > > when the user set's the fetchsize to 10 for instance we do the select,\n> > > and when they ask for next() we check to see if we have these rows in\n> > > the cache, and go get them if necessary 10 at a time, possibly keeping\n> > > one set of ten behind where we are and one set of 10 ahead of where we\n> > > are. So recalling that resultSets have absolute positioning, as well as\n> > > first(), and last() positioning we need the ability to move with the\n> > > minimum number of trips to the backend.\n> > >\n> > > As it turns out the move command in postgres does support moving to the\n> > > end (move 0 ); at the moment this is considered a bug, and is on the\n> > > todo list to be removed. 
I expect we can get some sort of implementation\n> > > which allows us to move to the end ( move end )\n> > >\n> > > Dave\n> > >\n> > > On Fri, 2002-10-11 at 13:12, Dror Matalon wrote:\n> > > > Hi,\n> > > >\n> > > > I'm jumping in late into this discussion but ...\n> > > >\n> > > > In my mind a lot of these features break the model. From an application\n> > > > prespective, if I want to do last, I do a count(*) and then I do a\n> > > > fetch with limit; Not quite the same, but all these methods of fetching\n> > > > the whole data locally and manipulating it to a large exten defeat the\n> > > > purpose. Let the backend do the work, instead of trying to replicate\n> > > > the functionality in JDBC.\n> > > >\n> > > > That said I do understand that some of these are required by the JDBC\n> > > > 2.0 spec.\n> > > >\n> > > > Dror\n> > > >\n> > > > On Fri, Oct 11, 2002 at 01:05:37PM -0400, Aaron Mulder wrote:\n> > > > > \tIt wouldn't be bad to start with a naive implementation of\n> > > > > last()... If the only problem we have is that last() doesn't perform\n> > > > > well, we're probably making good progress. :)\n> > > > > \tOn the other hand, I would think the updateable result sets would\n> > > > > be the most challenging; does the server provide any analogous\n> > > > > features with its cursors?\n> > > > >\n> > > > > Aaron\n> > > > >\n> > > > > On 11 Oct 2002, Dave Cramer wrote:\n> > > > > > This really is an artifact of the way that postgres gives us the\n> > > > > > data.\n> > > > > >\n> > > > > > When you query the backend you get *all* of the results in the\n> > > > > > query, and there is no indication of how many results you are going\n> > > > > > to get. In simple selects it would be possible to get some idea by\n> > > > > > using count(field), but this wouldn't work nearly enough times to\n> > > > > > make it useful. 
So that leaves us with using cursors, which still\n> > > > > > won't tell you how many rows you are getting back, but at least you\n> > > > > > won't have the memory problems.\n> > > > > >\n> > > > > > This approach is far from trivial which is why it hasn't been\n> > > > > > implemented as of yet, keep in mind that result sets support things\n> > > > > > like move(n), first(), last(), the last of which will be the\n> > > > > > trickiest. Not to mention updateable result sets.\n> > > > > >\n> > > > > > As it turns out there is a mechanism to get to the end move 0 in\n> > > > > > 'cursor', which currently is being considered a bug.\n> > > > > >\n> > > > > > Dave\n> > > > > >\n> > > > > > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > > > > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > > > > > >Barry,\n> > > > > > > > Is it true ?\n> > > > > > > >I create table with one column varchar(500) and enter 1 milion\n> > > > > > > > rows with length 10-20 character.JDBC query 'select * from a'\n> > > > > > > > get error 'out of memory', but psql not.\n> > > > > > > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > > > > > >\n> > > > > > > The way the code works in JDBC is, in my opinion, a little poor\n> > > > > > > but possibly mandated by JDBC design specs.\n> > > > > > >\n> > > > > > > It reads the entire result set from the database backend and\n> > > > > > > caches it in a horrible Vector (which should really be a List and\n> > > > > > > which should at least make an attempt to get the # of rows ahead\n> > > > > > > of time to avoid all the resizing problems).\n> > > > > > >\n> > > > > > > Then, it doles it out from memory as you go through the ResultSet\n> > > > > > > with the next() method.\n> > > > > > >\n> > > > > > > I would have hoped (but was wrong) that it streamed - WITHOUT\n> > > > > > > LOADING THE WHOLE THING - through the result set as each row is\n> > > > > > > returned from the backend, thus ensuring that you never use much\n> > 
> > > > > more memory than one line. EVEN IF you have to keep the\n> > > > > > > connection locked.\n> > > > > > >\n> > > > > > > The latter is what I expected it to do. The former is what it\n> > > > > > > does. So, it necessitates you creating EVERY SELECT query which\n> > > > > > > you think has more than a few rows (or which you think COULD have\n> > > > > > > more than a few rows, \"few\" being defined by our VM memory\n> > > > > > > limits) into a cursor based query. Really klugy. I intend to\n> > > > > > > write a class to do that for every SELECT query for me\n> > > > > > > automatically.\n> > > > > > >\n> > > > > > > Cheers,\n> > > > > > >\n> > > > > > > Doug\n> > > > > > >\n> > > > > > > >In C library is 'execute query' without fetch - in jdbc execute\n> > > > > > > > fetch all rows\n> > > > > > > >and this is problem - I think that executequery must prepare\n> > > > > > > > query and fetch (ResultSet.next or ...) must fetch only\n> > > > > > > > fetchSize rows. I am not sure, but I think that is problem with\n> > > > > > > > jdbc, not postgresql Hackers ?\n> > > > > > > >Does psql fetch all rows and if not how many ?\n> > > > > > > >Can I change fetch size in psql ?\n> > > > > > > >CURSOR , FETCH and MOVE isn't solution.\n> > > > > > > >If I use jdbc in third-party IDE, I can't force this solution\n> > > > > > > >\n> > > > > > > >regards\n> > > > > > > >\n> > > > > > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > > > > > Nick,\n> > > > > > > > >\n> > > > > > > > > This has been discussed before on this list many times. But\n> > > > > > > > > the short answer is that that is how the postgres server\n> > > > > > > > > handles queries. If you issue a query the server will return\n> > > > > > > > > the entire result. (try the same query in psql and you will\n> > > > > > > > > have the same problem). 
To work around this you can use\n> > > > > > > > > explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE sql\n> > > > > > > > > commands for postgres).\n> > > > > > > > >\n> > > > > > > > > thanks,\n> > > > > > > > > --Barry\n> > > > > > > > >\n> > > > > > > > > Nick Fankhauser wrote:\n> > > > > > > > > > I'm selecting a huge ResultSet from our database- about one\n> > > > > > > > > > million rows, with one of the fields being varchar(500). I\n> > > > > > > > > > get an out of memory error from java.\n> > > > > > > > > >\n> > > > > > > > > > If the whole ResultSet gets stashed in memory, this isn't\n> > > > > > > > > > really surprising, but I'm wondering why this happens (if\n> > > > > > > > > > it does), rather than a subset around the current record\n> > > > > > > > > > being cached and other rows being retrieved as needed.\n> > > > > > > > > >\n> > > > > > > > > > If it turns out that there are good reasons for it to all\n> > > > > > > > > > be in memory, then my question is whether there is a better\n> > > > > > > > > > approach that people typically use in this situation. For\n> > > > > > > > > > now, I'm simply breaking up the select into smaller chunks,\n> > > > > > > > > > but that approach won't be satisfactory in the long run.\n> > > > > > > > > >\n> > > > > > > > > > Thanks\n> > > > > > > > > >\n> > > > > > > > > > -Nick\n> > > > > > > > > >\n> > > > > > > > > > -----------------------------------------------------------\n> > > > > > > > > >-- ------------ - Nick Fankhauser nickf@ontko.com Phone\n> > > > > > > > > > 1.765.935.4283 Fax 1.765.962.9788 Ray Ontko & Co. 
Software\n> > > > > > > > > > Consulting Services http://www.ontko.com/\n> > > > > > > > > >\n> > > > > > > > > > ---------------------------(end of\n> > > > > > > > > > broadcast)--------------------------- TIP 5: Have you\n> > > > > > > > > > checked our extensive FAQ?\n> > > > > > > > > >\n> > > > > > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > > > --\n> > > > Dror Matalon\n> > > > Zapatec Inc\n> > > > 1700 MLK Way\n> > > > Berkeley, CA 94709\n> > > > http://www.zapatec.com\n> > > >\n> > > > ---------------------------(end of\n> > > > broadcast)--------------------------- TIP 1: subscribe and unsubscribe\n> > > > 
commands go to majordomo@postgresql.org\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n\n\n", "msg_date": "11 Oct 2002 16:38:35 -0400", "msg_from": "Dave Cramer <Dave@micro-automation.net>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "I am tried with jxdbcon - it don't work with large table, too.\n'out of memory' is when executeQuery()\n\nregards\nHaris Peco\nOn Friday 11 October 2002 10:33 pm, snpe wrote:\n> There is jxdbcon Postgresql jdbc driver with setFetchSize method.\n> Last version don't wokr with pgsql 7.3 and I don't test more.\n> I will try next day, when I download pgsql 7.2\n>\n> regards\n> Haris Peco\n>\n> On Friday 11 October 2002 07:59 pm, Dave Cramer wrote:\n> > Agreed, but there are selects where count(*) won't work. Even so, what\n> > we are talking about here is hiding the implementation of cursors behind\n> > the result set. What I would envision is some sort of cacheing where\n> > when the user set's the fetchsize to 10 for instance we do the select,\n> > and when they ask for next() we check to see if we have these rows in\n> > the cache, and go get them if necessary 10 at a time, possibly keeping\n> > one set of ten behind where we are and one set of 10 ahead of where we\n> > are. 
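A rough sketch of the bookkeeping behind the window cache described above (one batch of rows cached, MOVE to reposition the server-side cursor when the caller asks for a row outside the window). The class and method names are hypothetical, and it assumes that after a FETCH the cursor sits just past the last fetched row; a real driver would wrap this around actual FETCH results, and the backward MOVE additionally assumes a scrollable cursor:

```java
// Pure bookkeeping for a cursor-backed ResultSet window: track which absolute
// rows are cached and compute the MOVE/FETCH pair needed to refill the cache.
public class CursorWindow {
    private final int windowSize;
    private int windowStart = -1;   // absolute index of first cached row; -1 = empty

    public CursorWindow(int windowSize) {
        this.windowSize = windowSize;
    }

    // Is absolute row `row` (0-based) currently in the cached window?
    public boolean cached(int row) {
        return windowStart >= 0 && row >= windowStart && row < windowStart + windowSize;
    }

    // Commands to make `row` the first cached row. The cursor is assumed to be
    // positioned just past the previous window after its FETCH completed.
    public String[] refill(String cursor, int row) {
        int cursorPos = (windowStart < 0) ? 0 : windowStart + windowSize;
        int delta = row - cursorPos;
        windowStart = row;
        String move = (delta >= 0)
                ? "MOVE FORWARD " + delta + " IN " + cursor
                : "MOVE BACKWARD " + (-delta) + " IN " + cursor;  // needs scrollability
        return new String[] { move, "FETCH FORWARD " + windowSize + " FROM " + cursor };
    }
}
```

With this shape, next() only touches the backend when it walks off the end of the window, and absolute()/first() become a single MOVE plus FETCH; last() remains the awkward case, since the row count is unknown until the cursor is exhausted.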
So recalling that resultSets have absolute positioning, as well as\n> > first(), and last() positioning we need the ability to move with the\n> > minimum number of trips to the backend.\n> >\n> > As it turns out the move command in postgres does support moving to the\n> > end (move 0 ); at the moment this is considered a bug, and is on the\n> > todo list to be removed. I expect we can get some sort of implementation\n> > which allows us to move to the end ( move end )\n> >\n> > Dave\n> >\n> > On Fri, 2002-10-11 at 13:12, Dror Matalon wrote:\n> > > Hi,\n> > >\n> > > I'm jumping in late into this discussion but ...\n> > >\n> > > In my mind a lot of these features break the model. From an application\n> > > prespective, if I want to do last, I do a count(*) and then I do a\n> > > fetch with limit; Not quite the same, but all these methods of fetching\n> > > the whole data locally and manipulating it to a large exten defeat the\n> > > purpose. Let the backend do the work, instead of trying to replicate\n> > > the functionality in JDBC.\n> > >\n> > > That said I do understand that some of these are required by the JDBC\n> > > 2.0 spec.\n> > >\n> > > Dror\n> > >\n> > > On Fri, Oct 11, 2002 at 01:05:37PM -0400, Aaron Mulder wrote:\n> > > > \tIt wouldn't be bad to start with a naive implementation of\n> > > > last()... If the only problem we have is that last() doesn't perform\n> > > > well, we're probably making good progress. :)\n> > > > \tOn the other hand, I would think the updateable result sets would\n> > > > be the most challenging; does the server provide any analogous\n> > > > features with its cursors?\n> > > >\n> > > > Aaron\n> > > >\n> > > > On 11 Oct 2002, Dave Cramer wrote:\n> > > > > This really is an artifact of the way that postgres gives us the\n> > > > > data.\n> > > > >\n> > > > > When you query the backend you get *all* of the results in the\n> > > > > query, and there is no indication of how many results you are going\n> > > > > to get. 
In simple selects it would be possible to get some idea by\n> > > > > using count(field), but this wouldn't work nearly enough times to\n> > > > > make it useful. So that leaves us with using cursors, which still\n> > > > > won't tell you how many rows you are getting back, but at least you\n> > > > > won't have the memory problems.\n> > > > >\n> > > > > This approach is far from trivial which is why it hasn't been\n> > > > > implemented as of yet, keep in mind that result sets support things\n> > > > > like move(n), first(), last(), the last of which will be the\n> > > > > trickiest. Not to mention updateable result sets.\n> > > > >\n> > > > > As it turns out there is a mechanism to get to the end move 0 in\n> > > > > 'cursor', which currently is being considered a bug.\n> > > > >\n> > > > > Dave\n> > > > >\n> > > > > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > > > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > > > > >Barry,\n> > > > > > > Is it true ?\n> > > > > > >I create table with one column varchar(500) and enter 1 milion\n> > > > > > > rows with length 10-20 character.JDBC query 'select * from a'\n> > > > > > > get error 'out of memory', but psql not.\n> > > > > > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > > > > >\n> > > > > > The way the code works in JDBC is, in my opinion, a little poor\n> > > > > > but possibly mandated by JDBC design specs.\n> > > > > >\n> > > > > > It reads the entire result set from the database backend and\n> > > > > > caches it in a horrible Vector (which should really be a List and\n> > > > > > which should at least make an attempt to get the # of rows ahead\n> > > > > > of time to avoid all the resizing problems).\n> > > > > >\n> > > > > > Then, it doles it out from memory as you go through the ResultSet\n> > > > > > with the next() method.\n> > > > > >\n> > > > > > I would have hoped (but was wrong) that it streamed - WITHOUT\n> > > > > > LOADING THE WHOLE THING - through the result set as each 
row is\n> > > > > > returned from the backend, thus ensuring that you never use much\n> > > > > > more memory than one line. EVEN IF you have to keep the\n> > > > > > connection locked.\n> > > > > >\n> > > > > > The latter is what I expected it to do. The former is what it\n> > > > > > does. So, it necessitates you creating EVERY SELECT query which\n> > > > > > you think has more than a few rows (or which you think COULD have\n> > > > > > more than a few rows, \"few\" being defined by our VM memory\n> > > > > > limits) into a cursor based query. Really klugy. I intend to\n> > > > > > write a class to do that for every SELECT query for me\n> > > > > > automatically.\n> > > > > >\n> > > > > > Cheers,\n> > > > > >\n> > > > > > Doug\n> > > > > >\n> > > > > > >In C library is 'execute query' without fetch - in jdbc execute\n> > > > > > > fetch all rows\n> > > > > > >and this is problem - I think that executequery must prepare\n> > > > > > > query and fetch (ResultSet.next or ...) must fetch only\n> > > > > > > fetchSize rows. I am not sure, but I think that is problem with\n> > > > > > > jdbc, not postgresql Hackers ?\n> > > > > > >Does psql fetch all rows and if not how many ?\n> > > > > > >Can I change fetch size in psql ?\n> > > > > > >CURSOR , FETCH and MOVE isn't solution.\n> > > > > > >If I use jdbc in third-party IDE, I can't force this solution\n> > > > > > >\n> > > > > > >regards\n> > > > > > >\n> > > > > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > > > > Nick,\n> > > > > > > >\n> > > > > > > > This has been discussed before on this list many times. But\n> > > > > > > > the short answer is that that is how the postgres server\n> > > > > > > > handles queries. If you issue a query the server will return\n> > > > > > > > the entire result. (try the same query in psql and you will\n> > > > > > > > have the same problem). 
To work around this you can use\n> > > > > > > > explicit cursors (see the DECLARE CURSOR, FETCH, and MOVE sql\n> > > > > > > > commands for postgres).\n> > > > > > > >\n> > > > > > > > thanks,\n> > > > > > > > --Barry\n> > > > > > > >\n> > > > > > > > Nick Fankhauser wrote:\n> > > > > > > > > I'm selecting a huge ResultSet from our database- about one\n> > > > > > > > > million rows, with one of the fields being varchar(500). I\n> > > > > > > > > get an out of memory error from java.\n> > > > > > > > >\n> > > > > > > > > If the whole ResultSet gets stashed in memory, this isn't\n> > > > > > > > > really surprising, but I'm wondering why this happens (if\n> > > > > > > > > it does), rather than a subset around the current record\n> > > > > > > > > being cached and other rows being retrieved as needed.\n> > > > > > > > >\n> > > > > > > > > If it turns out that there are good reasons for it to all\n> > > > > > > > > be in memory, then my question is whether there is a better\n> > > > > > > > > approach that people typically use in this situation. For\n> > > > > > > > > now, I'm simply breaking up the select into smaller chunks,\n> > > > > > > > > but that approach won't be satisfactory in the long run.\n> > > > > > > > >\n> > > > > > > > > Thanks\n> > > > > > > > >\n> > > > > > > > > -Nick\n> > > > > > > > >\n> > > > > > > > > -----------------------------------------------------------\n> > > > > > > > >-- ------------ - Nick Fankhauser nickf@ontko.com Phone\n> > > > > > > > > 1.765.935.4283 Fax 1.765.962.9788 Ray Ontko & Co. 
Software\n> > > > > > > > > Consulting Services http://www.ontko.com/\n> > > > > > > > >\n> > > > > > > > > ---------------------------(end of\n> > > > > > > > > broadcast)--------------------------- TIP 5: Have you\n> > > > > > > > > checked our extensive FAQ?\n> > > > > > > > >\n> > > > > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> > > --\n> > > Dror Matalon\n> > > Zapatec Inc\n> > > 1700 MLK Way\n> > > Berkeley, CA 94709\n> > > http://www.zapatec.com\n> >\n> > ---------------------------(end of 
broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Fri, 11 Oct 2002 23:02:03 +0200", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "> They also state that they have more sophisticated ALTER TABLE...\n>\n> Only usable feature in their ALTER TABLE that doesn't (yet) exist in\n> PostgreSQL was changing column order (ok, the order by in table creation\n> could be nice), and that's still almost purely cosmetic. Anyway, I could\n> have used that command yesterday. Could this be added to pgsql.\n>\n\nI agree with your message except for that statement. MySQL alter table \nprovides the ability to change column types and cast the records \nautomatically. I remember that feature as really the only thing from MySQL \nthat I've ever missed. \n\nOf course, it's not that wonderful in theory. During development you can \neasily drop/recreate the tables and reload the test data; during production \nyou don't change the data types of your attributes.\n\nBut in practice, during development it's handy sometimes. \n\nRegards,\n\tJeff\n\n\n\n", "msg_date": "Fri, 11 Oct 2002 14:05:46 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." 
}, { "msg_contents": "On Fri, 11 Oct 2002, Jeff Davis wrote:\n\n> > They also state that they have more sophisticated ALTER TABLE...\n> >\n> > Only usable feature in their ALTER TABLE that doesn't (yet) exist in\n> > PostgreSQL was changing column order (ok, the order by in table creation\n> > could be nice), and that's still almost purely cosmetic. Anyway, I could\n> > have used that command yesterday. Could this be added to pgsql.\n> >\n> \n> I agree with your message except for that statement. MySQL alter table \n> provides the ability to change column types and cast the records \n> automatically. I remember that feature as really the only thing from MySQL \n> that I've ever missed. \n> \n> Of course, it's not that wonderful in theory. During development you can \n> easily drop/recreate the tables and reload the test data; during production \n> you don't change the data types of your attributes.\n> \n> But in practice, during development it's handy sometimes. \n\nI still remember a post from somebody on the phpbuilder site that had \nchanged a field from varchar to date and all the dates he had got changed \nto 0000-00-00.\n\nHe most unimpressed, especially since he (being typical of a lot of MySQL \nusers) didn't have a backup.\n\n", "msg_date": "Fri, 11 Oct 2002 15:14:16 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." 
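The server-side commands the out-of-memory discussion keeps pointing at (DECLARE CURSOR, FETCH, MOVE) look roughly like this. A minimal sketch against a hypothetical table name, not driver code:

```sql
-- Hypothetical sketch: walk a large result in windows instead of all at once.
BEGIN;                                   -- cursors only live inside a transaction
DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
FETCH 1000 FROM big_cur;                 -- first window of rows
MOVE 1000 IN big_cur;                    -- skip a window without transferring it
FETCH 1000 FROM big_cur;                 -- next window
CLOSE big_cur;
COMMIT;
```

The client only ever holds one window of rows at a time, which is the behaviour the JDBC messages below are trying to hide behind setFetchSize().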
}, { "msg_contents": "I test Oracle JDeveloper and jdbc driver for postgresql work fine now\nMeanwhile, for production systems I have to have setFetchSize for large tables\nI think that it is same with any Java IDE.\n\nBest solution is that we have only n rows from backend, but I don't know is it \npossible\nregards\nHaris Peco\n\nOn Friday 11 October 2002 10:38 pm, Dave Cramer wrote:\n> Looking at their code, default fetch size is 1000?\n>\n> Anyways, I think there is sufficient interest in this that we should\n> have something running soon here\n>\n> Dave\n>\n> On Fri, 2002-10-11 at 17:02, snpe wrote:\n> > I am tried with jxdbcon - it don't work with large table, too.\n> > 'out of memory' is when executeQuery()\n> >\n> > regards\n> > Haris Peco\n> >\n> > On Friday 11 October 2002 10:33 pm, snpe wrote:\n> > > There is jxdbcon Postgresql jdbc driver with setFetchSize method.\n> > > Last version don't wokr with pgsql 7.3 and I don't test more.\n> > > I will try next day, when I download pgsql 7.2\n> > >\n> > > regards\n> > > Haris Peco\n> > >\n> > > On Friday 11 October 2002 07:59 pm, Dave Cramer wrote:\n> > > > Agreed, but there are selects where count(*) won't work. Even so,\n> > > > what we are talking about here is hiding the implementation of\n> > > > cursors behind the result set. What I would envision is some sort of\n> > > > cacheing where when the user set's the fetchsize to 10 for instance\n> > > > we do the select, and when they ask for next() we check to see if we\n> > > > have these rows in the cache, and go get them if necessary 10 at a\n> > > > time, possibly keeping one set of ten behind where we are and one set\n> > > > of 10 ahead of where we are. 
So recalling that resultSets have\n> > > > absolute positioning, as well as first(), and last() positioning we\n> > > > need the ability to move with the minimum number of trips to the\n> > > > backend.\n> > > >\n> > > > As it turns out the move command in postgres does support moving to\n> > > > the end (move 0 ); at the moment this is considered a bug, and is on\n> > > > the todo list to be removed. I expect we can get some sort of\n> > > > implementation which allows us to move to the end ( move end )\n> > > >\n> > > > Dave\n> > > >\n> > > > On Fri, 2002-10-11 at 13:12, Dror Matalon wrote:\n> > > > > Hi,\n> > > > >\n> > > > > I'm jumping in late into this discussion but ...\n> > > > >\n> > > > > In my mind a lot of these features break the model. From an\n> > > > > application prespective, if I want to do last, I do a count(*) and\n> > > > > then I do a fetch with limit; Not quite the same, but all these\n> > > > > methods of fetching the whole data locally and manipulating it to a\n> > > > > large exten defeat the purpose. Let the backend do the work,\n> > > > > instead of trying to replicate the functionality in JDBC.\n> > > > >\n> > > > > That said I do understand that some of these are required by the\n> > > > > JDBC 2.0 spec.\n> > > > >\n> > > > > Dror\n> > > > >\n> > > > > On Fri, Oct 11, 2002 at 01:05:37PM -0400, Aaron Mulder wrote:\n> > > > > > \tIt wouldn't be bad to start with a naive implementation of\n> > > > > > last()... If the only problem we have is that last() doesn't\n> > > > > > perform well, we're probably making good progress. 
:)\n> > > > > > \tOn the other hand, I would think the updateable result sets\n> > > > > > would be the most challenging; does the server provide any\n> > > > > > analogous features with its cursors?\n> > > > > >\n> > > > > > Aaron\n> > > > > >\n> > > > > > On 11 Oct 2002, Dave Cramer wrote:\n> > > > > > > This really is an artifact of the way that postgres gives us\n> > > > > > > the data.\n> > > > > > >\n> > > > > > > When you query the backend you get *all* of the results in the\n> > > > > > > query, and there is no indication of how many results you are\n> > > > > > > going to get. In simple selects it would be possible to get\n> > > > > > > some idea by using count(field), but this wouldn't work nearly\n> > > > > > > enough times to make it useful. So that leaves us with using\n> > > > > > > cursors, which still won't tell you how many rows you are\n> > > > > > > getting back, but at least you won't have the memory problems.\n> > > > > > >\n> > > > > > > This approach is far from trivial which is why it hasn't been\n> > > > > > > implemented as of yet, keep in mind that result sets support\n> > > > > > > things like move(n), first(), last(), the last of which will be\n> > > > > > > the trickiest. 
Not to mention updateable result sets.\n> > > > > > >\n> > > > > > > As it turns out there is a mechanism to get to the end move 0\n> > > > > > > in 'cursor', which currently is being considered a bug.\n> > > > > > >\n> > > > > > > Dave\n> > > > > > >\n> > > > > > > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > > > > > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > > > > > > >Barry,\n> > > > > > > > > Is it true ?\n> > > > > > > > >I create table with one column varchar(500) and enter 1\n> > > > > > > > > milion rows with length 10-20 character.JDBC query 'select\n> > > > > > > > > * from a' get error 'out of memory', but psql not.\n> > > > > > > > >I insert 8 milion rows and psql work fine yet (slow, but\n> > > > > > > > > work)\n> > > > > > > >\n> > > > > > > > The way the code works in JDBC is, in my opinion, a little\n> > > > > > > > poor but possibly mandated by JDBC design specs.\n> > > > > > > >\n> > > > > > > > It reads the entire result set from the database backend and\n> > > > > > > > caches it in a horrible Vector (which should really be a List\n> > > > > > > > and which should at least make an attempt to get the # of\n> > > > > > > > rows ahead of time to avoid all the resizing problems).\n> > > > > > > >\n> > > > > > > > Then, it doles it out from memory as you go through the\n> > > > > > > > ResultSet with the next() method.\n> > > > > > > >\n> > > > > > > > I would have hoped (but was wrong) that it streamed - WITHOUT\n> > > > > > > > LOADING THE WHOLE THING - through the result set as each row\n> > > > > > > > is returned from the backend, thus ensuring that you never\n> > > > > > > > use much more memory than one line. EVEN IF you have to keep\n> > > > > > > > the connection locked.\n> > > > > > > >\n> > > > > > > > The latter is what I expected it to do. The former is what it\n> > > > > > > > does. 
So, it necessitates you creating EVERY SELECT query\n> > > > > > > > which you think has more than a few rows (or which you think\n> > > > > > > > COULD have more than a few rows, \"few\" being defined by our\n> > > > > > > > VM memory limits) into a cursor based query. Really klugy. I\n> > > > > > > > intend to write a class to do that for every SELECT query for\n> > > > > > > > me automatically.\n> > > > > > > >\n> > > > > > > > Cheers,\n> > > > > > > >\n> > > > > > > > Doug\n> > > > > > > >\n> > > > > > > > >In C library is 'execute query' without fetch - in jdbc\n> > > > > > > > > execute fetch all rows\n> > > > > > > > >and this is problem - I think that executequery must prepare\n> > > > > > > > > query and fetch (ResultSet.next or ...) must fetch only\n> > > > > > > > > fetchSize rows. I am not sure, but I think that is problem\n> > > > > > > > > with jdbc, not postgresql Hackers ?\n> > > > > > > > >Does psql fetch all rows and if not how many ?\n> > > > > > > > >Can I change fetch size in psql ?\n> > > > > > > > >CURSOR , FETCH and MOVE isn't solution.\n> > > > > > > > >If I use jdbc in third-party IDE, I can't force this\n> > > > > > > > > solution\n> > > > > > > > >\n> > > > > > > > >regards\n> > > > > > > > >\n> > > > > > > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > > > > > > Nick,\n> > > > > > > > > >\n> > > > > > > > > > This has been discussed before on this list many times. \n> > > > > > > > > > But the short answer is that that is how the postgres\n> > > > > > > > > > server handles queries. If you issue a query the server\n> > > > > > > > > > will return the entire result. (try the same query in\n> > > > > > > > > > psql and you will have the same problem). 
To work around\n> > > > > > > > > > this you can use explicit cursors (see the DECLARE\n> > > > > > > > > > CURSOR, FETCH, and MOVE sql commands for postgres).\n> > > > > > > > > >\n> > > > > > > > > > thanks,\n> > > > > > > > > > --Barry\n> > > > > > > > > >\n> > > > > > > > > > Nick Fankhauser wrote:\n> > > > > > > > > > > I'm selecting a huge ResultSet from our database- about\n> > > > > > > > > > > one million rows, with one of the fields being\n> > > > > > > > > > > varchar(500). I get an out of memory error from java.\n> > > > > > > > > > >\n> > > > > > > > > > > If the whole ResultSet gets stashed in memory, this\n> > > > > > > > > > > isn't really surprising, but I'm wondering why this\n> > > > > > > > > > > happens (if it does), rather than a subset around the\n> > > > > > > > > > > current record being cached and other rows being\n> > > > > > > > > > > retrieved as needed.\n> > > > > > > > > > >\n> > > > > > > > > > > If it turns out that there are good reasons for it to\n> > > > > > > > > > > all be in memory, then my question is whether there is\n> > > > > > > > > > > a better approach that people typically use in this\n> > > > > > > > > > > situation. For now, I'm simply breaking up the select\n> > > > > > > > > > > into smaller chunks, but that approach won't be\n> > > > > > > > > > > satisfactory in the long run.\n> > > > > > > > > > >\n> > > > > > > > > > > Thanks\n> > > > > > > > > > >\n> > > > > > > > > > > -Nick\n> > > > > > > > > > >\n> > > > > > > > > > > -------------------------------------------------------\n> > > > > > > > > > >---- -- ------------ - Nick Fankhauser nickf@ontko.com \n> > > > > > > > > > > Phone 1.765.935.4283 Fax 1.765.962.9788 Ray Ontko &\n> > > > > > > > > > > Co. 
Software Consulting Services http://www.ontko.com/\n> > > > > > > > > > >\n> > > > > > > > > > >\n> > > > > > > > > > > ---------------------------(end of\n> > > > > > > > > > > broadcast)--------------------------- TIP 5: Have you\n> > > > > > > > > > > checked our extensive FAQ?\n> > > > > > > > > > >\n> > > > > > > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > > > > > > >\n> > > > > > > > > > ---------------------------(end of\n> > > > > > > > > > broadcast)--------------------------- TIP 6: Have you\n> > > > > > > > > > searched our list archives?\n> > > > > > > > > >\n> > > > > > > > > > http://archives.postgresql.org\n> > > > > > > > >\n> > > > > > > > >---------------------------(end of\n> > > > > > > > > broadcast)--------------------------- TIP 2: you can get\n> > > > > > > > > off all lists at once with the unregister command (send\n> > > > > > > > > \"unregister YourEmailAddressHere\" to\n> > > > > > > > > majordomo@postgresql.org)\n> > > > > > > >\n> > > > > > > > ---------------------------(end of\n> > > > > > > > broadcast)--------------------------- TIP 6: Have you\n> > > > > > > > searched our list archives?\n> > > > > > > >\n> > > > > > > > http://archives.postgresql.org\n> > > > > > >\n> > > > > > > ---------------------------(end of\n> > > > > > > broadcast)--------------------------- TIP 1: subscribe and\n> > > > > > > unsubscribe commands go to majordomo@postgresql.org\n> > > > > >\n> > > > > > ---------------------------(end of\n> > > > > > broadcast)--------------------------- TIP 3: if posting/reading\n> > > > > > through Usenet, please send an appropriate subscribe-nomail\n> > > > > > command to majordomo@postgresql.org so that your message can get\n> > > > > > through to the mailing list cleanly\n> > > > >\n> > > > > --\n> > > > > Dror Matalon\n> > > > > Zapatec Inc\n> > > > > 1700 MLK Way\n> > > > > Berkeley, CA 94709\n> > > > > http://www.zapatec.com\n> > > > >\n> > > > > ---------------------------(end of\n> > > 
> > broadcast)--------------------------- TIP 1: subscribe and\n> > > > > unsubscribe commands go to majordomo@postgresql.org\n> > > >\n> > > > ---------------------------(end of\n> > > > broadcast)--------------------------- TIP 2: you can get off all\n> > > > lists at once with the unregister command (send \"unregister\n> > > > YourEmailAddressHere\" to majordomo@postgresql.org)\n> > >\n> > > ---------------------------(end of\n> > > broadcast)--------------------------- TIP 3: if posting/reading through\n> > > Usenet, please send an appropriate subscribe-nomail command to\n> > > majordomo@postgresql.org so that your message can get through to the\n> > > mailing list cleanly\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Fri, 11 Oct 2002 23:18:03 +0200", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset" }, { "msg_contents": "Can You do this :\n We save 1000 (or fetchSize rows) first from beginning\n If table have < 1000 rows we save all rows, but if table have more rows\nand user request 1001 we fetch 1000 (again from begining, but skip 1000 rows \nor maybe continue fetching, if it possible) \n When user request last we fetch all rows, but save only last 1000 etc\n\n We save only fetchSize rows and seek from begining when user request \nbackward (or maybe seek always when user request out our 'fetchSize' window)\n\n This is slow for large tables, but this is solution until developer get us \nbetter solution from backend.If table have < fetchSize rows this is same \ncurrent solution and we can fix minimal fetchSize for better performance with\nsmall tables.\n\nregards\nHaris Peco\nOn Friday 11 October 2002 08:13 pm, Dave Cramer wrote:\n> No,\n>\n> It doesn't have to store them, only display them\n>\n> Dave\n>\n> On Fri, 2002-10-11 at 
12:48, snpe wrote:\n> > Hello,\n> > Does it mean that psql uses cursors ?\n> >\n> > regards\n> > Haris Peco\n> >\n> > On Friday 11 October 2002 05:58 pm, Dave Cramer wrote:\n> > > This really is an artifact of the way that postgres gives us the data.\n> > >\n> > > When you query the backend you get *all* of the results in the query,\n> > > and there is no indication of how many results you are going to get. In\n> > > simple selects it would be possible to get some idea by using\n> > > count(field), but this wouldn't work nearly enough times to make it\n> > > useful. So that leaves us with using cursors, which still won't tell\n> > > you how many rows you are getting back, but at least you won't have the\n> > > memory problems.\n> > >\n> > > This approach is far from trivial which is why it hasn't been\n> > > implemented as of yet, keep in mind that result sets support things\n> > > like move(n), first(), last(), the last of which will be the trickiest.\n> > > Not to mention updateable result sets.\n> > >\n> > > As it turns out there is a mechanism to get to the end move 0 in\n> > > 'cursor', which currently is being considered a bug.\n> > >\n> > > Dave\n> > >\n> > > On Fri, 2002-10-11 at 11:44, Doug Fields wrote:\n> > > > At 08:27 AM 10/11/2002, snpe wrote:\n> > > > >Barry,\n> > > > > Is it true ?\n> > > > >I create table with one column varchar(500) and enter 1 milion rows\n> > > > > with length 10-20 character.JDBC query 'select * from a' get error\n> > > > > 'out of memory', but psql not.\n> > > > >I insert 8 milion rows and psql work fine yet (slow, but work)\n> > > >\n> > > > The way the code works in JDBC is, in my opinion, a little poor but\n> > > > possibly mandated by JDBC design specs.\n> > > >\n> > > > It reads the entire result set from the database backend and caches\n> > > > it in a horrible Vector (which should really be a List and which\n> > > > should at least make an attempt to get the # of rows ahead of time to\n> > > > avoid all the resizing 
problems).\n> > > >\n> > > > Then, it doles it out from memory as you go through the ResultSet\n> > > > with the next() method.\n> > > >\n> > > > I would have hoped (but was wrong) that it streamed - WITHOUT LOADING\n> > > > THE WHOLE THING - through the result set as each row is returned from\n> > > > the backend, thus ensuring that you never use much more memory than\n> > > > one line. EVEN IF you have to keep the connection locked.\n> > > >\n> > > > The latter is what I expected it to do. The former is what it does.\n> > > > So, it necessitates you creating EVERY SELECT query which you think\n> > > > has more than a few rows (or which you think COULD have more than a\n> > > > few rows, \"few\" being defined by our VM memory limits) into a cursor\n> > > > based query. Really klugy. I intend to write a class to do that for\n> > > > every SELECT query for me automatically.\n> > > >\n> > > > Cheers,\n> > > >\n> > > > Doug\n> > > >\n> > > > >In C library is 'execute query' without fetch - in jdbc execute\n> > > > > fetch all rows\n> > > > >and this is problem - I think that executequery must prepare query\n> > > > > and fetch (ResultSet.next or ...) must fetch only fetchSize rows. I\n> > > > > am not sure, but I think that is problem with jdbc, not postgresql\n> > > > > Hackers ?\n> > > > >Does psql fetch all rows and if not how many ?\n> > > > >Can I change fetch size in psql ?\n> > > > >CURSOR , FETCH and MOVE isn't solution.\n> > > > >If I use jdbc in third-party IDE, I can't force this solution\n> > > > >\n> > > > >regards\n> > > > >\n> > > > >On Thursday 10 October 2002 06:40 pm, Barry Lind wrote:\n> > > > > > Nick,\n> > > > > >\n> > > > > > This has been discussed before on this list many times. But the\n> > > > > > short answer is that that is how the postgres server handles\n> > > > > > queries. If you issue a query the server will return the entire\n> > > > > > result. (try the same query in psql and you will have the same\n> > > > > > problem). 
To work around this you can use explicit cursors (see\n> > > > > > the DECLARE CURSOR, FETCH, and MOVE sql commands for postgres).\n> > > > > >\n> > > > > > thanks,\n> > > > > > --Barry\n> > > > > >\n> > > > > > Nick Fankhauser wrote:\n> > > > > > > I'm selecting a huge ResultSet from our database- about one\n> > > > > > > million rows, with one of the fields being varchar(500). I get\n> > > > > > > an out of memory error from java.\n> > > > > > >\n> > > > > > > If the whole ResultSet gets stashed in memory, this isn't\n> > > > > > > really surprising, but I'm wondering why this happens (if it\n> > > > > > > does), rather than a subset around the current record being\n> > > > > > > cached and other rows being retrieved as needed.\n> > > > > > >\n> > > > > > > If it turns out that there are good reasons for it to all be in\n> > > > > > > memory, then my question is whether there is a better approach\n> > > > > > > that people typically use in this situation. For now, I'm\n> > > > > > > simply breaking up the select into smaller chunks, but that\n> > > > > > > approach won't be satisfactory in the long run.\n> > > > > > >\n> > > > > > > Thanks\n> > > > > > >\n> > > > > > > -Nick\n> > > > > > >\n> > > > > > > ---------------------------------------------------------------\n> > > > > > >---- ------ - Nick Fankhauser nickf@ontko.com Phone\n> > > > > > > 1.765.935.4283 Fax 1.765.962.9788 Ray Ontko & Co. 
Software\n> > > > > > > Consulting Services http://www.ontko.com/\n\n", "msg_date": "Fri, 11 Oct 2002 23:42:59 +0200", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: Out of memory error on huge resultset"
}, { "msg_contents": "scott.marlowe wrote:\n> On Fri, 11 Oct 2002, Jeff Davis wrote:\n> \n>>I agree with your message except for that statement. 
MySQL alter table \n>>provides the ability to change column types and cast the records \n>>automatically. I remember that feature as really the only thing from MySQL \n>>that I've ever missed. \n>>\n>>Of course, it's not that wonderful in theory. During development you can \n>>easily drop/recreate the tables and reload the test data; during production \n>>you don't change the data types of your attributes.\n>>\n>>But in practice, during development it's handy sometimes. \n> \n> \n> I still remember a post from somebody on the phpbuilder site that had \n> changed a field from varchar to date and all the dates he had got changed \n> to 0000-00-00.\n> \n> He most unimpressed, especially since he (being typical of a lot of MySQL \n> users) didn't have a backup.\n\nCouldn't he just do ROLLBACK? ;-)\n\n(for the humor impaired, that's a joke...)\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n", "msg_date": "Fri, 11 Oct 2002 18:38:44 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." }, { "msg_contents": ">\n> I still remember a post from somebody on the phpbuilder site that had\n> changed a field from varchar to date and all the dates he had got changed\n> to 0000-00-00.\n>\n> He most unimpressed, especially since he (being typical of a lot of MySQL\n> users) didn't have a backup.\n>\n\nAh, yes. Classic.\n\nI was talking about a development scenario. Anyone who changes a huge amount \nof important data to a new form without a clearly defined algorithm is not \nmaking a wise choice. That's kind of like if you have a perl script operating \non an important file: you don't want it to just kill all your data, so you do \na few tests first.\n\nAnd it really is a minor matter of convenience. I end up dropping and \nrecreating all my tables a lot in the early stages of development, which is \nmildly annoying. 
Certainly not as bad, I suppose, as if you're led to believe \nthat a feature does something safely, and it kills all your data.\n\nSo, you're right. It's probably better that it's never implemented.\n\nRegards,\n\tJeff\n\n", "msg_date": "Fri, 11 Oct 2002 19:08:18 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." }, { "msg_contents": "On Fri, Oct 11, 2002 at 07:08:18PM -0700, Jeff Davis wrote:\n\n> And it really is a minor matter of convenience. I end up dropping and \n> recreating all my tables a lot in the early stages of development, which is \n> mildly annoying. Certainly not as bad, I suppose, as if you're led to believe \n> that a feature does something safely, and it kills all your data.\n\nNow that ALTER TABLE DROP COLUMN is implemented, there probably isn't\nany more the need to do such frequent drop/create of tables.\n\nAnd things just keep getting better and better. This is really amazing.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"We are who we choose to be\", sang the goldfinch\nwhen the sun is high (Sandman)\n", "msg_date": "Fri, 11 Oct 2002 22:16:24 -0400", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." }, { "msg_contents": "\nOh yes, I agree. ALTER TABLE ... DROP COLUMN helps out a lot. I actually don't \nuse that for much yet because 7.3 is still in beta. However, I certainly \ncan't complain to the developers for it since it's already developed :)\n\nI am consistently amazed by every minor version release. If postgres had a \nmarketing team it would be at version 37.3 by now. 
In my last email I agreed \nwith Scott Marlowe that postgres is better off without the casting of an \nentire column, since that's kind of a dangeous procedure and can be completed \nin a round-about (read: explicit) way by postgres anyway, that doesn't lose \nyour data until after you've had a chance to look at the new stuff.\n\nRegards,\n\tJeff\n\nOn Friday 11 October 2002 07:16 pm, you wrote:\n> On Fri, Oct 11, 2002 at 07:08:18PM -0700, Jeff Davis wrote:\n> > And it really is a minor matter of convenience. I end up dropping and\n> > recreating all my tables a lot in the early stages of development, which\n> > is mildly annoying. Certainly not as bad, I suppose, as if you're led to\n> > believe that a feature does something safely, and it kills all your data.\n>\n> Now that ALTER TABLE DROP COLUMN is implemented, there probably isn't\n> any more the need to do such frequent drop/create of tables.\n>\n> And things just keep getting better and better. This is really amazing.\n\n", "msg_date": "Fri, 11 Oct 2002 21:18:23 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." }, { "msg_contents": "On 12 Oct 2002, Hannu Krosing wrote:\n\n> Alvaro Herrera kirjutas L, 12.10.2002 kell 04:16:\n> > On Fri, Oct 11, 2002 at 07:08:18PM -0700, Jeff Davis wrote:\n> > \n> > > And it really is a minor matter of convenience. I end up dropping and \n> > > recreating all my tables a lot in the early stages of development, which is \n> > > mildly annoying. Certainly not as bad, I suppose, as if you're led to believe \n> > > that a feature does something safely, and it kills all your data.\n> > \n> > Now that ALTER TABLE DROP COLUMN is implemented, there probably isn't\n> > any more the need to do such frequent drop/create of tables.\n> \n> Did attlognum's (for changing column order) get implemented for 7.2 ?\n\nI cannot think of any reason why changing column order should be\nimplemented in Postgres. 
Seems like a waste of time/more code bloat for\nsomething which is strictly asthetic.\n\nRegardless, I do have collegues/clients who ask when such a feature will\nbe implemented. Why is this useful?\n\nGavin\n\n", "msg_date": "Sat, 12 Oct 2002 18:37:08 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "Alvaro Herrera kirjutas L, 12.10.2002 kell 04:16:\n> On Fri, Oct 11, 2002 at 07:08:18PM -0700, Jeff Davis wrote:\n> \n> > And it really is a minor matter of convenience. I end up dropping and \n> > recreating all my tables a lot in the early stages of development, which is \n> > mildly annoying. Certainly not as bad, I suppose, as if you're led to believe \n> > that a feature does something safely, and it kills all your data.\n> \n> Now that ALTER TABLE DROP COLUMN is implemented, there probably isn't\n> any more the need to do such frequent drop/create of tables.\n\nDid attlognum's (for changing column order) get implemented for 7.2 ?\n\n------------\nHannu\n\n", "msg_date": "12 Oct 2002 10:57:32 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." }, { "msg_contents": "\n> I cannot think of any reason why changing column order should be\n> implemented in Postgres. Seems like a waste of time/more code bloat for\n> something which is strictly asthetic.\n\nWhat about copy? AFAIK, copy doesn't allow column names being specified,\nso it's not purely aesthetic...\n\n", "msg_date": "Sat, 12 Oct 2002 12:43:37 +0300 (EEST)", "msg_from": "Antti Haapala <antti.haapala@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "> >\n> > Did attlognum's (for changing column order) get implemented for 7.2 ?\n>\n> I cannot think of any reason why changing column order should be\n> implemented in Postgres. 
Seems like a waste of time/more code bloat for\n> something which is strictly asthetic.\n>\n> Regardless, I do have collegues/clients who ask when such a feature will\n> be implemented. Why is this useful?\n>\n\nI think even \"asthetic\" might go too far. It seems mostly irrelevent except \nfor people who are obsessive compulsive and operate in interactive psql a \nlot. It's marginally simpler to get the columns ordered the way you want so \nthat you can just do \"SELECT * ...\" rather than \"SELECT att0,att1,... ...\" at \nthe interactive psql prompt, and still get the columns in your favorite \norder.\n\nAs far as I can tell, the order the attributes are returned makes no \ndifference in a client application, unless you're referencing attributes by \nnumber. All applications that I've made or seen all use the name instead, and \nI've never heard otherwise, or heard any advantage to using numbers to \nreference columns. \n\nWhen someone asks, ask them \"why?\". I'd be interested to know if they have \nsome other reason. I would think that if they absolutely wanted to fine-tune \nthe order of columns they'd use a view (seems a little easier than \ncontinually changing order around by individual SQL statements). \n\nRegards,\n\tJeff\n", "msg_date": "Sat, 12 Oct 2002 02:54:20 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "On 12 Oct 2002 at 2:54, Jeff Davis wrote:\n\n> As far as I can tell, the order the attributes are returned makes no \n> difference in a client application, unless you're referencing attributes by \n> number. All applications that I've made or seen all use the name instead, and \n> I've never heard otherwise, or heard any advantage to using numbers to \n> reference columns. \n\nEven in that case you can obtain field number for a given name and vise versa..\n\n> When someone asks, ask them \"why?\". 
I'd be interested to know if they have \n> some other reason. I would think that if they absolutely wanted to fine-tune \n> the order of columns they'd use a view (seems a little easier than \n> continually changing the order around with individual SQL statements). \n\nSounds fine, but what about that \"continually changing\"? A view needs a \nchange only if it alters the fields selected, the tables to select from, or the selection \ncriteria. Field order does not figure in there.\n\nBye\n Shridhar\n\n--\nQOTD:\t\"A child of 5 could understand this! Fetch me a child of 5.\"\n\n", "msg_date": "Sat, 12 Oct 2002 16:20:03 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "Hannu Krosing wrote:\n> Alvaro Herrera wrote on Sat, 12.10.2002 at 04:16:\n> > On Fri, Oct 11, 2002 at 07:08:18PM -0700, Jeff Davis wrote:\n> > \n> > > And it really is a minor matter of convenience. I end up dropping and \n> > > recreating all my tables a lot in the early stages of development, which is \n> > > mildly annoying. Certainly not as bad, I suppose, as if you're led to believe \n> > > that a feature does something safely, and it kills all your data.\n> > \n> > Now that ALTER TABLE DROP COLUMN is implemented, there probably isn't\n> > any more need to do such frequent drop/create of tables.\n> \n> Did attlognums (for changing column order) get implemented for 7.2?\n\nNo, changing column order isn't even on the TODO list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 12 Oct 2002 10:37:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: MySQL vs PostgreSQL." 
}, { "msg_contents": "On Sat, Oct 12, 2002 at 12:43:37 +0300,\n Antti Haapala <antti.haapala@iki.fi> wrote:\n> \n> > I cannot think of any reason why changing column order should be\n> > implemented in Postgres. Seems like a waste of time/more code bloat for\n> > something which is strictly aesthetic.\n> \n> What about copy? AFAIK, copy doesn't allow column names to be specified,\n> so it's not purely aesthetic...\n\nThe SQL COPY command does (at least in 7.3). The \\copy psql command\ndoesn't seem to allow this though.\n", "msg_date": "Sat, 12 Oct 2002 10:45:13 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "On Saturday 12 October 2002 09:02, Shridhar Daithankar wrote:\n> On 12 Oct 2002 at 11:36, Darko Prenosil wrote:\n> > On Friday 11 October 2002 12:38, Shridhar Daithankar wrote:\n> > > On 11 Oct 2002 at 16:20, Antti Haapala wrote:\n> > > > Check out:\n> > > > http://www.mysql.com/doc/en/MySQL-PostgreSQL_features.html\n> > >\n> > > Well, I guess there are many threads on this. You can dig around the\n> > > archives.\n> > >\n> > > > > Upgrading MySQL Server is painless. When you are upgrading MySQL\n> > > > > Server, you don't need to dump/restore your data, as you have to do\n> > > > > with most PostgreSQL upgrades.\n> > > >\n> > > > Ok... this is true, but not so hard - yesterday I installed 7.3b2\n> > > > onto my linux box.\n> > >\n> > > Well, that remains a point. Imagine a 100GB database on a 150GB disk\n> > > array. How do you dump and reload? In-place conversion of data is an\n> > > absolutely necessary feature, and it's already on the TODO list.\n> >\n> > From the PostgreSQL 7.3 documentation:\n> >\n> > Use compressed dumps. Use your favorite compression program, for example\n> > gzip. pg_dump dbname | gzip > filename.gz\n>\n> Yes, but that may not be enough. Stretch the situation: a 300GB database, 350GB of\n> space. GZip can't compress better than 3:1. 
And don't think it's\n> imagination. I am preparing a database of 600GB in the near future. I don't want\n> to provide 1TB of space just to include a re-dump.\n>\nWhere do you store your regular backup (the one you use for security reasons, not \nfor a version change)? Or are you not doing backups at all?\n", "msg_date": "Sat, 12 Oct 2002 17:58:53 -0100", "msg_from": "Darko Prenosil <darko.prenosil@finteh.hr>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MySQL vs PostgreSQL." }, { "msg_contents": "Bruno Wolff III <bruno@wolff.to> writes:\n> On Sat, Oct 12, 2002 at 12:43:37 +0300,\n> Antti Haapala <antti.haapala@iki.fi> wrote:\n>> What about copy? AFAIK, copy doesn't allow column names to be specified,\n>> so it's not purely aesthetic...\n\n> The SQL COPY command does (at least in 7.3). The \\copy psql command\n> doesn't seem to allow this though.\n\nThat's an oversight; \\copy should have been fixed for 7.3.\n\nDo we want to look at this as a bug (okay to fix for 7.3) or a new\nfeature (wait for 7.4)?\n\nI see something that I think is a must-fix omission in the same code:\nit should allow a schema-qualified table name. So I'm inclined to fix\nboth problems now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Oct 2002 16:36:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "\\copy needs work (was Re: Changing Column Order)" }, { "msg_contents": "Tom Lane wrote:\n> Bruno Wolff III <bruno@wolff.to> writes:\n> > On Sat, Oct 12, 2002 at 12:43:37 +0300,\n> > Antti Haapala <antti.haapala@iki.fi> wrote:\n> >> What about copy? AFAIK, copy doesn't allow column names to be specified,\n> >> so it's not purely aesthetic...\n> \n> > The SQL COPY command does (at least in 7.3). 
The \\copy psql command\n> > doesn't seem to allow this though.\n> \n> That's an oversight; \\copy should have been fixed for 7.3.\n> \n> Do we want to look at this as a bug (okay to fix for 7.3) or a new\n> feature (wait for 7.4)?\n> \n> I see something that I think is a must-fix omission in the same code:\n> it should allow a schema-qualified table name. So I'm inclined to fix\n> both problems now.\n\nI don't think we can say \\copy missing columns is a bug; we never had\nit in previous releases. Seems like a missing feature. The COPY schema\nnames fix seems valid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 13 Oct 2002 23:45:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\copy needs work (was Re: Changing Column Order)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Do we want to look at this as a bug (okay to fix for 7.3) or a new\n>> feature (wait for 7.4)?\n\n> I don't think we can say \\copy missing columns is a bug; we never had\n> it in previous releases. Seems like a missing feature. The COPY schema\n> names fix seems valid.\n\nWell, we never had schema names in previous releases either. So I'm not\nsure that I see a bright line between these items. 
The real issue is\nthat psql's \\copy has failed to track the capabilities of backend COPY.\nI think we should just fix it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Oct 2002 23:53:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\copy needs work (was Re: Changing Column Order) " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Do we want to look at this as a bug (okay to fix for 7.3) or a new\n> >> feature (wait for 7.4)?\n> \n> > I don't think we can say \\copy missing columns is a bug; we never had\n> > it in previous releases. Seems like a missing feature. The COPY schema\n> > names fix seems valid.\n> \n> Well, we never had schema names in previous releases either. So I'm not\n> sure that I see a bright line between these items. The real issue is\n> that psql's \\copy has failed to track the capabilities of backend COPY.\n> I think we should just fix it.\n\nOK, I added it to the open items list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 00:12:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\copy needs work (was Re: Changing Column Order)" }, { "msg_contents": "On 12 Oct 2002 at 17:58, Darko Prenosil wrote:\n\n> On Saturday 12 October 2002 09:02, Shridhar Daithankar wrote:\n> > Yes, but that may not be enough. Stretch the situation: a 300GB database, 350GB of\n> > space. GZip can't compress better than 3:1. And don't think it's\n> > imagination. I am preparing a database of 600GB in the near future. I don't want\n> > to provide 1TB of space just to include a re-dump.\n> >\n> Where do you store your regular backup (the one you use for security reasons, not \n> for a version change)? 
Or are you not doing backups at all?\n\nNo regular backups. Data gets recycled at fixed intervals; it's not stored \npermanently anyway. And this is not a single-machine database. It's a cluster \nwith redundant components like RAID etc., so the risk goes further down.\n\nLucky me... I didn't have to devise a regular backup scheme for such a database.\n\n\n\nBye\n Shridhar\n\n--\nReal Time, adj.:\tHere and now, as opposed to fake time, which only occurs there \nand then.\n\n", "msg_date": "Mon, 14 Oct 2002 11:43:46 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MySQL vs PostgreSQL." }, { "msg_contents": "On Sat, 2002-10-12 at 11:37, Gavin Sherry wrote:\n\n> I cannot think of any reason why changing column order should be\n> implemented in Postgres. Seems like a waste of time/more code bloat for\n> something which is strictly aesthetic.\n> \n> Regardless, I do have colleagues/clients who ask when such a feature will\n> be implemented. Why is this useful?\n\nDoes column ordering have any effect on the physical tuple layout? I've\nheard discussions about keeping fixed-size fields at the beginning of\nthe tuple and similar.\n\nSorry for the lame question. :-)\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n", "msg_date": "14 Oct 2002 16:39:37 +0300", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "Alessio Bragadini wrote:\n> On Sat, 2002-10-12 at 11:37, Gavin Sherry wrote:\n> \n> > I cannot think of any reason why changing column order should be\n> > implemented in Postgres. 
Seems like a waste of time/more code bloat for\n> > something which is strictly aesthetic.\n> > \n> > Regardless, I do have colleagues/clients who ask when such a feature will\n> > be implemented. Why is this useful?\n> \n> Does column ordering have any effect on the physical tuple layout? I've\n> heard discussions about keeping fixed-size fields at the beginning of\n> the tuple and similar.\n> \n> Sorry for the lame question. :-)\n\nYes, column ordering matches physical column ordering in the file, and\nyes, there is a small penalty for accessing any columns after the first\nvariable-length column (pg_type.typlen < 0). CHAR() used to be a fixed-length\ncolumn, but with TOAST (the oversized-attribute storage technique) it became variable-length\ntoo. I don't think there is much of a performance hit, though.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 11:04:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "On Mon, Oct 14, 2002 at 11:04:07AM -0400, Bruce Momjian wrote:\n> Alessio Bragadini wrote:\n> > On Sat, 2002-10-12 at 11:37, Gavin Sherry wrote:\n> > \n> > > I cannot think of any reason why changing column order should be\n> > > implemented in Postgres. Seems like a waste of time/more code bloat for\n> > > something which is strictly aesthetic.\n> > \n> > Does column ordering have any effect on the physical tuple layout? 
I've\n> > heard discussions about keeping fixed-size fields at the beginning of\n> > the tuple and similar.\n> \n> Yes, column ordering matches physical column ordering in the file, and\n> yes, there is a small penalty for accessing any columns after the first\n> variable-length column (pg_type.typlen < 0).\n\nAnd note that if column ordering were to be implemented through the use\nof attlognum or something similar, the physical ordering would not be\naffected. The only way to physically reorder the columns would be to\ncompletely rebuild the table.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"Aprende a avergonzarte mas ante ti que ante los demas\" (Democrito)\n", "msg_date": "Mon, 14 Oct 2002 12:15:59 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Alessio Bragadini wrote:\n> > On Sat, 2002-10-12 at 11:37, Gavin Sherry wrote:\n> >\n> > > I cannot think of any reason why changing column order should be\n> > > implemented in Postgres. Seems like a waste of time/more code bloat for\n> > > something which is strictly aesthetic.\n> > >\n> > > Regardless, I do have colleagues/clients who ask when such a feature will\n> > > be implemented. Why is this useful?\n> >\n> > Does column ordering have any effect on the physical tuple layout? I've\n> > heard discussions about keeping fixed-size fields at the beginning of\n> > the tuple and similar.\n> >\n> > Sorry for the lame question. :-)\n> \n> Yes, column ordering matches physical column ordering in the file, and\n> yes, there is a small penalty for accessing any columns after the first\n> variable-length column (pg_type.typlen < 0). CHAR() used to be a fixed-length\n> column, but with TOAST (the oversized-attribute storage technique) it became variable-length\n> too. I don't think there is much of a performance hit, though.\n\nWhen was char() fixed size? 
We had fixed-size things like char, char2,\nchar4 ... char16. But char() is internally bpchar() and has always been\nvariable-length.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Tue, 15 Oct 2002 10:37:41 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > \n> > Alessio Bragadini wrote:\n> > > On Sat, 2002-10-12 at 11:37, Gavin Sherry wrote:\n> > >\n> > > > I cannot think of any reason why changing column order should be\n> > > > implemented in Postgres. Seems like a waste of time/more code bloat for\n> > > > something which is strictly aesthetic.\n> > > >\n> > > > Regardless, I do have colleagues/clients who ask when such a feature will\n> > > > be implemented. Why is this useful?\n> > >\n> > > Does column ordering have any effect on the physical tuple layout? I've\n> > > heard discussions about keeping fixed-size fields at the beginning of\n> > > the tuple and similar.\n> > >\n> > > Sorry for the lame question. :-)\n> > \n> > Yes, column ordering matches physical column ordering in the file, and\n> > yes, there is a small penalty for accessing any columns after the first\n> > variable-length column (pg_type.typlen < 0). CHAR() used to be a fixed-length\n> > column, but with TOAST (the oversized-attribute storage technique) it became variable-length\n> > too. I don't think there is much of a performance hit, though.\n> \n> When was char() fixed size? We had fixed-size things like char, char2,\n> char4 ... char16. 
But char() is internally bpchar() and has always been\n> variable-length.\n\nchar() was fixed size only in that you could cache the column offsets\nfor char() because it was always the same width on disk before TOAST.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 21:18:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Jan Wieck wrote:\n>> When was char() fixed size?\n\n> char() was fixed size only in that you could cache the column offsets\n> for char() because it was always the same width on disk before TOAST.\n\nBut that was already broken by MULTIBYTE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 23:15:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.) " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Jan Wieck wrote:\n> >> When was char() fixed size?\n> \n> > char() was fixed size only in that you could cache the column offsets\n> > for char() because it was always the same width on disk before TOAST.\n> \n> But that was already broken by MULTIBYTE.\n\nYes, I think there was conditional code that had the optimization only\nfor non-multibyte servers. Of course, now multibyte is the default.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 23:19:09 -0400 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing Column Order (Was Re: MySQL vs PostgreSQL.)" }, { "msg_contents": "Dave Cramer wrote:\n> Currently there is a TODO list item to have MOVE 0 not position to the\n> end of the cursor.\n> \n> Moving to the end of the cursor is useful; can we keep the behaviour and\n> change it to MOVE END, or just leave it the way it is?\n\nI did some research on this. It turns out the parser uses 0 for ALL, so\nwhen you do a FETCH ALL it is passing zero. Now, when you do MOVE 0,\nyou are really asking for FETCH ALL and all the tuples are thrown away\nbecause of the MOVE.\n\nSo, that is why MOVE 0 goes to the end of the cursor. One idea would be\nfor MOVE 0 to actually move nothing, but JDBC and others need the\nability to move to the end of the cursor, perhaps to then back up a certain\namount and read from there. Seems MOVE 0 is the logical way to do that.\n(I can't think of another reasonable value.)\n\nI have the following patch which just documents the fact that MOVE 0\ngoes to the end of the cursor. It does not change any behavior, just\ndocuments it.\n\nIf/when I apply the patch, I will remove the TODO item. Another idea\nwould be to require MOVE END to move to the end.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/ref/move.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/ref/move.sgml,v\nretrieving revision 1.13\ndiff -c -c -r1.13 move.sgml\n*** doc/src/sgml/ref/move.sgml\t21 Apr 2002 19:02:39 -0000\t1.13\n--- doc/src/sgml/ref/move.sgml\t26 Oct 2002 20:01:15 -0000\n***************\n*** 37,44 ****\n <command>MOVE</command> allows a user to move cursor position a specified\n number of rows.\n <command>MOVE</command> works like the <command>FETCH</command> command,\n! but only positions the cursor and does\n! not return rows.\n </para>\n <para>\n Refer to \n--- 37,44 ----\n <command>MOVE</command> allows a user to move cursor position a specified\n number of rows.\n <command>MOVE</command> works like the <command>FETCH</command> command,\n! but only positions the cursor and does not return rows. The special\n! direction <literal>0</> moves to the end of the cursor.\n </para>\n <para>\n Refer to \nIndex: src/backend/executor/execMain.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/executor/execMain.c,v\nretrieving revision 1.180\ndiff -c -c -r1.180 execMain.c\n*** src/backend/executor/execMain.c\t14 Oct 2002 16:51:30 -0000\t1.180\n--- src/backend/executor/execMain.c\t26 Oct 2002 20:01:20 -0000\n***************\n*** 1119,1125 ****\n \n \t\t/*\n \t\t * check our tuple count.. if we've processed the proper number\n! \t\t * then quit, else loop again and process more tuples..\n \t\t */\n \t\tcurrent_tuple_count++;\n \t\tif (numberTuples == current_tuple_count)\n--- 1119,1127 ----\n \n \t\t/*\n \t\t * check our tuple count.. if we've processed the proper number\n! \t\t * then quit, else loop again and process more tuples.\n! \t\t * If numberTuples is zero, it means we have done MOVE 0\n! 
\t\t * or FETCH ALL and we want to go to the end of the portal.\n \t\t */\n \t\tcurrent_tuple_count++;\n \t\tif (numberTuples == current_tuple_count)\nIndex: src/backend/tcop/utility.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/tcop/utility.c,v\nretrieving revision 1.180\ndiff -c -c -r1.180 utility.c\n*** src/backend/tcop/utility.c\t21 Oct 2002 20:31:52 -0000\t1.180\n--- src/backend/tcop/utility.c\t26 Oct 2002 20:01:29 -0000\n***************\n*** 263,270 ****\n \n \t\t\t\t/*\n \t\t\t\t * parser ensures that count is >= 0 and 'fetch ALL' -> 0\n \t\t\t\t */\n- \n \t\t\t\tcount = stmt->howMany;\n \t\t\t\tPerformPortalFetch(portalName, forward, count,\n \t\t\t\t\t\t\t\t (stmt->ismove) ? None : dest,\n--- 263,270 ----\n \n \t\t\t\t/*\n \t\t\t\t * parser ensures that count is >= 0 and 'fetch ALL' -> 0\n+ \t\t\t\t * MOVE 0 is equivalent to fetch ALL with no returned tuples.\n \t\t\t\t */\n \t\t\t\tcount = stmt->howMany;\n \t\t\t\tPerformPortalFetch(portalName, forward, count,\n \t\t\t\t\t\t\t\t (stmt->ismove) ? None : dest,", "msg_date": "Wed, 30 Oct 2002 00:04:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I did some research on this. It turns out the parser uses 0 for ALL, so\n> when you do a FETCH ALL it is passing zero. Now, when you do MOVE 0,\n> you are really asking for FETCH ALL and all the tuples are thrown away\n> because of the MOVE.\n\nYeah. I think this is a bug and \"MOVE 0\" ought to be a no-op ... but\nchanging it requires a different parsetree representation for MOVE ALL,\nwhich is tedious enough that it hasn't gotten done yet.\n\n> I have the following patch which just documents the fact that MOVE 0\n> goes to the end of the cursor. 
It does not change any behavior, just\n> document it.\n\nIt should be documented as behavior that is likely to change. Also,\nI believe FETCH 0 has the same issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Oct 2002 13:19:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour " }, { "msg_contents": "Bruce Momjian writes:\n\n> So, that is why MOVE 0 goes to the end of the cursor. One idea would be\n> for MOVE 0 to actually move nothing, but jdbc and others need the\n> ability to move the end of the cursor, perhaps to then back up a certain\n> amount and read from there. Seems MOVE 0 is the logical way to do that.\n> (I can't think of another reasonable value).\n\nIt would seem more logical and reasonable for MOVE 0 to do nothing and\nhave some special syntax such as MOVE LAST to move to the end. (MOVE LAST\nwould actually be consistent with the standard syntax FETCH LAST.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 30 Oct 2002 19:32:12 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > So, that is why MOVE 0 goes to the end of the cursor. One idea would be\n> > for MOVE 0 to actually move nothing, but jdbc and others need the\n> > ability to move the end of the cursor, perhaps to then back up a certain\n> > amount and read from there. Seems MOVE 0 is the logical way to do that.\n> > (I can't think of another reasonable value).\n> \n> It would seem more logical and reasonable for MOVE 0 to do nothing and\n> have some special syntax such as MOVE LAST to move to the end. (MOVE LAST\n> would actually be consistent with the standard syntax FETCH LAST.)\n\nYea, I started thinking and we need to get MOVE/FETCH to make sense. \nThe following patch makes FETCH/MOVE 0 do nothing, and FETCH LAST move\nto the end. 
I was going to use the word END, but if LAST is more\nstandard, we will use that. It uses INT_MAX in the grammar for FETCH\nALL/MOVE LAST, but maps that to zero so it is consistent in the\nexecutor code.\n\nI will keep this patch for 7.4.\n\nJDBC folks, I realize you need this. Seems you will have to use MOVE 0\nfor 7.3 and MOVE LAST for 7.4.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/ref/move.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/ref/move.sgml,v\nretrieving revision 1.13\ndiff -c -c -r1.13 move.sgml\n*** doc/src/sgml/ref/move.sgml\t21 Apr 2002 19:02:39 -0000\t1.13\n--- doc/src/sgml/ref/move.sgml\t31 Oct 2002 01:15:42 -0000\n***************\n*** 21,27 ****\n <date>1999-07-20</date>\n </refsynopsisdivinfo>\n <synopsis>\n! MOVE [ <replaceable class=\"PARAMETER\">direction</replaceable> ] [ <replaceable class=\"PARAMETER\">count</replaceable> ] \n { IN | FROM } <replaceable class=\"PARAMETER\">cursor</replaceable>\n </synopsis>\n </refsynopsisdiv>\n--- 21,28 ----\n <date>1999-07-20</date>\n </refsynopsisdivinfo>\n <synopsis>\n! MOVE [ <replaceable class=\"PARAMETER\">direction</replaceable> ] \n! 
{<replaceable class=\"PARAMETER\">count</replaceable> | LAST }\n { IN | FROM } <replaceable class=\"PARAMETER\">cursor</replaceable>\n </synopsis>\n </refsynopsisdiv>\nIndex: src/backend/commands/portalcmds.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/commands/portalcmds.c,v\nretrieving revision 1.3\ndiff -c -c -r1.3 portalcmds.c\n*** src/backend/commands/portalcmds.c\t4 Sep 2002 20:31:15 -0000\t1.3\n--- src/backend/commands/portalcmds.c\t31 Oct 2002 01:15:44 -0000\n***************\n*** 15,20 ****\n--- 15,22 ----\n \n #include \"postgres.h\"\n \n+ #include <limits.h>\n+ \n #include \"commands/portalcmds.h\"\n #include \"executor/executor.h\"\n \n***************\n*** 55,61 ****\n *\n *\tname: name of portal\n *\tforward: forward or backward fetch?\n! *\tcount: # of tuples to fetch (0 implies all)\n *\tdest: where to send results\n *\tcompletionTag: points to a buffer of size COMPLETION_TAG_BUFSIZE\n *\t\tin which to store a command completion status string.\n--- 57,63 ----\n *\n *\tname: name of portal\n *\tforward: forward or backward fetch?\n! 
*\tcount: # of tuples to fetch\n *\tdest: where to send results\n *\tcompletionTag: points to a buffer of size COMPLETION_TAG_BUFSIZE\n *\t\tin which to store a command completion status string.\n***************\n*** 100,105 ****\n--- 102,115 ----\n \t\treturn;\n \t}\n \n+ \t/* If zero count, we are done */\n+ \tif (count == 0)\n+ \t\treturn;\n+ \n+ \t/* Internally, zero count processes all portal rows */\n+ \tif (count == INT_MAX)\n+ \t\tcount = 0;\n+ \t\t\n \t/*\n \t * switch into the portal context\n \t */\nIndex: src/backend/executor/execMain.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/executor/execMain.c,v\nretrieving revision 1.180\ndiff -c -c -r1.180 execMain.c\n*** src/backend/executor/execMain.c\t14 Oct 2002 16:51:30 -0000\t1.180\n--- src/backend/executor/execMain.c\t31 Oct 2002 01:15:50 -0000\n***************\n*** 1119,1125 ****\n \n \t\t/*\n \t\t * check our tuple count.. if we've processed the proper number\n! \t\t * then quit, else loop again and process more tuples..\n \t\t */\n \t\tcurrent_tuple_count++;\n \t\tif (numberTuples == current_tuple_count)\n--- 1119,1126 ----\n \n \t\t/*\n \t\t * check our tuple count.. if we've processed the proper number\n! \t\t * then quit, else loop again and process more tuples. Zero\n! 
\t\t * number_tuples means no limit.\n \t\t */\n \t\tcurrent_tuple_count++;\n \t\tif (numberTuples == current_tuple_count)\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/parser/gram.y,v\nretrieving revision 2.370\ndiff -c -c -r2.370 gram.y\n*** src/backend/parser/gram.y\t22 Sep 2002 21:44:43 -0000\t2.370\n--- src/backend/parser/gram.y\t31 Oct 2002 01:16:14 -0000\n***************\n*** 49,54 ****\n--- 49,55 ----\n #include \"postgres.h\"\n \n #include <ctype.h>\n+ #include <limits.h>\n \n #include \"access/htup.h\"\n #include \"catalog/index.h\"\n***************\n*** 357,363 ****\n \tJOIN\n \tKEY\n \n! \tLANCOMPILER LANGUAGE LEADING LEFT LEVEL LIKE LIMIT\n \tLISTEN LOAD LOCAL LOCALTIME LOCALTIMESTAMP LOCATION\n \tLOCK_P\n \n--- 358,364 ----\n \tJOIN\n \tKEY\n \n! \tLANCOMPILER LANGUAGE LAST LEADING LEFT LEVEL LIKE LIMIT\n \tLISTEN LOAD LOCAL LOCALTIME LOCALTIMESTAMP LOCATION\n \tLOCK_P\n \n***************\n*** 2644,2650 ****\n \t\t\t\t\tif ($3 < 0)\n \t\t\t\t\t{\n \t\t\t\t\t\t$3 = -$3;\n! \t\t\t\t\t\t$2 = (($2 == FORWARD)? BACKWARD: FORWARD);\n \t\t\t\t\t}\n \t\t\t\t\tn->direction = $2;\n \t\t\t\t\tn->howMany = $3;\n--- 2645,2651 ----\n \t\t\t\t\tif ($3 < 0)\n \t\t\t\t\t{\n \t\t\t\t\t\t$3 = -$3;\n! \t\t\t\t\t\t$2 = (($2 == FORWARD) ? BACKWARD: FORWARD);\n \t\t\t\t\t}\n \t\t\t\t\tn->direction = $2;\n \t\t\t\t\tn->howMany = $3;\n***************\n*** 2712,2719 ****\n fetch_how_many:\n \t\t\tIconst\t\t\t\t\t\t\t\t\t{ $$ = $1; }\n \t\t\t| '-' Iconst\t\t\t\t\t\t\t{ $$ = - $2; }\n! \t\t\t\t\t\t\t\t\t\t\t/* 0 means fetch all tuples*/\n! \t\t\t| ALL\t\t\t\t\t\t\t\t\t{ $$ = 0; }\n \t\t\t| NEXT\t\t\t\t\t\t\t\t\t{ $$ = 1; }\n \t\t\t| PRIOR\t\t\t\t\t\t\t\t\t{ $$ = -1; }\n \t\t;\n--- 2713,2720 ----\n fetch_how_many:\n \t\t\tIconst\t\t\t\t\t\t\t\t\t{ $$ = $1; }\n \t\t\t| '-' Iconst\t\t\t\t\t\t\t{ $$ = - $2; }\n! \t\t\t| ALL\t\t\t\t\t\t\t\t\t{ $$ = INT_MAX; }\n! 
\t\t\t| LAST\t\t\t\t\t\t\t\t\t{ $$ = INT_MAX; }\n \t\t\t| NEXT\t\t\t\t\t\t\t\t\t{ $$ = 1; }\n \t\t\t| PRIOR\t\t\t\t\t\t\t\t\t{ $$ = -1; }\n \t\t;\n***************\n*** 7098,7103 ****\n--- 7099,7105 ----\n \t\t\t| KEY\n \t\t\t| LANGUAGE\n \t\t\t| LANCOMPILER\n+ \t\t\t| LAST\n \t\t\t| LEVEL\n \t\t\t| LISTEN\n \t\t\t| LOAD\nIndex: src/backend/parser/keywords.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/parser/keywords.c,v\nretrieving revision 1.127\ndiff -c -c -r1.127 keywords.c\n*** src/backend/parser/keywords.c\t18 Sep 2002 21:35:22 -0000\t1.127\n--- src/backend/parser/keywords.c\t31 Oct 2002 01:16:15 -0000\n***************\n*** 172,177 ****\n--- 172,178 ----\n \t{\"key\", KEY},\n \t{\"lancompiler\", LANCOMPILER},\n \t{\"language\", LANGUAGE},\n+ \t{\"last\", LAST},\n \t{\"leading\", LEADING},\n \t{\"left\", LEFT},\n \t{\"level\", LEVEL},\nIndex: src/backend/tcop/utility.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/tcop/utility.c,v\nretrieving revision 1.180\ndiff -c -c -r1.180 utility.c\n*** src/backend/tcop/utility.c\t21 Oct 2002 20:31:52 -0000\t1.180\n--- src/backend/tcop/utility.c\t31 Oct 2002 01:16:18 -0000\n***************\n*** 262,270 ****\n \t\t\t\tforward = (bool) (stmt->direction == FORWARD);\n \n \t\t\t\t/*\n! \t\t\t\t * parser ensures that count is >= 0 and 'fetch ALL' -> 0\n \t\t\t\t */\n- \n \t\t\t\tcount = stmt->howMany;\n \t\t\t\tPerformPortalFetch(portalName, forward, count,\n \t\t\t\t\t\t\t\t (stmt->ismove) ? None : dest,\n--- 262,269 ----\n \t\t\t\tforward = (bool) (stmt->direction == FORWARD);\n \n \t\t\t\t/*\n! \t\t\t\t * parser ensures that count is >= 0\n \t\t\t\t */\n \t\t\t\tcount = stmt->howMany;\n \t\t\t\tPerformPortalFetch(portalName, forward, count,\n \t\t\t\t\t\t\t\t (stmt->ismove) ? 
None : dest,", "msg_date": "Wed, 30 Oct 2002 23:52:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The following patch makes FETCH/MOVE 0 do nothing, and FETCH LAST move\n> to the end.\n\nDo not hack up PerformPortalFetch; put the special case for INT_MAX in\nutility.c's FetchStmt code, instead. As-is, you probably broke other\ncallers of PerformPortalFetch.\n\nBTW, there's a comment in parsenodes.h that needs to be fixed too:\n\n int howMany; /* amount to fetch (\"ALL\" --> 0) */\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 31 Oct 2002 00:16:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour " }, { "msg_contents": "On Fri, Nov 01, 2002 at 12:43:48PM +0200, am@fx.ro wrote:\n> Hello everyone!\n> \n> I have 2 questions:\n> \n> --1-- Some days ago, I've been trying to get the number of tuples\n> that FETCH ALL would return, *before* fetching anything.\n> (the program is written in C++, using libpq ; PostgreSQL 7.2.3).\n\nWell, to get an answer, the server needs to execute the entire query. It\nwon't do that unless you explicitly ask for it.\n\n> The solution i've found was something like:\n> \n> int nr_tuples;\n> \n> res = PQexec(conn, \"MOVE ALL in CURS\"); \n> sscanf(PQcmdStatus(res),\"MOVE %i\",&nr_tuples);\n> PQclear(res);\n\nThat would work. But why do you need to know the total beforehand? You could\njust do a FETCH ALL and then use PQntuples to get the number. If you're\nusing it to decide whether to provide a Next link, just FETCH one more item\nthan you intend to display and if you get it you display the link.\n\n> I'm wondering: is there any better way to get that number?\n> \n> ( just an idea: maybe it would be useful to make PQcmdTuples\n> work for MOVE commands ... ? )\n\nInteresting idea. 
I'm not sure whether MOVE actually executes the query or\nnot.\n\n> --2-- I found out that if i reach the end of the cursor, and want\n> to move backwards, i have to increase the MOVE command's argument by 1:\n\nNo idea, the cursor has probably moved off the end to indicate the query is\ndone. So you need the extra one to move it back. That's just a guess though.\n\nHope this helps,\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.", "msg_date": "Fri, 1 Nov 2002 19:23:29 +1100", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: Cursors: getting the number of tuples; moving backwards" }, { "msg_contents": "Hello everyone!\n\nI have 2 questions:\n\n--1-- Some days ago, I've been trying to get the number of tuples\nthat FETCH ALL would return, *before* fetching anything.\n(the program is written in C++, using libpq ; PostgreSQL 7.2.3).\n\nThe solution i've found was something like:\n\n int nr_tuples;\n\n res = PQexec(conn, \"MOVE ALL in CURS\"); \n sscanf(PQcmdStatus(res),\"MOVE %i\",&nr_tuples);\n PQclear(res);\n \nI'm wondering: is there any better way to get that number?\n\n( just an idea: maybe it would be useful to make PQcmdTuples\n work for MOVE commands ... ? )\n\n\n--2-- I found out that if i reach the end of the cursor, and want\nto move backwards, i have to increase the MOVE command's argument by 1:\n \n MOVE ALL in CURS --> i get the number of tuples: 590\n\n MOVE -590 in CURS\n FETCH ALL --> i get all tuples except the first one\n\n MOVE -591 in CURS\n FETCH ALL --> i get all the tuples\n\n MOVE -1 in CURS\n FETCH ALL --> i get nothing !\n\n MOVE -2 in CURS \n FETCH ALL --> i get the last tuple\n\nThis happens only if the current position is at the end of the cursor. 
\n \nIs this the normal behaviour?\n \n\n\nBest regards,\nAdrian Maier\n(am@fx.ro)\n", "msg_date": "Fri, 1 Nov 2002 12:43:48 +0200", "msg_from": "am@fx.ro", "msg_from_op": false, "msg_subject": "Cursors: getting the number of tuples; moving backwards" }, { "msg_contents": "On Fri, Nov 01, 2002 at 07:23:29PM +1100, Martijn van Oosterhout wrote:\n> > The solution i've found was something like:\n> > \n> > int nr_tuples;\n> > \n> > res = PQexec(conn, \"MOVE ALL in CURS\"); \n> > sscanf(PQcmdStatus(res),\"MOVE %i\",&nr_tuples);\n> > PQclear(res);\n> \n> That would work. But why do you need to know the total beforehand? You could\n> just do a FETCH ALL and then use PQntuples to get the number. \n\nIf the table has, let's say, 10000 rows, it's unlikely that the user\nwill ever browse all of them ( my program permits the user to set some\nfilters ; the interface is ncurses-based). Fetching everything \nwould be unnecessary. \n\nSo, for speed reasons, i prefer to fetch maximum 500 rows.\nBut i want to display in the screen's corner the total number\nof rows . \n\n> > I'm wondering: is there any better way to get that number?\n> > \n> > ( just an idea: maybe it would be useful to make PQcmdTuples\n> > work for MOVE commands ... ? )\n> \n> Interesting idea. I'm not sure whether MOVE actually executes the query or\n> not.\n\nI guess it doesn't execute the whole query. MOVE ALL is *much*\nfaster than FETCH ALL + PQcmdTuples\n\n> > --2-- I found out that if i reach the end of the cursor, and want\n> > to move backwards, i have to increase the MOVE command's argument by 1:\n> \n> No idea, the cursor has probably moved off the end to indicate the query is\n> done. So you need the extra one to move it back. 
That's just a guess though.\n \nYeah, this could be the explanation.\n\n\nThanks for your answer\n\nAdrian Maier\n\n\n", "msg_date": "Fri, 1 Nov 2002 20:14:33 +0200", "msg_from": "am@fx.ro", "msg_from_op": false, "msg_subject": "Re: Cursors: getting the number of tuples; moving backwards" }, { "msg_contents": "On Fri, Nov 01, 2002 at 08:14:33PM +0200, am@fx.ro wrote:\n> On Fri, Nov 01, 2002 at 07:23:29PM +1100, Martijn van Oosterhout wrote:\n> > That would work. But why do you need to know the total beforehand? You could\n> > just do a FETCH ALL and then use PQntuples to get the number. \n> \n> If the table has, let's say, 10000 rows, it's unlikely that the user\n> will ever browse all of them ( my program permits the user to set some\n> filters ; the interface is ncurses-based). Fetching everything \n> would be unnecessary. \n> \n> So, for speed reasons, i prefer to fetch maximum 500 rows.\n> But i want to display in the screen's corner the total number\n> of rows . \n\nMaybe do what google does. If there's lots of rows, give an estimate. I\ndon't know how they do it but if there are more than 1000 rows then the user\nprobably won't care if you wrote 1000, 2000 or a million.\n\nMaybe some whacky curve fitting. If there's still a 98% match after 100\nmatches, there must be around 5000 matches.\n\n> > Interesting idea. I'm not sure whether MOVE actually executes the query or\n> > not.\n> \n> I guess it doesn't execute the whole query. MOVE ALL is *much*\n> faster than FETCH ALL + PQcmdTuples\n\nCurious. 
I wonder how it does it then.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.", "msg_date": "Sat, 2 Nov 2002 13:39:16 +1100", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: Cursors: getting the number of tuples; moving backwards" }, { "msg_contents": "Martijn van Oosterhout <kleptog@svana.org> writes:\n>> I guess it doesn't execute the whole query. MOVE ALL is *much*\n>> faster than FETCH ALL + PQcmdTuples\n\n> Curious. I wonder how it does it then.\n\nMOVE does execute the query, it just doesn't ship the tuples to the\nclient. This would save some formatting overhead (no need to run\nthe datatype I/O conversion procedures), but unless you have a slow\nnetwork link between client and server I would not expect it to be\n\"much\" faster ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Nov 2002 22:03:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cursors: getting the number of tuples; moving backwards " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The following patch makes FETCH/MOVE 0 do nothing, and FETCH LAST move\n> > to the end.\n> \n> Do not hack up PerformPortalFetch; put the special case for INT_MAX in\n> utility.c's FetchStmt code, instead. As-is, you probably broke other\n> callers of PerformPortalFetch.\n\nI thought about that, but I need to fail if the cursor name is invalid. \nThose tests are done in PerformPortalFetch(). The good news is that no\none else call it. 
Other ideas?\n \n> BTW, there's a comment in parsenodes.h that needs to be fixed too:\n> \n> int howMany; /* amount to fetch (\"ALL\" --> 0) */\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 1 Nov 2002 22:14:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Do not hack up PerformPortalFetch; put the special case for INT_MAX in\n>> utility.c's FetchStmt code, instead. As-is, you probably broke other\n>> callers of PerformPortalFetch.\n\n> I thought about that, but I need to fail if the cursor name is invalid. \n\nWhat has that got to do with it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Nov 2002 23:49:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Do not hack up PerformPortalFetch; put the special case for INT_MAX in\n> >> utility.c's FetchStmt code, instead. As-is, you probably broke other\n> >> callers of PerformPortalFetch.\n> \n> > I thought about that, but I need to fail if the cursor name is invalid. \n> \n> What has that got to do with it?\n\nIf I put the 'return' for 0 MOVE/FETCH in utility.c's FetchStmt code, I\nwill not get the checks for invalid cursor names, and I will not get the\nproper return tag. I don't see how to do anything in utility.c. 
I\nassume this is the code you want to move to utility.c:\n\t\n\t+ /* If zero count, we are done */\n\t+ if (count == 0)\n\t+ return;\n\t+ \n\t+ /* Internally, zero count processes all portal rows */\n\t+ if (count == INT_MAX)\n\t+ count = 0;\n\t+ \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 2 Nov 2002 00:41:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>>> I thought about that, but I need to fail if the cursor name is invalid. \n>> \n>> What has that got to do with it?\n\n> If I put the 'return' for 0 MOVE/FETCH in utility.c's FetchStmt code, I\n> will not get the checks for invalid cursor names, and I will not get the\n> proper return tag.\n\nOh, I see. Yeah, you're probably right, we have to change the calling\nconvention for PerformPortalFetch.\n\nBTW, portalcmds.h also contains a comment that would need to be fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Nov 2002 10:06:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour " }, { "msg_contents": "On Fri, Nov 01, 2002 at 10:03:17PM -0500, Tom Lane wrote:\n> MOVE does execute the query, it just doesn't ship the tuples to the\n> client. 
This would save some formatting overhead (no need to run\n> the datatype I/O conversion procedures), but unless you have a slow\n> network link between client and server I would not expect it to be\n> \"much\" faster ...\n\nIt must be the fact that the computer is quite old : Cyrix 6x86 166Mhz.\n( this is not the deplyoment machine ).\n\nUsing MOVE is about 5 times faster in my case :\nFor 150784 tuples in the table, FETCH-ing took about 1m30 ,\nwhile MOVE-ing took only about 17sec. \n\n | Real | User | Sys\n-------------------------------------------------------------------\nselect * from PRODTEST | 1m30.843s | 0m42.960s | 0m1.720s\n-------------------------------------------------------------------\ndeclare cursor... + FETCH | 1m32.835s | 0m42.680s | 0m1.780s\n-------------------------------------------------------------------\ndeclare cursor... + MOVE | 0m17.215s | 0m0.030s | 0m0.030s\n-------------------------------------------------------------------\n( i used commands like: time psql -f test.sql db_rex \n to get those timings )\n\n\nThe difference must be smaller on fast machines.\n\nSo i guess that my computer is pretty good when it comes to finding\nperformance problems in applications ;-)\n\n\nBye,\nAdrian Maier\n(am@fx.ro)\n", "msg_date": "Sat, 2 Nov 2002 21:33:43 +0200", "msg_from": "am@fx.ro", "msg_from_op": false, "msg_subject": "Re: Cursors: getting the number of tuples; moving backwards" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >>> I thought about that, but I need to fail if the cursor name is invalid. \n> >> \n> >> What has that got to do with it?\n> \n> > If I put the 'return' for 0 MOVE/FETCH in utility.c's FetchStmt code, I\n> > will not get the checks for invalid cursor names, and I will not get the\n> > proper return tag.\n> \n> Oh, I see. 
Yeah, you're probably right, we have to change the calling\n> convention for PerformPortalFetch.\n> \n> BTW, portalcmds.h also contains a comment that would need to be fixed.\n\nUpdated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 2 Nov 2002 20:21:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: move 0 behaviour" } ]
[ { "msg_contents": "Did anything come of this discussion on whether SET initiates a \ntransaction or not?\n\nIn summary what is the right way to deal with setting autocommit in clients?\n\nthanks,\n--Barry\n\n\n-------- Original Message --------\nSubject: Re: [JDBC] Patch for handling \"autocommit=false\" in postgresql.conf\nDate: Tue, 17 Sep 2002 10:26:14 -0400\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nTo: snpe <snpe@snpe.co.yu>\nCC: pgsql-jdbc <pgsql-jdbc@postgresql.org>\nReferences: <200209171425.50940.snpe@snpe.co.yu>\n\nsnpe <snpe@snpe.co.yu> writes:\n > + // handle autocommit=false in postgresql.conf\n > + if (haveMinimumServerVersion(\"7.3\")) {\n > + ExecSQL(\"set autocommit to on; commit;\");\n > + }\n\nThe above will fill people's logs with\n\tWARNING: COMMIT: no transaction in progress\nif they don't have autocommit off.\n\nUse\n\tbegin; set autocommit to on; commit;\ninstead.\n\nI would recommend holding off on this patch altogether, actually,\nuntil we decide whether SET will be a transaction-initiating\ncommand or not. 
I would still like to persuade the hackers community\nthat it should not be.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n\n\n\n", "msg_date": "Thu, 10 Oct 2002 17:57:55 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "[Fwd: Re: [JDBC] Patch for handling \"autocommit=false\" in\n\tpostgresql.conf]" }, { "msg_contents": "Barry Lind wrote:\n> Did anything come of this discussion on whether SET initiates a \n> transaction or not?\n\nSET does not start a multi-statement transaction when autocommit is off.\n\n> In summary what is the right way to deal with setting autocommit in clients?\n\nI guess just 'set autocommit to on' will do it.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 11 Oct 2002 01:17:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: [JDBC] Patch for handling \"autocommit=false\"" }, { "msg_contents": "Barry,\nNever mind.\nPatch with 'begin;set autocommit to on;commit' work fine for JDBC spec.\n\nregards,\nHaris Peco \nOn Friday 11 October 2002 02:57 am, Barry Lind wrote:\n> Did anything come of this discussion on whether SET initiates a\n> transaction or not?\n>\n> In summary what is the right way to deal with setting autocommit in\n> clients?\n>\n> thanks,\n> --Barry\n>\n>\n> -------- Original Message --------\n> Subject: Re: [JDBC] Patch for handling \"autocommit=false\" in\n> postgresql.conf Date: Tue, 17 Sep 2002 10:26:14 -0400\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> To: snpe <snpe@snpe.co.yu>\n> CC: pgsql-jdbc <pgsql-jdbc@postgresql.org>\n> References: <200209171425.50940.snpe@snpe.co.yu>\n>\n> snpe 
<snpe@snpe.co.yu> writes:\n> > + // handle autocommit=false in postgresql.conf\n> > + if (haveMinimumServerVersion(\"7.3\")) {\n> > + ExecSQL(\"set autocommit to on;\n> > commit;\"); + }\n>\n> The above will fill people's logs with\n> \tWARNING: COMMIT: no transaction in progress\n> if they don't have autocommit off.\n>\n> Use\n> \tbegin; set autocommit to on; commit;\n> instead.\n>\n> I would recommend holding off on this patch altogether, actually,\n> until we decide whether SET will be a transaction-initiating\n> command or not. I would still like to persuade the hackers community\n> that it should not be.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Fri, 11 Oct 2002 14:37:31 +0200", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: [JDBC] Patch for handling \"autocommit=false\" in\n\tpostgresql.conf]" } ]
[ { "msg_contents": "Denis A Ustimenko wrote:\n> Hello Bruce!\n> \n> You have patched fe-connect.c and dropeed out one check in line 1078:\n> \n> < while (rp == NULL || remains.tv_sec > 0 || (remains.tv_sec == 0 && remains.tv_usec > 0))\n> ---\n> > while (rp == NULL || remains.tv_sec > 0 || remains.tv_usec > 0)\n> \n> As I understand it is dangerous. The remains.tv_usec can be greater than zero while remains.tv_sec is below zero. It must exit form the loop in that conditions.\n\n[ CC to hackers.]\n\nWell, I can see how it could go negative, but if that happens, we have a\nbigger problem. tv_sec on my system is an unsigned int, so I think the\nvalue will show as huge rather than negative. If you want negative\nvalues, I think you are going to need to use a real signed integer. \nWould you send a context diff (diff -c) against CVS with a fix?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 23:48:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Diff for src/interfaces/libpq/fe-connect.c between version 1.195" }, { "msg_contents": "OK, I have applied the following patch which should fix the problem by\npreventing tv_sec from becoming negative.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Denis A Ustimenko wrote:\n> > Hello Bruce!\n> > \n> > You have patched fe-connect.c and dropeed out one check in line 1078:\n> > \n> > < while (rp == NULL || remains.tv_sec > 0 || (remains.tv_sec == 0 && remains.tv_usec > 0))\n> > ---\n> > > while (rp == NULL || remains.tv_sec > 0 || remains.tv_usec > 0)\n> > \n> > As I understand it is dangerous. The remains.tv_usec can be greater than zero while remains.tv_sec is below zero. 
It must exit form the loop in that conditions.\n> \n> [ CC to hackers.]\n> \n> Well, I can see how it could go negative, but if that happens, we have a\n> bigger problem. tv_sec on my system is an unsigned int, so I think the\n> value will show as huge rather than negative. If you want negative\n> values, I think you are going to need to use a real signed integer. \n> Would you send a context diff (diff -c) against CVS with a fix?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/FAQ/FAQ.html\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/FAQ/FAQ.html,v\nretrieving revision 1.155\ndiff -c -c -r1.155 FAQ.html\n*** doc/src/FAQ/FAQ.html\t10 Oct 2002 03:15:19 -0000\t1.155\n--- doc/src/FAQ/FAQ.html\t11 Oct 2002 04:05:56 -0000\n***************\n*** 143,148 ****\n--- 143,149 ----\n from a function?<BR>\n <A href=\"#4.26\">4.26</A>) Why can't I reliably create/drop\n temporary tables in PL/PgSQL functions?<BR>\n+ <A href=\"#4.27\">4.27</A>) What replication options are available?<BR>\n \n \n <H2 align=\"center\">Extending PostgreSQL</H2>\n***************\n*** 1346,1357 ****\n <H4><A name=\"4.24\">4.24</A>) How do I perform queries using\n multiple databases?</H4>\n \n! 
<P>There is no way to query any database except the current one.\n Because PostgreSQL loads database-specific system catalogs, it is\n uncertain how a cross-database query should even behave.</P>\n \n! <P>Of course, a client can make simultaneous connections to\n! different databases and merge the information that way.</P>\n \n <H4><A name=\"4.25\">4.25</A>) How do I return multiple rows or\n columns from a function?</H4>\n--- 1347,1360 ----\n <H4><A name=\"4.24\">4.24</A>) How do I perform queries using\n multiple databases?</H4>\n \n! <P>There is no way to query a database other than the current one.\n Because PostgreSQL loads database-specific system catalogs, it is\n uncertain how a cross-database query should even behave.</P>\n \n! <P><I>/contrib/dblink</I> allows cross-database queries using\n! function calls. Of course, a client can make simultaneous\n! connections to different databases and merge the results on the\n! client side.</P>\n \n <H4><A name=\"4.25\">4.25</A>) How do I return multiple rows or\n columns from a function?</H4>\n***************\n*** 1364,1376 ****\n \n <H4><A name=\"4.26\">4.26</A>) Why can't I reliably create/drop\n temporary tables in PL/PgSQL functions?</H4>\n! PL/PgSQL caches function contents, and an unfortunate side effect\n is that if a PL/PgSQL function accesses a temporary table, and that\n table is later dropped and recreated, and the function called\n again, the function will fail because the cached function contents\n still point to the old temporary table. The solution is to use\n <SMALL>EXECUTE</SMALL> for temporary table access in PL/PgSQL. This\n! will cause the query to be reparsed every time. \n \n <HR>\n \n--- 1367,1385 ----\n \n <H4><A name=\"4.26\">4.26</A>) Why can't I reliably create/drop\n temporary tables in PL/PgSQL functions?</H4>\n! 
<P>PL/PgSQL caches function contents, and an unfortunate side effect\n is that if a PL/PgSQL function accesses a temporary table, and that\n table is later dropped and recreated, and the function called\n again, the function will fail because the cached function contents\n still point to the old temporary table. The solution is to use\n <SMALL>EXECUTE</SMALL> for temporary table access in PL/PgSQL. This\n! will cause the query to be reparsed every time.</P>\n! \n! <H4><A name=\"4.27\">4.27</A>) What replication options are available?\n! </H4>\n! <P>There are several master/slave replication solutions available.\n! These allow only one server to make database changes and the slave \n! merely allow database reading.\n \n <HR>\n \nIndex: src/backend/nodes/nodes.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/nodes/nodes.c,v\nretrieving revision 1.15\ndiff -c -c -r1.15 nodes.c\n*** src/backend/nodes/nodes.c\t20 Jun 2002 20:29:29 -0000\t1.15\n--- src/backend/nodes/nodes.c\t11 Oct 2002 04:05:58 -0000\n***************\n*** 28,42 ****\n *\t macro makeNode. eg. to create a Resdom node, use makeNode(Resdom)\n *\n */\n! Node *\n! newNode(Size size, NodeTag tag)\n! {\n! \tNode\t *newNode;\n \n- \tAssert(size >= sizeof(Node));\t\t/* need the tag, at least */\n- \n- \tnewNode = (Node *) palloc(size);\n- \tMemSet((char *) newNode, 0, size);\n- \tnewNode->type = tag;\n- \treturn newNode;\n- }\n--- 28,32 ----\n *\t macro makeNode. eg. to create a Resdom node, use makeNode(Resdom)\n *\n */\n! 
Node *newNodeMacroHolder;\n \nIndex: src/backend/utils/mmgr/mcxt.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/mmgr/mcxt.c,v\nretrieving revision 1.32\ndiff -c -c -r1.32 mcxt.c\n*** src/backend/utils/mmgr/mcxt.c\t12 Aug 2002 00:36:12 -0000\t1.32\n--- src/backend/utils/mmgr/mcxt.c\t11 Oct 2002 04:06:02 -0000\n***************\n*** 453,458 ****\n--- 453,481 ----\n }\n \n /*\n+ * MemoryContextAllocZero\n+ *\t\tLike MemoryContextAlloc, but clears allocated memory\n+ *\n+ *\tWe could just call MemoryContextAlloc then clear the memory, but this\n+ *\tfunction is called too many times, so we have a separate version.\n+ */\n+ void *\n+ MemoryContextAllocZero(MemoryContext context, Size size)\n+ {\n+ \tvoid *ret;\n+ \n+ \tAssertArg(MemoryContextIsValid(context));\n+ \n+ \tif (!AllocSizeIsValid(size))\n+ \t\telog(ERROR, \"MemoryContextAllocZero: invalid request size %lu\",\n+ \t\t\t (unsigned long) size);\n+ \n+ \tret = (*context->methods->alloc) (context, size);\n+ \tMemSet(ret, 0, size);\n+ \treturn ret;\n+ }\n+ \n+ /*\n * pfree\n *\t\tRelease an allocated chunk.\n */\nIndex: src/include/nodes/nodes.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/nodes/nodes.h,v\nretrieving revision 1.118\ndiff -c -c -r1.118 nodes.h\n*** src/include/nodes/nodes.h\t31 Aug 2002 22:10:47 -0000\t1.118\n--- src/include/nodes/nodes.h\t11 Oct 2002 04:06:05 -0000\n***************\n*** 261,266 ****\n--- 261,284 ----\n \n #define nodeTag(nodeptr)\t\t(((Node*)(nodeptr))->type)\n \n+ /*\n+ *\tThere is no way to dereference the palloc'ed pointer to assign the\n+ *\ttag, and return the pointer itself, so we need a holder variable.\n+ *\tFortunately, this function isn't recursive so we just define\n+ *\ta global variable for this purpose.\n+ */\n+ extern Node *newNodeMacroHolder;\n+ \n+ #define newNode(size, tag) \\\n+ ( \\\n+ \tAssertMacro((size) >= 
sizeof(Node)),\t\t/* need the tag, at least */ \\\n+ \\\n+ \tnewNodeMacroHolder = (Node *) palloc0(size), \\\n+ \tnewNodeMacroHolder->type = (tag), \\\n+ \tnewNodeMacroHolder \\\n+ )\n+ \n+ \n #define makeNode(_type_)\t\t((_type_ *) newNode(sizeof(_type_),T_##_type_))\n #define NodeSetTag(nodeptr,t)\t(((Node*)(nodeptr))->type = (t))\n \n***************\n*** 281,291 ****\n *\t\t\t\t\t extern declarations follow\n * ----------------------------------------------------------------\n */\n- \n- /*\n- * nodes/nodes.c\n- */\n- extern Node *newNode(Size size, NodeTag tag);\n \n /*\n * nodes/{outfuncs.c,print.c}\n--- 299,304 ----\nIndex: src/include/utils/palloc.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/utils/palloc.h,v\nretrieving revision 1.19\ndiff -c -c -r1.19 palloc.h\n*** src/include/utils/palloc.h\t20 Jun 2002 20:29:53 -0000\t1.19\n--- src/include/utils/palloc.h\t11 Oct 2002 04:06:06 -0000\n***************\n*** 46,53 ****\n--- 46,56 ----\n * Fundamental memory-allocation operations (more are in utils/memutils.h)\n */\n extern void *MemoryContextAlloc(MemoryContext context, Size size);\n+ extern void *MemoryContextAllocZero(MemoryContext context, Size size);\n \n #define palloc(sz)\tMemoryContextAlloc(CurrentMemoryContext, (sz))\n+ \n+ #define palloc0(sz)\tMemoryContextAllocZero(CurrentMemoryContext, (sz))\n \n extern void pfree(void *pointer);\n \nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.206\ndiff -c -c -r1.206 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t3 Oct 2002 17:09:42 -0000\t1.206\n--- src/interfaces/libpq/fe-connect.c\t11 Oct 2002 04:06:15 -0000\n***************\n*** 1131,1137 ****\n \t\t\t\treturn 0;\n \t\t\t}\n \n! 
\t\t\tremains.tv_sec = finish_time - current_time;\n \t\t\tremains.tv_usec = 0;\n \t\t}\n \t}\n--- 1131,1140 ----\n \t\t\t\treturn 0;\n \t\t\t}\n \n! \t\t\tif (finish_time > current_time)\n! \t\t\t\tremains.tv_sec = finish_time - current_time;\n! \t\t\telse\n! \t\t\t\tremains.tv_sec = 0;\n \t\t\tremains.tv_usec = 0;\n \t\t}\n \t}", "msg_date": "Fri, 11 Oct 2002 00:10:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Diff for src/interfaces/libpq/fe-connect.c between version" } ]
[ { "msg_contents": "Is it possible to get rid of the \"t_natts\" fields in the tuple header? Is this field only for \"alter table add/drop\" support? Then it might\npossible to get rid of it and put the \"t_natts\" field in the page header, not the tuple header, if it can be assured that when updating/inserting\nrecords only a compatible (a page file with the same number of attributes) page file is used. Especially master-detail tables would \nprofit from this, reducing the tuple overhead by another 9%.\n\nMight this be possible?\n\nRegards,\n\tMario Weilguni\n\n\n\n", "msg_date": "Fri, 11 Oct 2002 09:14:50 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "number of attributes in page files?" }, { "msg_contents": "Mario Weilguni <mweilguni@sime.com> writes:\n> Is it possible to get rid of the \"t_natts\" fields in the tuple header?\n> Is this field only for \"alter table add/drop\" support?\n\n\"Only\"? A lot of people consider that pretty important ...\n\nBut removing 2 bytes isn't going to save anything, on most machines,\nbecause of alignment considerations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Oct 2002 08:12:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] number of attributes in page files? " }, { "msg_contents": "Am Freitag, 11. Oktober 2002 14:12 schrieb Tom Lane:\n> Mario Weilguni <mweilguni@sime.com> writes:\n> > Is it possible to get rid of the \"t_natts\" fields in the tuple header?\n> > Is this field only for \"alter table add/drop\" support?\n>\n> \"Only\"? 
A lot of people consider that pretty important ...\n\nWith \"only\" I mean it's an administrative task which requires operator intervention anyways, and it's a seldom needed operation which may take longer, when\nqueries become faster.\n\n>\n> But removing 2 bytes isn't going to save anything, on most machines,\n> because of alignment considerations.\n\nok, I did not consider alignment, but the question remains, is this easily doable? Especially because only one more byte has to be saved for\nreal saving on many architectures, which is t_hoff. IMO t_hoff is not useful because it can be computed easily. This would give 20 byte headers instead of 23 (24) bytes as it's now. \nThis is 17% saved, and if it's not too complicated it might be worth to consider.\n\nBest regards,\n\tMario Weilguni\n", "msg_date": "Fri, 11 Oct 2002 16:00:13 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] number of attributes in page files?" } ]
[ { "msg_contents": "I guess we had this discussion before but I have just gone through the \ngeneral list and I have encountered a problem I had at least VERY often \nbefore.\nSometimes the planner does not find the best way through a query. \nLooking at the problem of query optimization it is pretty obvious that \nthings like that can happen. The planner is a wonderful piece of \nsoftware and I have a high esteem of people working on it.\n\nIn some cases the planner fails because it is impossible to optimize \nevery query coming along - this is a natural thing.\nIn case of very complex SQL statements it would be wonderful to have a \ncommand which allows the user to turn an INDEX on or off temporarily. \nThis would solve 90% of all problems people have with the planner.\nPeople say that 10% of all queries cause 90% of the load. If we could \nhelp those 10% we could gain A LOT of performance with very little effort.\nImproving other things helps a lot as well but in some cases the planner \ndecides whether a query can be done or not.
YES/NO is a much bigger \nproblem than 5% faster or not.\n\nJust have a look at a query like that:\n\n$database->dbi_select(\"SELECT a.code, b.code, t_gruppe.id, t_strukturtyp.id,\n t_struktur.id,t_struktur.oid\n FROM t_master, t_struktur, t_strukturtyp, t_gruppenelement,\n t_gruppe, t_text AS a, t_text AS b, t_betriebdetail,\n t_strukturbetrieb\n WHERE t_master.master_id = '$sportort'\n AND t_master.slave_id = t_struktur.id\n AND t_struktur.typid = t_strukturtyp.id\n AND t_strukturtyp.kommentar = 'betrieb'\n AND get_bezahlt(t_struktur.id) = 't'\n AND t_strukturtyp.id = t_gruppenelement.suchid\n AND t_gruppenelement.icode = 'strukturtyp'\n AND t_gruppenelement.gruppeid = t_gruppe.id\n AND a.suchid = t_gruppe.id\n AND a.icode = 'gruppe'\n AND a.sprache = $session{lang}\n AND a.texttyp IS NULL\n AND b.suchid = t_struktur.id\n AND b.icode = 'struktur'\n AND b.sprache = $session{lang}\n AND b.texttyp IS NULL\n AND t_gruppe.sortierung >= getmin('basic')\n AND t_gruppe.sortierung <= getmax('basic')\n AND t_struktur.id IN (\n SELECT DISTINCT a.refid\n FROM t_punkte AS a,t_text AS \nb,t_struktur AS c\n WHERE a.refid=b.suchid\n AND a.icode='struktur'\n AND b.icode='struktur'\n AND a.refid=c.id\n AND b.sprache=1\n AND a.bildid='$picdata[0]'\n AND b.texttyp IS NULL )\n AND t_betriebdetail.von < now()\n AND t_betriebdetail.strukturbetriebid =\n t_strukturbetrieb.betriebid\n AND t_strukturbetrieb.strukturid = t_struktur.id\n ORDER BY t_gruppe.sortierung, t_strukturtyp.sortierung,\n t_betriebdetail.leistung , b.code\");\n\nThis has been taken from a real world application I have written a few \nweeks ago (unfortunately it is German).\nIn this case the planner does it absolutely right. There are subqueries \nand functions and many other ugly things for the planner but it works. \nWhat should I do if it doesn't work?\nWell, I could turn seq scans off globally even if I knew that there is \njust one table causing high execution times.
People can easily imagine \nthat a bad execution plan can lead to really bad performance - \nespecially when there are millions of records around. By tweaking the \noptimizer a little we could gain 100% of performance. (idx scan \nvs. nested loop and seq scan or something like that).\n\nI guess the patch for this tweaking stuff could be fairly easy.\nCurrently I am abusing system tables to get the problem fixed (which is \nbad for other queries of course). Running VACUUM is not that funny if \nthe data in the system tables is mistreated.\n\nConcern:\nPeople might think this is ANSI: I know that this can be a problem but \nis it better if people start abusing system tables or think that \nPostgreSQL is bad or slow?\n\nTake the time and fix the planner: I can fully understand this concern. \nHowever, there is no way to fix the optimizer to do it right in every \ncase. The planner is really good but I am talking about 3% of all those \nqueries out there - unfortunately they cause 90% of the problems people \nhave.\n\nI have taken this query so that people can see that the planner is doing \ngood work but people should also think of a situation where a query like \nthat can cause severe head ache ...\n\nmaybe this problem should be discussed from time to time.\n\n Best regards,\n\n Hans\n\n\n<http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 11 Oct 2002 09:21:29 +0200", "msg_from": "Hans-Jürgen Schönig <postgres@cybertec.at>", "msg_from_op": true, "msg_subject": "Suggestion: Helping the optimizer" } ]
[ { "msg_contents": "\nHi all,\n\nI am trying to add some replication features to postgres (yes, I have\nalready looked at ongoing work), in a peer to peer manner. The goal\nis to achive `nearly complete fault tolerence' by replicating data.\n\nThe basic framework I have in mind is somewhat like this.\n\n- Postmasters are running on different computers on a networked cluster.\n  Their data areas are identical at the beginning and recide on local\n  storage devices.\n\n- Each postmaster is aware that they are a part of a cluster and they\n  can communicate with each other, send multicast requests and look for\n  each other's presence (like heartbeat in linux-ha project).\n\n- When a frontend process sends a read query, each backend process\n  does that from its own data area.\n\n- There are two types of write queries. Postmasters use seperate\n  communication channels for each. One is the sequencial channel which\n  carries writes whose order is important, and the non-sequencial\n  channel carries write queries whose order is not important.\n\n- When a frontend process sends non-sequencial write query to a backend,\n  it is directly written to the local data area and a multicast is\n  sent (preferably asynchronously) to the other postmasters who will\n  also update their respective local areas.\n\n  May be we can simply duplicate what goes to WAL into a TCP/IP socket\n  (with some header info, of course).\n\n- When a sequencial-write query is requested, the corresponding\n  postmaster informs a main-postmaster (more about in the next point),\n  waits for his acknowledgement, and proceeds the same way as the\n  non-sequencial write.\n\n- Each postmaster is assigned a priority.
The one with the highest\n  priority is doing some bookkeeping to handle concurrency issues etc.\n  If he goes away, another one takes charge.\n\n  Or maybe we can completely ignore the main-postmaster concept and\n  let the clients broadcast a request to obtain locks etc.\n\n- When a new postmaster, hence a computer, joins the cluster, he\n  will replicate the current database from one of the clients.\n\nSuggessions and critisisms are welcome.\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nThe best audience is intelligent, well-educated and a little drunk.\n\t\t-- Maurice Baring\n\n", "msg_date": "Fri, 11 Oct 2002 16:16:57 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Peer to peer replication of Postgresql databases" }, { "msg_contents": "On 11 Oct 2002 at 16:16, Anuradha Ratnaweera wrote:\n\n> \n> Hi all,\n> \n> I am trying to add some replication features to postgres (yes, I have\n> already looked at ongoing work), in a peer to peer manner. The goal\n> is to achive `nearly complete fault tolerence' by replicating data.\n\nSounds a lot like usogres. You got it running. (I never had a chance.) I would \nlike to hear how it compares against it.\n\nCan anybody comment how maintained usogres is. It covers an important area of \nreplication but I am not sure how maintained that is. If it is not, I suggest \nwe pick it up and finish it.\n\nHTH\n\nBye\n Shridhar\n\n--\nYou go slow, be gentle. It's no one-way street -- you know how you feel and \nthat's all. It's how the girl feels too. Don't press.
If the girl feels \nanything for you at all, you'll know.\t\t-- Kirk, \"Charlie X\", stardate 1535.8\n\n", "msg_date": "Fri, 11 Oct 2002 15:54:15 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On Fri, Oct 11, 2002 at 03:54:15PM +0530, Shridhar Daithankar wrote:\n>\n> On 11 Oct 2002 at 16:16, Anuradha Ratnaweera wrote:\n> \n> > I am trying to add some replication features to postgres (yes, I have\n> > already looked at ongoing work), in a peer to peer manner. The goal\n> > is to achive `nearly complete fault tolerence' by replicating data.\n> \n> Sounds a lot like usogres. You got it running. (I never had a chance.) I would \n> like to hear how it compares against it.\n> \n> Can anybody comment how maintained usogres is. It covers an important area of \n> replication but I am not sure how maintained that is. If it is not, I suggest \n> we pick it up and finish it.\n\nI will look at it, too. Thanks for the link. In some cases, starting\nanew is faster than learning unmaintained existing code.\n\nMy original mail would have been much shorter if it simply stated that I\nwant to add `application level RAID-0' to postgres ;)\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nI think the world is run by C students.\n\t\t-- Al McGuire\n\n", "msg_date": "Fri, 11 Oct 2002 16:29:59 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Peer to peer replication of Postgresql databases" }, { "msg_contents": "On 11 Oct 2002 at 16:29, Anuradha Ratnaweera wrote:\n\n> On Fri, Oct 11, 2002 at 03:54:15PM +0530, Shridhar Daithankar wrote:\n> I will look at it, too. Thanks for the link. In some cases, starting\n> anew is faster than learning unmaintained existing code.\n\nWhile that's true, usogres code is just few files.
I wouldn't take more than \nhalf an hour to read up the things. And besides it contain postgresql protocol \nimplementation necessary which would take some time to test and debug,\n\nAnd it's in C++. I like that..;-)\n\n\n> My original mail would have been much shorter if it simply stated that I\n> want to add `application level RAID-0' to postgres ;)\n\n:-)\n\nBye\n Shridhar\n\n--\nQOTD:\t\"Do you smell something burning or is it me?\"\t\t-- Joan of Arc\n\n", "msg_date": "Fri, 11 Oct 2002 16:04:29 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On Fri, Oct 11, 2002 at 04:04:29PM +0530, Shridhar Daithankar wrote:\n> On 11 Oct 2002 at 16:29, Anuradha Ratnaweera wrote:\n> \n> > On Fri, Oct 11, 2002 at 03:54:15PM +0530, Shridhar Daithankar wrote:\n> > I will look at it, too. Thanks for the link. In some cases, starting\n> > anew is faster than learning unmaintained existing code.\n\nOk. Checked out what usogres is. It is not what I want. I don't want\na static `main database'. It should simply a cluster of them - just like\na set of Raid-0 disks, may be with a tempory controller for some tasks.\n\nAlso, as a matter of fact, usogres is not unmaintained code.\n\n> While that's true, usogres code is just few files. I wouldn't take more than \n> half an hour to read up the things. And besides it contain postgresql protocol \n> implementation necessary which would take some time to test and debug,\n\nGreat. I will look into this over the weekend.\n\n> And it's in C++.
I like that..;-)\n\nAnd I DON'T like that ;)\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nQOTD:\n\t\"I ain't broke, but I'm badly bent.\"\n\n", "msg_date": "Fri, 11 Oct 2002 16:39:54 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On 11 Oct 2002 at 16:39, Anuradha Ratnaweera wrote:\n\n> On Fri, Oct 11, 2002 at 04:04:29PM +0530, Shridhar Daithankar wrote:\n> > On 11 Oct 2002 at 16:29, Anuradha Ratnaweera wrote:\n> > \n> > > On Fri, Oct 11, 2002 at 03:54:15PM +0530, Shridhar Daithankar wrote:\n> > > > I will look at it, too. Thanks for the link. In some cases, starting\n> > > > anew is faster than learning unmaintained existing code.\n> \n> Ok. Checked out what usogres is. It is not what I want. I don't want\n> a static `main database'. It should simply a cluster of them - just like\n> a set of Raid-0 disks, may be with a tempory controller for some tasks.\n\nWell, I don't think adding support for multiple slaves to usogres would be that \nproblematic. Of course if you want to load balance your application queries, \napplication has to be aware of that. I will not do sending requests to a mosix \ncluster anyway.\n\n \n> Also, as a matter of fact, usogres is not unmaintained code.\n\nGlad to know that. I wrote to author with some suggestion and never got a \nreply.
Didn't bother joining mailing list though..\n\n\nRegards,\n Shridhar\n\n-----------------------------------------------------------\nShridhar Daithankar\nLIMS CPE Team Member, PSPL.\nmailto:shridhar_daithankar@persistent.co.in\nPhone:- +91-20-5678900 Extn.270\nFax :- +91-20-5678901 \n-----------------------------------------------------------\n\n", "msg_date": "Fri, 11 Oct 2002 16:29:53 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On Fri, Oct 11, 2002 at 04:29:53PM +0530, Shridhar Daithankar wrote:\n> \n> Well, I don't think adding support for multiple slaves to usogres would be that \n> problematic. Of course if you want to load balance your application queries, \n> application has to be aware of that. I will not do sending requests to a mosix \n> cluster anyway.\n\nHave already tested postgres on a mosix cluster, and as expected results\nare not good. (although mosix does the correct thing in keeping all the\ndatabase backend processes on one node).\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nRemember: Silly is a state of Mind, Stupid is a way of Life.\n\t\t-- Dave Butler\n\n", "msg_date": "Fri, 11 Oct 2002 17:15:00 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "I'd be curious to hear in a little more detail what constitutes \"not\ngood\" for postgres on a mosix cluster.\n\nGreg\n\n\nOn Fri, 2002-10-11 at 06:15, Anuradha Ratnaweera wrote:\n> On Fri, Oct 11, 2002 at 04:29:53PM +0530, Shridhar Daithankar wrote:\n> > \n> > Well, I don't think adding support for multiple slaves to usogres would be that \n> > problematic. Of course if you want to load balance your application queries, \n> > application has to be aware of that.
I will not do sending requests to a mosix \n> > cluster anyway.\n> \n> Have already tested postgres on a mosix cluster, and as expected results\n> are not good. (although mosix does the correct thing in keeping all the\n> database backend processes on one node).\n> \n> \tAnuradha\n> \n> -- \n> \n> Debian GNU/Linux (kernel 2.4.18-xfs-1.1)\n> \n> Remember: Silly is a state of Mind, Stupid is a way of Life.\n> \t\t-- Dave Butler\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org", "msg_date": "11 Oct 2002 08:30:55 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On 11 Oct 2002 at 8:30, Greg Copeland wrote:\n\n> I'd be curious to hear in a little more detail what constitutes \"not\n> good\" for postgres on a mosix cluster.\n> On Fri, 2002-10-11 at 06:15, Anuradha Ratnaweera wrote:\n> > On Fri, Oct 11, 2002 at 04:29:53PM +0530, Shridhar Daithankar wrote:\n> > Have already tested postgres on a mosix cluster, and as expected results\n> > are not good. (although mosix does the correct thing in keeping all the\n> > database backend processes on one node).\n\nWell, I guess in kind of replication we are talking here, the performance will \nbe enhanced only if separate instances of psotgresql runs on separate machine.
\nNow if mosix kernel applies some AI and puts all of them on same machine, it \nisn't going to be any good for the purpose replication is deployed.\n\nI guess that's what she meant..\n\nBye\n Shridhar\n\n--\nUser n.:\tA programmer who will believe anything you tell him.\n\n", "msg_date": "Fri, 11 Oct 2002 19:10:26 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "Well, not scalable doesn't have to mean \"not good\". That's why I\nasked. Considering this is one of the problems with mosix clusters\n(process migration and associated restrictions) and the nature of\nPostgreSQL's implementation I'm not sure what other result may of been\nexpected. Because of that, I wasn't sure if something else was being\nimplied.\n\nGreg\n\n\n\nOn Fri, 2002-10-11 at 08:40, Shridhar Daithankar wrote:\n> On 11 Oct 2002 at 8:30, Greg Copeland wrote:\n> \n> > I'd be curious to hear in a little more detail what constitutes \"not\n> > good\" for postgres on a mosix cluster.\n> > On Fri, 2002-10-11 at 06:15, Anuradha Ratnaweera wrote:\n> > > On Fri, Oct 11, 2002 at 04:29:53PM +0530, Shridhar Daithankar wrote:\n> > > Have already tested postgres on a mosix cluster, and as expected results\n> > > are not good. (although mosix does the correct thing in keeping all the\n> > > database backend processes on one node).\n> \n> Well, I guess in kind of replication we are talking here, the performance will \n> be enhanced only if separate instances of psotgresql runs on separate machine.
\n> Now if mosix kernel applies some AI and puts all of them on same machine, it \n> isn't going to be any good for the purpose replication is deployed.\n> \n> I guess that's what she meant..\n> \n> Bye\n> Shridhar\n> \n> --\n> User n.:\tA programmer who will believe anything you tell him.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org", "msg_date": "11 Oct 2002 09:49:17 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "[ pgsql-patches removed from Cc: list ]\n\nAnuradha Ratnaweera <anuradha@lklug.pdn.ac.lk> writes:\n> I am trying to add some replication features to postgres (yes, I have\n> already looked at ongoing work), in a peer to peer manner.\n\nDid you look at the research behind Postgres-R, and the pgreplication\nstuff?\n\n> - When a frontend process sends a read query, each backend process\n> does that from its own data area.\n\nSurely that's not correct -- a SELECT can be handled by *any one*\nnode, not each and every one, right?\n\n> - There are two types of write queries. Postmasters use seperate\n> communication channels for each.
One is the sequencial channel which\n> carries writes whose order is important, and the non-sequencial\n> channel carries write queries whose order is not important.\n\nHow do you distinguish between these?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "11 Oct 2002 12:07:00 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On Fri, Oct 11, 2002 at 08:30:55AM -0500, Greg Copeland wrote:\n>\n> I'd be curious to hear in a little more detail what constitutes \"not\n> good\" for postgres on a mosix cluster.\n\nIt seems that almost all the postgres processes remain in the `home'\nnode.\n\nPlease notice that I am not underestimating Mosix in any way. We have\ntested many programs from our parallel processing project with extreme\nsuccess on our mosix cluster.\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nGinger snap.\n\n", "msg_date": "Mon, 14 Oct 2002 11:52:36 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On Fri, Oct 11, 2002 at 07:10:26PM +0530, Shridhar Daithankar wrote:\n> On 11 Oct 2002 at 8:30, Greg Copeland wrote:\n> \n> > I'd be curious to hear in a little more detail what constitutes \"not\n> > good\" for postgres on a mosix cluster.\n> \n> Well, I guess in kind of replication we are talking here, the\n> performance will be enhanced only if separate instances of psotgresql\n> runs on separate machine. Now if mosix kernel applies some AI and\n> puts all of them on same machine, it isn't going to be any good for\n> the purpose replication is deployed.\n\nExactly. First, since we know what is going on, it is not necessary for\nthe OS to decide what's going on.
Secondly, database replication is not\nlooked after at all, unless we do some crude tricks on the filesystem.\nStill it won't be efficient.\n\n> I guess that's what she meant..\n ^^^\nCorrection: \"that's what _HE_ meant...\" ;)\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nAll other things being equal, a bald man cannot be elected President of\nthe United States.\n\t\t-- Vic Gold\n\n", "msg_date": "Mon, 14 Oct 2002 11:55:49 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On 14 Oct 2002 at 11:55, Anuradha Ratnaweera wrote:\n\n> On Fri, Oct 11, 2002 at 07:10:26PM +0530, Shridhar Daithankar wrote:\n> > On 11 Oct 2002 at 8:30, Greg Copeland wrote:\n> > \n> > > I'd be curious to hear in a little more detail what constitutes \"not\n> > > good\" for postgres on a mosix cluster.\n> > \n> > Well, I guess in kind of replication we are talking here, the\n> > performance will be enhanced only if separate instances of psotgresql\n> > runs on separate machine. Now if mosix kernel applies some AI and\n> > puts all of them on same machine, it isn't going to be any good for\n> > the purpose replication is deployed.\n> \n> Exactly. First, since we know what is going on, it is not necessary for\n> the OS to decide what's going on. Secondly, database replication is not\n> looked after at all, unless we do some crude tricks on the filesystem.\n> Still it won't be efficient.\n\nIMO any one layer of clustering should be enough. If you use mosix, you \nshouldn't need clustering in postgresql. If postgresql clustering is applied \nany heterogenous machines like freebsd/linux should do. (OK same architecture \nat least. No suns and PCs..)\n\nLet's keep aside mosix for the time being. Application level clustering is what \npostgresql needs.\n\nWhat next? which one should we work on?
Postgres-R/Usogres/ER-server?\n\n> \n> > I guess that's what she meant..\n> ^^^\n> Correction: \"that's what _HE_ meant...\" ;)\n\nArgh... Extremely sorry, in India, special nouns ending with 'a' are usually \nfeminine.. Like Radha..\n\n Sorry again..:-)\n\nBye\n Shridhar\n\n--\nWeiner's Law of Libraries:\tThere are no answers, only cross references.\n\n", "msg_date": "Mon, 14 Oct 2002 11:31:33 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Peer to peer replication of Postgresql databases" }, { "msg_contents": "On Fri, Oct 11, 2002 at 12:07:00PM -0400, Neil Conway wrote:\n> [ pgsql-patches removed from Cc: list ]\n> \n> Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk> writes:\n> > I am trying to add some replication features to postgres (yes, I\n> > have already looked at ongoing work), in a peer to peer manner.\n> \n> Did you look at the research behind Postgres-R, and the pgreplication\n> stuff?\n\nAm looking at the research papers related to it now.\n\n> > - When a frontend process sends a read query, each backend process\n> > does that from its own data area.\n> \n> Surely that's not correct -- a SELECT can be handled by *any one*\n> node, not each and every one, right?\n\nYes. Sorry about my careless wording. Unless anything is kind of\nlocked, each node has a copy of the database, so each one can handle\nSELECTs individually.\n\nThe actual situation will be far from this simple, because there will be\ndatabase writes going on and generating consistent SELECTs would need\ncareful handling of concurency issues.\n\n> > - There are two types of write queries. Postmasters use seperate\n> > communication channels for each. One is the sequencial channel which\n> > carries writes whose order is important, and the non-sequencial\n> > channel carries write queries whose order is not important.\n> \n> How do you distinguish between these?\n\nNope.
We assume that all the communication should go through the\nsequencial channel unless indicated by the client. In that case, we\nwill have to find a way to indicate this from the client's side. This\ndoesn't sound very elegant, may be we can figure out a better way.\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\n\"Being against torture ought to be sort of a bipartisan thing.\"\n-- Karl Lehenbauer\n\n", "msg_date": "Mon, 14 Oct 2002 12:05:30 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Peer to peer replication of Postgresql databases" } ]
[ { "msg_contents": "Forgive me for responding to the beginning of this thread, but my \ncomments only apply to this post.\n\n> already looked at ongoing work), in a peer to peer manner. The goal\n> is to achive `nearly complete fault tolerence' by replicating data.\n\nA worthy goal indeed!\n\n> - Postmasters are running on different computers on a networked cluster.\n> Their data areas are identical at the beginning and recide on local\n> storage devices.\n> \n> - Each postmaster is aware that they are a part of a cluster and they\n> can communicate with each other, send multicast requests and look for\n> each other's presence (like heartbeat in linux-ha project).\n\nThese first two points on extending postmaster for a network cluster\nand HA could be a bit tricky. Have you considered using a group\ncommunication system like spread.org that already has the network\ncluster and heartbeat built in?\n\n\n> - There are two types of write queries. Postmasters use seperate\n> communication channels for each. One is the sequencial channel which\n> carries writes whose order is important, and the non-sequencial\n> channel carries write queries whose order is not important.\n\nThis puts the burden of determining whether a conflict can happen on\nthe application or user. Application design could become a bit tricky.\nIf you plan to use the non-sequential channel in an application, you\nwould need to make sure there are never any possible conflicts.\n\n> \n> - When a frontend process sends non-sequencial write query to a backend,\n> it is directly written to the local data area and a multicast is\n> sent (preferably asynchronously) to the other postmasters who will\n> also update their respective local areas.\n\nWhat are you planning to send?
(SQL, parsed statements, or tuples)\n\n\n> - When a sequencial-write query is requested, the corresponding\n> postmaster informs a main-postmaster (more about in the next point),\n> waits for his acknowledgement, and proceeds the same way as the\n> non-sequencial write.\n\nThis would make the main postmaster handle all the concurrency control for\nthe replicated system. Are you thinking a two phased commit protocol here?\n\n> \n> Or maybe we can completely ignore the main-postmaster concept and\n> let the clients broadcast a request to obtain locks etc.\n\nIf each system can obtain locks, how will you handle deadlocks across \nsystem boundaries?\n\n\n> \n> Suggessions and critisisms are welcome.\n> \n\nHave you taken a look at Postgres-R or the pg-replicaiton project. The goals \nare the same as yours, and the approach is some what similar. There is\na mailing to discuss different approaches, and if you like what we are doing\nyou can certainly participate in the development.\n\nhttp://gborg.postgresql.org/project/pgreplication/projdisplay.php\n\nRegards\n\nDarren\n\n\n", "msg_date": "Fri, 11 Oct 2002 13:51:33 -0400", "msg_from": "<darren@up.hrcoxmail.com>", "msg_from_op": true, "msg_subject": "Re: Peer to peer replication of Postgresql databases" } ]
[ { "msg_contents": "\nHello,\n\nI sometimes need to perform client-side merges, sometimes between two \ntables on the same database, sometimes between two different databases.\n\nWhen the merge key is numeric all goes well but, when the merge key is a \nstring a problem arises: string comparison operators often behave \ndifferently between the database(s) and the client's language.\n\nSometimes it is due to the locale settings, sometimes is the particular \nimplementation of the operator, as a matter of facts, I cannot trust the \nstrings comparison operators.\n\nSo, the question is how client-side merge should be done...\n\n- Perform the sorting locally... only one operator... maybe suboptimal \nsorting... etc....\n\n- Compare the strings hex-encoded: overhead apart, I found myself unable \nto use encode(..) function on PostgreSQL since it accepts only BYTEA \ndata and text isn't castable to bytea.\n\n- Invent a new operator whose behaviour would be always consistent, \nlocale-independent... (like the very-first C's strcmp).\n\nWhich do you think should be the correct approach ?\n\nThanks in advance!\nBest regards!\n\n-- \n Daniele Orlandi\n Planet Srl\n\n", "msg_date": "Fri, 11 Oct 2002 21:21:51 +0200", "msg_from": "Daniele Orlandi <daniele@orlandi.com>", "msg_from_op": true, "msg_subject": "Client-side merge & string sorting" } ]
[ { "msg_contents": "Now that we are changing relfilenode in 7.3, I think we need to rename\noid2name to relfilenode2name, and perhaps move it into the main tree.\n\nTODO item added:\n\n\tRename oid2name to relfilenode2name and install by default\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 12 Oct 2002 15:03:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "oid2name and relfilenode" }, { "msg_contents": "Bruce Momjian wrote:\n> Now that we are changing relfilenode in 7.3, I think we need to rename\n> oid2name to relfilenode2name, and perhaps move it into the main tree.\n> \n> TODO item added:\n> \n> \tRename oid2name to relfilenode2name and install by default\n> \n\nActually, to be accurate, I think databases are stored based on their\noid and tables/indexes are stored based on their relfilenode. That is\npretty confusing. Do we still do the renaming?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 12 Oct 2002 15:13:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: oid2name and relfilenode" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Bruce Momjian wrote:\n> > Now that we are changing relfilenode in 7.3, I think we need to rename\n> > oid2name to relfilenode2name, and perhaps move it into the main tree.\n> >\n> > TODO item added:\n> >\n> > Rename oid2name to relfilenode2name and install by default\n\nShould it be renamed to pg_<something> for namespace consistency?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Actually, to be accurate, I think databases are stored based on their\n> oid and tables/indexes are stored based on their relfilenode. That is\n> pretty confusing. Do we still do the renaming?\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 13 Oct 2002 05:16:55 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: oid2name and relfilenode" }, { "msg_contents": "Justin Clift wrote:\n> Bruce Momjian wrote:\n> > \n> > Bruce Momjian wrote:\n> > > Now that we are changing relfilenode in 7.3, I think we need to rename\n> > > oid2name to relfilenode2name, and perhaps move it into the main tree.\n> > >\n> > > TODO item added:\n> > >\n> > > Rename oid2name to relfilenode2name and install by default\n> \n> Should it be renamed to pg_<something> for namespace consistency?\n\nGood question. We usually add the pg_ for stuff that could be\nconfusing, like pg_dump or pg_restore. For things like vacuumdb and\ndropdb, we don't. We probably should have pg_ on createuser, but\nhistorically we haven't.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 12 Oct 2002 15:19:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: oid2name and relfilenode" }, { "msg_contents": "Bruce Momjian writes:\n\n> > \tRename oid2name to relfilenode2name and install by default\n> >\n>\n> Actually, to be accurate, I think databases are stored based on their\n> oid and tables/indexes are stored based on their relfilenode. That is\n> pretty confusing. Do we still do the renaming?\n\nI don't think we should do either of these. Instead of giving people\ntools to dig around in the internals, let's give them tools to get the\ninformation they really want. 
Not sure what that is though.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 15 Oct 2002 20:53:30 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: oid2name and relfilenode" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > \tRename oid2name to relfilenode2name and install by default\n> > >\n> >\n> > Actually, to be accurate, I think databases are stored based on their\n> > oid and tables/indexes are stored based on their relfilenode. That is\n> > pretty confusing. Do we still do the renaming?\n> \n> I don't think we should do either of these. Instead of giving people\n> tools to dig around in the internals, let's give them tools to get the\n> information they really want. Not sure what that is though.\n\nThat's the problem. People sometimes need to access those files from\nthe file system level. There are just too many variations of why they\nneed that I don't think there is any way to anticipate them. If someone\ncomes up with a better idea, I am all ears, but seeing as nothing better\nhas appeared since it was first written, I think we need to move forward.\n\nI will add these items to the TODO list, unless someone else votes. I\nthink they are too important to have in /contrib (and we moved\npg_resetxlog/pg_controldata from /contrib in 7.3) and I think\nrelfilenode is better than OID so people start using that rather than\nOID for filename mapping.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 14:54:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: oid2name and relfilenode" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I will add these items to the TODO list, unless someone else votes.\n\nI was not thrilled with the idea of moving oid2name out of contrib\neither, but kept silent to see if someone else would complain first ...\n\nBasically I think that oid2name is a hacker's tool and not something\nusers or DBAs really want as-is --- which I guess is another way of\nstating Peter's gripe that what it produces is not what the users want\nto know. The actual useful guts of it are nothing more than\n\tSELECT oid, datname FROM pg_database;\n\tSELECT relfilenode, relname FROM pg_class;\nwhich does not seem significant enough to justify the packaging and\ndocumentation overhead of having another command-line tool.\n\nThe only actual use-case I've seen for it so far is as a vehicle for\ncomputing actual database sizes on-disk; which would be better served\nby a tool that did the whole job. What other uses do people have for\noid2name?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 17:00:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: oid2name and relfilenode " }, { "msg_contents": "\nOK, removed from TODO. I figured it was as useful as pg_controldata but\ncan see what you say that those give information that you can't get any\nother way, while oid2name info can be gotten another way. You can look\nat the oid2name README for examples of its usage. 
\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will add these items to the TODO list, unless someone else votes.\n> \n> I was not thrilled with the idea of moving oid2name out of contrib\n> either, but kept silent to see if someone else would complain first ...\n> \n> Basically I think that oid2name is a hacker's tool and not something\n> users or DBAs really want as-is --- which I guess is another way of\n> stating Peter's gripe that what it produces is not what the users want\n> to know. The actual useful guts of it are nothing more than\n> \tSELECT oid, datname FROM pg_database;\n> \tSELECT relfilenode, relname FROM pg_class;\n> which does not seem significant enough to justify the packaging and\n> documentation overhead of having another command-line tool.\n> \n> The only actual use-case I've seen for it so far is as a vehicle for\n> computing actual database sizes on-disk; which would be better served\n> by a tool that did the whole job. What other uses do people have for\n> oid2name?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 17:35:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: oid2name and relfilenode" } ]
[ { "msg_contents": "\nI've been testing contrib/dbmirror with 7.3 and schema's and have come \nacross a problem.\n\nSPI_getrelname(tg_relation) can be used by a trigger to get the name of \nthe table that the trigger was fired on.  But that just gives the \ntablename and not the schema that the table is in.  If you have a schema \nnamed \"A\" and a schema named \"B\" each with an employee table how can a \ntrigger determine if it was fired on A.employee or B.employee.\n\nThanks\n\n-- \nSteven Singer                                       ssinger@navtechinc.com\nAircraft Performance Systems                        Phone: 519-747-1170 ext 282\nNavtech Systems Support Inc.                        AFTN: CYYZXNSX SITA: YYZNSCR\nWaterloo, Ontario                                   ARINC: YKFNSCR\n\n", "msg_date": "Sat, 12 Oct 2002 19:29:26 +0000 (GMT)", "msg_from": "Steven Singer <ssinger@navtechinc.com>", "msg_from_op": true, "msg_subject": "Triggers and Schema's." }, { "msg_contents": "Steven Singer <ssinger@navtechinc.com> writes:\n> I've been testing contrib/dbmirror with 7.3 and schema's and have come \n> across a problem.\n> SPI_getrelname(tg_relation) can be used by a trigger to get the name of \n> the table that the trigger was fired on.  But that just gives the \n> tablename and not the schema that the table is in.  If you have a schema \n> named \"A\" and a schema named \"B\" each with an employee table how can a \n> trigger determine if it was fired on A.employee or B.employee.\n\nYou can do\n\tget_namespace_name(RelationGetNamespace(tg_relation))\nIs this sufficiently useful to justify adding an SPI_getrelnamespace()\nfunction?  I'm not very clear on the uses for SPI_getrelname().  Most\nof the examples we have in contrib/ are using it for error messages,\nwhich don't really need extra qualification IMHO. The ones that are\ninterpolating the result into queries are for the most part demonstrably\nwrong anyway, because they fail to consider the possible need for double\nquotes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Oct 2002 16:26:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Triggers and Schema's. " }, { "msg_contents": "On Sat, 12 Oct 2002, Tom Lane wrote:\n\n> Steven Singer <ssinger@navtechinc.com> writes:\n\n> \tget_namespace_name(RelationGetNamespace(tg_relation))\n> Is this sufficiently useful to justify adding an SPI_getrelnamespace()\n> function?  I'm not very clear on the uses for SPI_getrelname().  Most\n> of the examples we have in contrib/ are using it for error messages,\n\nThanks that function does the job.\n\nI have no problem using that function(or other backend functions) from \ntriggers to do the job.  If the idea behind SPI is that functions are only \nsupposed to access backend functions through SPI and not call the backend \ndirectly then it probably should be added at some point for completeness \nsake but I suspect other functions would fall into this category as well. \n\n\n\n-- \nSteven Singer                                       ssinger@navtechinc.com\nAircraft Performance Systems                        Phone: 519-747-1170 ext 282\nNavtech Systems Support Inc.                        AFTN: CYYZXNSX SITA: YYZNSCR\nWaterloo, Ontario                                   ARINC: YKFNSCR\n\n", "msg_date": "Tue, 15 Oct 2002 01:42:15 +0000 (GMT)", "msg_from": "Steven Singer <ssinger@navtechinc.com>", "msg_from_op": true, "msg_subject": "Re: Triggers and Schema's. " } ]
[ { "msg_contents": "Hi,\n\nI must be missing something obvious here but it seems that bitfromint4 \nis not working under 7.3b2. I can still see bitfromint4 in the source \ncode, utils/adt/varbit.c, but it is no longer working. Any ideas why?\n\nBest wishes,\nNeophytos\n\nPS. It is also not working with the latest CVS checkout.\n\n", "msg_date": "Sat, 12 Oct 2002 22:29:37 +0300", "msg_from": "Neophytos Demetriou <k2pts@cytanet.com.cy>", "msg_from_op": true, "msg_subject": "7.3b2 ?bug? bitfromint4 is not working" }, { "msg_contents": "Neophytos Demetriou <k2pts@cytanet.com.cy> writes:\n> I must be missing something obvious here but it seems that bitfromint4 \n> is not working under 7.3b2. I can still see bitfromint4 in the source \n> code, utils/adt/varbit.c, but it is no longer working. Any ideas why?\n\nIt's still there:\n\nregression=# select \"bit\"(42);\n               bit\n----------------------------------\n 00000000000000000000000000101010\n(1 row)\n\nHowever, it's not listed in pg_cast :-(\n\nregression=# select cast(42 as bit);\nERROR:  Cannot cast type integer to bit\n\nLooking at the CVS logs, this seems to be caused by overlapping changes.\nOn 4-Aug Thomas renamed bittoint4 and bitfromint4 to match the more\nusual naming conventions for cast functions, viz int4(bit) and\nbit(int4), and he also added int8(bit) and bit(int8).  This was after\nPeter had trolled the catalogs for cast functions and created the\ninitial contents of pg_cast.h (on 18-Jul).\n\nUpshot: we have here four functions that ought to be in pg_cast and are\nnot.\n\nIs it worth an initdb for 7.3b3 to fix this?  I think we were already\nconsidering forcing one for the command-tag issues, otherwise I'd\nprobably vote \"no\".  Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Oct 2002 16:05:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3b2 ?bug? bitfromint4 is not working " }, { "msg_contents": "> Is it worth an initdb for 7.3b3 to fix this? I think we were already\n> considering forcing one for the command-tag issues, otherwise I'd\n> probably vote \"no\".  Comments?\n\nDo we need an initdb to fix command tags?  I thought that was just a\nchange in the Query structure.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Neophytos Demetriou <k2pts@cytanet.com.cy> writes:\n> > I must be missing something obvious here but it seems that bitfromint4 \n> > is not working under 7.3b2. I can still see bitfromint4 in the source \n> > code, utils/adt/varbit.c, but it is no longer working. Any ideas why?\n> \n> It's still there:\n> \n> regression=# select \"bit\"(42);\n>                bit\n> ----------------------------------\n>  00000000000000000000000000101010\n> (1 row)\n> \n> However, it's not listed in pg_cast :-(\n> \n> regression=# select cast(42 as bit);\n> ERROR:  Cannot cast type integer to bit\n> \n> Looking at the CVS logs, this seems to be caused by overlapping changes.\n> On 4-Aug Thomas renamed bittoint4 and bitfromint4 to match the more\n> usual naming conventions for cast functions, viz int4(bit) and\n> bit(int4), and he also added int8(bit) and bit(int8).  This was after\n> Peter had trolled the catalogs for cast functions and created the\n> initial contents of pg_cast.h (on 18-Jul).\n> \n> Upshot: we have here four functions that ought to be in pg_cast and are\n> not.\n> \n> Is it worth an initdb for 7.3b3 to fix this?  I think we were already\n> considering forcing one for the command-tag issues, otherwise I'd\n> probably vote \"no\".  Comments?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 359-1001\n  +  If your life is a hard drive,     |  13 Roberts Road\n  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 13 Oct 2002 23:44:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3b2 ?bug? bitfromint4 is not working" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do we need an initdb to fix command tags?  I thought that was just a\n> change in the Query structure.\n\nA change in Query struct breaks stored rules.  Looks like initdb\nmaterial to me ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Oct 2002 23:50:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3b2 ?bug? bitfromint4 is not working " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Do we need an initdb to fix command tags?  I thought that was just a\n> > change in the Query structure.\n> \n> A change in Query struct breaks stored rules.  Looks like initdb\n> material to me ...\n\nOh, I forgot about stored rules.  Yep, that would cause it.  Not sure if\nfixing rule return is a valid initdb reason, but with the 'bit' type\nproblem, seems it would be worth while.  I am going to post in a few\nminutes about a push to get all those open items wrapped up.  I think we\nare drifting.\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 359-1001\n  +  If your life is a hard drive,     |  13 Roberts Road\n  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 00:11:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3b2 ?bug? bitfromint4 is not working" } ]
[ { "msg_contents": "Isn't this a bug?\n\nregression=# create table FOO (f1 int);\nCREATE TABLE\nregression=# \\copy FOO from stdin\nERROR: Relation \"FOO\" does not exist\n\\copy: ERROR: Relation \"FOO\" does not exist\nregression=#\n\nThis happens because \\copy takes the given table name and slaps\ndouble quotes around it, so the backend gets COPY \"FOO\" ...\nrather than COPY FOO ...\n\nIt seems to me that psql's \\copy should interpret the table name\nthe same way that a regular SQL command would: honor double quotes,\ndowncase in the absence of quotes.\n\nComments, objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Oct 2002 16:57:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "\\copy and identifier quoting" }, { "msg_contents": "Tom Lane wrote:\n> Isn't this a bug?\n> \n> regression=# create table FOO (f1 int);\n> CREATE TABLE\n> regression=# \\copy FOO from stdin\n> ERROR: Relation \"FOO\" does not exist\n> \\copy: ERROR: Relation \"FOO\" does not exist\n> regression=#\n> \n> This happens because \\copy takes the given table name and slaps\n> double quotes around it, so the backend gets COPY \"FOO\" ...\n> rather than COPY FOO ...\n> \n> It seems to me that psql's \\copy should interpret the table name\n> the same way that a regular SQL command would: honor double quotes,\n> downcase in the absence of quotes.\n> \n> Comments, objections?\n\nYes, that makes perfect sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 13 Oct 2002 23:46:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: \\copy and identifier quoting" } ]
[ { "msg_contents": "Hello hackers,\n\nIs there some established way of debugging the bootstrapping sequence?\nIf not, does anyone have a tip on how would one do it?\n\nI mean getting the standalone boostrapping backend into gdb...\n\nThank you.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"La naturaleza, tan fragil, tan expuesta a la muerte... y tan viva\"\n", "msg_date": "Sat, 12 Oct 2002 17:42:34 -0400", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": true, "msg_subject": "Debugging bootstrap" }, { "msg_contents": "Alvaro Herrera <alvherre@dcc.uchile.cl> writes:\n> Is there some established way of debugging the bootstrapping sequence?\n> If not, does anyone have a tip on how would one do it?\n> I mean getting the standalone boostrapping backend into gdb...\n\nI'd just run the backend under gdb, passing it the same command line\nand stdin file as initdb would provide. Don't forget to initialize\nthe data directory as much as initdb would do, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Oct 2002 18:42:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Debugging bootstrap " } ]
[ { "msg_contents": ">That doesn't seem to work any more:\n\n>>>>help set warrenmassengill@hotmail.com\n\n= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =\nhelp unknowntopic\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nERROR - the document you requested is not available.\n= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =\n\n\n\nMon, 7 Oct 2002 14:01:25 +0200\n\nfor a full list of set options send:\n\nhelp set\n\nto majordomo.\n\nRegards,\nMichael Paesold\n\n\n\n_________________________________________________________________\nJoin the world's largest e-mail service with MSN Hotmail. \nhttp://www.hotmail.com\n\n", "msg_date": "Sat, 12 Oct 2002 16:54:48 -0500", "msg_from": "\"Warren Massengill\" <warrenmassengill@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: cross-posts (was Re: [GENERAL] Large databases, " } ]
[ { "msg_contents": "Hi, all\n\nWhile trying dblink_exec(), one of dblink()'s functions, I noticed there was an\nodd situation: case 1 and case 2 worked well, but case 3 didn't(see below). \n I hadn't been aware of it so that I only executed BEGIN and END in\ndblink_exec() at first . This time, however, I noticed it by executing ROLLBACK.\n\nI'm hoping that dblink_exec() returns something like warning if those who intend\nto do transactions make a declaration of blink_exec('dbname=some', 'begin') by mistake.\n\nfor example \n WARNING :You should declare dblink_exec('dbname=some', 'BEGIN; some queries;\nCOMMIT/ROLLBACK/END;') or use dblink_exec('BEGIN/COMMIT/ROLLBACK/END')\naround dblink_exec('some queries')s. If not, your transactions won't work.\n\nRegards,\nMasaru Sugawara\n\n\n\n> On Fri, 27 Sep 2002 09:35:48 -0700\n> Joe Conway <mail@joeconway.com> wrote:\n ...\n> The version of dblink in 7.3 (in beta now) has a new function, dblink_exec, \n> which is specifically intended for INSERT/UPDATE/DELETE. If you can, please \n> give the beta a try.\n> \n> I have a patch that allows dblink in 7.2 to execute INSERT/UPDATE/DELETE \n> statements. I'll send it to you off-list if you want (let me know), but it \n> would be better if you can wait for 7.3 to be released and use it.\n> \n> Joe\n ...\n> query\n> ------------\n> dblink(text,text) RETURNS setof record\n> - returns a set of results from remote SELECT query\n> (Note: comment out in dblink.sql to use deprecated version)\n\nfrom http://archives.postgresql.org/pgsql-general/2002-09/msg01290.php\n\n\n\n\n-- tables --\n$ cd ../postgresql-7.3.b2/contrib/dblink\n$ createdb regression_slave\n$ createdb regression_master\n$ createlang plpgsql regression_master\n$ psql regression_slave\n\n\\i dblink.sql\nCREATE TABLE foo(f1 int, f2 text, f3 text[], PRIMARY KEY (f1,f2));\nINSERT INTO foo VALUES(0,'a','{\"a0\",\"b0\",\"c0\"}');\nINSERT INTO foo VALUES(1,'b','{\"a1\",\"b1\",\"c1\"}');\nINSERT INTO foo VALUES(2,'c','{\"a2\",\"b2\",\"c2\"}');\n\n\\connect regression_master;\n\\i dblink.sql\nCREATE TABLE foo(f1 int, f2 text, f3 text[], PRIMARY KEY (f1,f2));\nINSERT INTO foo VALUES(0,'a','{\"a0\",\"b0\",\"c0\"}');\nINSERT INTO foo VALUES(1,'b','{\"a1\",\"b1\",\"c1\"}');\nINSERT INTO foo VALUES(2,'c','{\"a2\",\"b2\",\"c2\"}');\n\n\n-- case 1. --\n SELECT dblink_connect('dbname=regression_slave');\n SELECT dblink_exec('BEGIN');\n SELECT dblink_exec('INSERT INTO foo VALUES(12,''m'',''{\"a12\",\"b12\",\"c12\"}'');');\n SELECT dblink_exec('ROLLBACK');          -- success !\n SELECT dblink_disconnect();\n\n-- case 2. --\n SELECT dblink_exec('dbname=regression_slave', \n                    'BEGIN;\n                     INSERT INTO foo VALUES(12,''m'',''{\"a12\",\"b12\",\"c12\"}'');\n                     ROLLBACK;            -- success !\n                    ');\n\n-- case 3. -- \n SELECT dblink_exec('dbname=regression_slave', 'BEGIN'); \n SELECT dblink_exec('dbname=regression_slave',\n                    'INSERT INTO foo VALUES(12,''m'',''{\"a12\",\"b12\",\"c12\"}'');');\n SELECT dblink_exec('dbname=regression_slave', 'ROLLBACK');  -- failure !\n\n\n\n\n", "msg_date": "Sun, 13 Oct 2002 11:23:36 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "Transactions through dblink_exec()" }, { "msg_contents": "Masaru Sugawara wrote:\n> Hi, all\n> \n> While trying dblink_exec(), one of dblink()'s functions, I noticed there was an\n> odd situation: case 1 and case 2 worked well, but case 3 didn't(see below). \n>  I hadn't been aware of it so that I only executed BEGIN and END in\n> dblink_exec() at first . This time, however, I noticed it by executing ROLLBACK.\n> \n> I'm hoping that dblink_exec() returns something like warning if those who intend\n> to do transactions make a declaration of blink_exec('dbname=some', 'begin') by mistake.\n> \n> for example \n>  WARNING :You should declare dblink_exec('dbname=some', 'BEGIN; some queries;\n> COMMIT/ROLLBACK/END;') or use dblink_exec('BEGIN/COMMIT/ROLLBACK/END')\n> around dblink_exec('some queries')s. If not, your transactions won't work.\n\nHow can dblink() possibly be used safely for non-readonly \ntransactions without a full implementation of a two-phase commit \nprotocol? What happens when the remote server issues the COMMIT \nand then the local server crashes?\n\nMike Mascari\nmascarm@mascari.com\n\n", "msg_date": "Sun, 13 Oct 2002 02:15:36 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: Transactions through dblink_exec()" }, { "msg_contents": "Masaru Sugawara wrote:\n> I'm hoping that dblink_exec() returns something like warning if those who intend\n> to do transactions make a declaration of blink_exec('dbname=some', 'begin') by mistake.\n> \n> for example \n>  WARNING :You should declare dblink_exec('dbname=some', 'BEGIN; some queries;\n> COMMIT/ROLLBACK/END;') or use dblink_exec('BEGIN/COMMIT/ROLLBACK/END')\n> around dblink_exec('some queries')s. If not, your transactions won't work.\n> \n{...snip...]\n> \n> -- case 3. -- \n>  SELECT dblink_exec('dbname=regression_slave', 'BEGIN'); \n>  SELECT dblink_exec('dbname=regression_slave',\n>                     'INSERT INTO foo VALUES(12,''m'',''{\"a12\",\"b12\",\"c12\"}'');');\n>  SELECT dblink_exec('dbname=regression_slave', 'ROLLBACK');  -- failure !\n\nHmmm. No surprise this din't work. Each time you specify the connect string, a \nconnection is opened, the statement executed, and then the connection is \nclosed -- i.e. each of the invocations of dblink_exec above stands alone. Are \nyou suggesting a warning only on something like:\n   SELECT dblink_exec('dbname=regression_slave', 'BEGIN');\n? Seems like maybe a warning in the documentation would be enough. Any other \nopinions out there?\n\nWhat occurs to me though, is that this is one of those \"clients affected by \nthe autocommit setting\" situations. (...goes off and tries it out...) Sure \nenough. If you have autocommit set to off, you can do:\n   SELECT dblink_exec('dbname=regression_slave',\n                      'INSERT INTO foo VALUES(12,''m'',''{\"a12\",\"b12\",\"c12\"}'');');\nall day and never get it to succeed.\n\nGiven the above, should dblink_exec(CONNSTR, SQL) always wrap SQL in an \nexplicit transaction? Any thoughts on this?\n\nJoe\n\n\n", "msg_date": "Sat, 12 Oct 2002 23:37:18 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Transactions through dblink_exec()" }, { "msg_contents": "Mike Mascari wrote:\n> How can dblink() possibly be used safely for non-readonly transactions \n> without a full implementation of a two-phase commit protocol? What \n> happens when the remote server issues the COMMIT and then the local \n> server crashes?\n> \n\nIt can't be used safely if you're trying to ensure a distributed transaction \neither fails or commits. At least I can't think of a way without two-phase \ncommits implemented.\n\nBut depending on your scenario, just being sure that the remote transaction \nfails or succeeds as a unit may be all you care about.\n\nJoe\n\n\n", "msg_date": "Sat, 12 Oct 2002 23:44:18 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Transactions through dblink_exec()" }, { "msg_contents": "On Sat, 12 Oct 2002 23:37:18 -0700\nJoe Conway <mail@joeconway.com> wrote:\n\n> Masaru Sugawara wrote:\n> > I'm hoping that dblink_exec() returns something like warning if those\n> > who intend to do transactions make a declaration of\n> > blink_exec('dbname=some', 'begin') by mistake.\n> > \n> > for example \n> >  WARNING :You should declare dblink_exec('dbname=some', 'BEGIN; some queries;\n> > COMMIT/ROLLBACK/END;') or use dblink_exec('BEGIN/COMMIT/ROLLBACK/END')\n> > around dblink_exec('some queries')s. If not, your transactions won't work.\n> > \n> {...snip...]\n> > \n> > -- case 3. -- \n> >  SELECT dblink_exec('dbname=regression_slave', 'BEGIN'); \n> >  SELECT dblink_exec('dbname=regression_slave',\n> >                     'INSERT INTO foo VALUES(12,''m'',''{\"a12\",\"b12\",\"c12\"}'');');\n> >  SELECT dblink_exec('dbname=regression_slave', 'ROLLBACK');  -- failure !\n> \n> Hmmm. No surprise this din't work. Each time you specify the connect string, a \n> connection is opened, the statement executed, and then the connection is \n> closed -- i.e. each of the invocations of dblink_exec above stands alone. Are \n> you suggesting a warning only on something like:\n>    SELECT dblink_exec('dbname=regression_slave', 'BEGIN');\n\nYes.\n\n\n> ? Seems like maybe a warning in the documentation would be enough. \n\nYes, certainly. I came to think a warning in the doc is better than in the\ncommand line because that is not a bug.\n\n\n>Any other opinions out there?\n> \n> What occurs to me though, is that this is one of those \"clients affected by \n> the autocommit setting\" situations. (...goes off and tries it out...) Sure \n> enough. If you have autocommit set to off, you can do:\n>    SELECT dblink_exec('dbname=regression_slave',\n>                       'INSERT INTO foo VALUES(12,''m'',''{\"a12\",\"b12\",\"c12\"}'');');\n> all day and never get it to succeed.\n\nI didn't think of a situation of autocommit = off. As for me in some\ntransactions like the following, I haven't deeply worried about behaviors of \ndblink_exec(CONNSTR, 'BEGIN') because I would like to use dblink_connect() .\nHowever, I'm not sure whether the following is perfectly safe against every\naccident or not . \n\nBEGIN;\n  SELECT dblink_connect('dbname=regression_slave');\n  SELECT dblink_exec('BEGIN');\n  SELECT dblink_exec('INSERT INTO foo VALUES(12, ''m'', ''{\"a12\",\"b12\",\"c12\"}'');');\n  INSERT INTO foo VALUES(12, 'm', '{\"a12\",\"b12\",\"c12\"}');\n  SELECT dblink_exec('END');\n  SELECT dblink_disconnect();\nEND;\n\nor \n\nCREATE OR REPLACE FUNCTION fn_mirror() RETURNS text AS '\nDECLARE\n  ret text;\nBEGIN\n  PERFORM dblink_connect(''dbname=regression_slave'');\n  PERFORM dblink_exec(''BEGIN'');\n  -- PERFORM dblink_exec(\n  --   ''INSERT INTO foo VALUES(12, ''''m'''', ''''{\"a12\",\"b12\",\"c12\"}'''');'');\n  SELECT INTO ret * FROM dblink_exec(\n     ''INSERT INTO foo VALUES(12, ''''m'''', ''''{\"a12\",\"b12\",\"c12\"}'''');'');\n  RAISE NOTICE ''slave : %'', ret;\n  INSERT INTO foo VALUES(12, ''m'', ''{\"a12\",\"b12\",\"c12\"}'');\n  PERFORM dblink_exec(''END'');\n  PERFORM dblink_disconnect();\n  RETURN ''OK'';\nEND;\n' LANGUAGE 'plpgsql';\n\nSELECT fn_mirror();\n\n\n> \n> Given the above, should dblink_exec(CONNSTR, SQL) always wrap SQL in an \n> explicit transaction? Any thoughts on this?\n> \n> Joe\n> \n> \n> \n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Sun, 13 Oct 2002 19:07:41 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "Re: Transactions through dblink_exec()" } ]
[ { "msg_contents": "\ncreate table test (col_a bigint);\nupdate test set col_a = nullif('200', -1);\n\nThe above works fine on 7.2 but the update fails on 7.3b2 with the \nfollowing error:\n\nERROR: column \"col_a\" is of type bigint but expression is of type text\n\tYou will need to rewrite or cast the expression\n\nIs this change in behavior intentional or is it a bug?\n\n\nThis situation is occuring because of two changes. The first being the \ndifference in how the server is handling the above update in 7.2 vs. \n7.3. The second is a change in the jdbc driver in 7.3. The actual \nupdate in jdbc looks like:\nupdate test set col_a = nullif(?, -1);\nand a \"setLong(1, 200)\" call is being done. In 7.2 the jdbc driver \nbound the long/bigint value as a plain number, but in 7.3 it binds it \nwith quotes making it type text and exposing the change in server \nbehavior. This change was made in the jdbc driver to work around the \nfact that indexes are not used for int2 or int8 columns unless the value \nis enclosed in quotes (or an explicit cast is used). I am not sure if \nthe recent changes for implicit casts fixes this index usage problem in \nthe server or not.\n\nSo I have three options here:\n\n1) if this is a server bug wait for a fix for 7.3\n2) revert the jdbc driver back to not quoting int2 and int8 values\n - If the server now handles using indexes on int2/int8 columns then\n this should be done anyway\n - It the server still has problems with using indexes without the\n quotes then this removes an often requested bugfix/workaround\n for the index usage problem\n3) Just have people rework their sql to avoid the change in behavior\n\nAny suggestions?\n\n\nthanks,\n--Barry\n\n\n", "msg_date": "Sat, 12 Oct 2002 21:19:52 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Difference between 7.2 and 7.3, possible bug?" 
}, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> create table test (col_a bigint);\n> update test set col_a = nullif('200', -1);\n> The above works fine on 7.2 but the update fails on 7.3b2 with the \n> following error:\n> ERROR: column \"col_a\" is of type bigint but expression is of type text\n> \tYou will need to rewrite or cast the expression\n\n> Is this change in behavior intentional or is it a bug?\n\nThis is an intentional tightening of implicit-cast behavior.\n\n> This situation is occuring because of two changes. The first being the \n> difference in how the server is handling the above update in 7.2 vs. \n> 7.3. The second is a change in the jdbc driver in 7.3. The actual \n> update in jdbc looks like:\n> update test set col_a = nullif(?, -1);\n> and a \"setLong(1, 200)\" call is being done. In 7.2 the jdbc driver \n> bound the long/bigint value as a plain number, but in 7.3 it binds it \n> with quotes making it type text and exposing the change in server \n> behavior.\n\nI would say that that is a very bad decision in the JDBC driver and\nshould be reverted ... especially if the driver is not bright enough\nto notice the context in which the parameter is being used. Consider\nfor example\n\nregression=# select 12 + 34;\n ?column?\n----------\n 46\n(1 row)\n\nregression=# select '12' + '34';\n ?column?\n----------\n d\n(1 row)\n\nNot exactly the expected result ...\n\n> 2) revert the jdbc driver back to not quoting int2 and int8 values\n> - If the server now handles using indexes on int2/int8 columns then\n> this should be done anyway\n> - It the server still has problems with using indexes without the\n> quotes then this removes an often requested bugfix/workaround\n> for the index usage problem\n\nYou are trying to mask a server problem in the driver. This is not a\ngood idea. 
The server problem is short-term (yes, we've finally agreed\nhow to fix it, and it will happen in 7.4), but a client-library hack to\nmask it will cause problems indefinitely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Oct 2002 01:05:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Difference between 7.2 and 7.3, possible bug? " }, { "msg_contents": "\n\nTom Lane wrote:\n> I would say that that is a very bad decision in the JDBC driver and\n> should be reverted ... especially if the driver is not bright enough\n> to notice the context in which the parameter is being used. Consider\n> for example\n...\n> \n> You are trying to mask a server problem in the driver. This is not a\n> good idea. The server problem is short-term (yes, we've finally agreed\n> how to fix it, and it will happen in 7.4), but a client-library hack to\n> mask it will cause problems indefinitely.\n> \n> \t\t\tregards, tom lane\n> \n\nTom,\n\nThanks for the quick reply. I will back out the jdbc change. It was \none of those changes I did reluctantly. I have been pushing back for a \ncouple of releases now saying that this is a server bug and needs to be \nfixed there. But it didn't seem like that was ever going to happen so I \nfinally gave in. For some users this bug of not using indexes for \nint2/int8 makes it impossible for them to use postgres. This happens \nfor users who are using a database abstraction layer that doesn't allow \nthe user to actually touch the sql being sent to the server. Therefore \nthey have no opportunity to work around the underlying bug and can't use \npostgres as their database because of the performance problems.\n\nI am glad to here this is finally getting resolved for 7.4.\n\nthanks,\n--Barry\n\n\n", "msg_date": "Sat, 12 Oct 2002 22:35:06 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Difference between 7.2 and 7.3, possible bug?" } ]
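The difference between binding the long as 200 and as '200' comes down to the text the driver substitutes for the placeholder. A minimal sketch in C of a bind helper that writes numeric parameters unquoted, so the server can infer an integer type, might look like this (bind_long is an invented name for illustration, not part of any PostgreSQL client library):

```c
#include <stdio.h>
#include <string.h>

/* Substitute the first '?' placeholder in tmpl with a long value,
 * written unquoted so the server sees a numeric literal, not text.
 * Returns 0 on success, -1 if out is too small or no '?' is found. */
static int bind_long(char *out, size_t outsz, const char *tmpl, long v)
{
    const char *q = strchr(tmpl, '?');
    if (q == NULL)
        return -1;
    int n = snprintf(out, outsz, "%.*s%ld%s",
                     (int) (q - tmpl), tmpl, v, q + 1);
    return (n >= 0 && (size_t) n < outsz) ? 0 : -1;
}
```

With this, `update test set col_a = nullif(?, -1)` bound with 200 becomes `nullif(200, -1)`, which parses as an integer expression; quoting the value instead would make it text and trip the tightened implicit-cast rules discussed above.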
[ { "msg_contents": "\nI was spending some time investigating how to fix the jdbc driver to \ndeal with the autocommit functionality in 7.3. I am trying to come up \nwith a way of using 'set autocommit = on/off' as a way of implementing \nthe jdbc symantics for autocommit. The current code just inserts a \n'begin' after every commit or rollback when autocommit is turned off in \njdbc.\n\nI can continue to use the old way and just issue a 'set autocommit = on' \nat connection initialization, but I wanted to investigate if using 'set \nautocommit = off' would be a better implementation.\n\nThe problem I am having is finding a way to turn autocommit on or off \nwithout generating warning messages, or without having the change \naccidentally rolled back later.\n\nBelow is the current behavior (based on a fresh pull from cvs this morning):\n\nKey: ACon = autocommit on\n ACoff = autocommit off\n NIT = not in transaction\n IT = in transaction\n IT* = in transaction where a rollback will change autocommit state\n\nCurrent State Action End State\nACon and NIT set ACon ACon and NIT\n set ACoff ACoff and IT*\nACon and IT set ACon ACon and IT\n set ACoff ACoff and IT*\nACon and IT* set ACon ACon and IT*\n set ACoff ACoff and IT\nACoff and NIT set ACon ACon and NIT\n set ACoff ACoff and IT\nACoff and IT set ACon ACon and IT*\n set ACoff ACoff and IT\nACoff and IT* set ACon ACon and IT\n set ACoff ACoff and IT*\n\nThere are two conclusions I have drawn from this:\n\n1) Without the ability for the client to know the current transaction \nstate it isn't feasible to use set autocommit = on/off in the client. \nThere will either end up being spurious warning messages about \ntransaction already in process or no transaction in process, or \nsituations where a subsequent rollback can undo the change. 
So I will \nstay with the current functionality in the jdbc driver until the FE/BE \nprotocol provides access to the transaction status.\n\n2) In one place the current functionality doesn't make sense (at least \nto me).\nACon and NIT set ACoff ACoff and IT*\n\nIf I am running in autocommit mode and I issue a command I expect that \ncommand to be committed. But that is not the case here. I would have \nexpected the result to be: ACoff and NIT\n\n\nthanks,\n--Barry\n\n\n\n", "msg_date": "Sun, 13 Oct 2002 01:41:39 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "experiences with autocommit functionality in 7.3" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> Below is the current behavior (based on a fresh pull from cvs this morning):\n> Current State Action End State\n> ACon and NIT set ACon ACon and NIT\n> set ACoff ACoff and IT*\n\nBruce was supposed to fix this. We agreed that a SET command would\nnever initiate a transaction block on its own. Looks like it's not\nthere yet --- but IMHO the behavior should be\n\nACon and NIT set ACon ACon and NIT\n set ACoff ACoff and NIT\nACon and IT set ACon ACon and IT\n set ACoff ACoff and IT*\nACon and IT* set ACon ACon and IT*\n set ACoff ACoff and IT\nACoff and NIT set ACon ACon and NIT\n set ACoff ACoff and NIT\nACoff and IT set ACon ACon and IT*\n set ACoff ACoff and IT\nACoff and IT* set ACon ACon and IT\n set ACoff ACoff and IT*\n\nWill that resolve your concern?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Oct 2002 10:51:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: experiences with autocommit functionality in 7.3 " }, { "msg_contents": "I said:\n> Bruce was supposed to fix this. We agreed that a SET command would\n> never initiate a transaction block on its own. Looks like it's not\n> there yet ---\n\nNow it is. 
Give it another try ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Oct 2002 13:10:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: experiences with autocommit functionality in 7.3 " }, { "msg_contents": "Tom Lane wrote:\n> Barry Lind <barry@xythos.com> writes:\n> > Below is the current behavior (based on a fresh pull from cvs this morning):\n> > Current State Action End State\n> > ACon and NIT set ACon ACon and NIT\n> > set ACoff ACoff and IT*\n> \n> Bruce was supposed to fix this. We agreed that a SET command would\n> never initiate a transaction block on its own. Looks like it's not\n> there yet --- but IMHO the behavior should be\n\nWell, I thought I did it, and it did work on my limited number of test\ncases. Seems you got it fully working.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 13 Oct 2002 23:35:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: experiences with autocommit functionality in 7.3" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Well, I thought I did it, and it did work on my limited number of test\n> cases. Seems you got it fully working.\n\nActually, it failed for me (and evidently for Barry) on exactly the test\ncase you posted along with the patch. You said\n\n> test=> set autocommit = off;\n> SET\n> test=> commit;\n> WARNING: COMMIT: no transaction in progress\n> COMMIT\n\nbut in fact I saw the COMMIT succeeding without complaint. 
I was\nmeaning to ask you just what code you'd tested, because this morning's\nCVS tip did *not* behave as above.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Oct 2002 23:47:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: experiences with autocommit functionality in 7.3 " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Well, I thought I did it, and it did work on my limited number of test\n> > cases. Seems you got it fully working.\n> \n> Actually, it failed for me (and evidently for Barry) on exactly the test\n> case you posted along with the patch. You said\n> \n> > test=> set autocommit = off;\n> > SET\n> > test=> commit;\n> > WARNING: COMMIT: no transaction in progress\n> > COMMIT\n> \n> but in fact I saw the COMMIT succeeding without complaint. I was\n> meaning to ask you just what code you'd tested, because this morning's\n> CVS tip did *not* behave as above.\n\nI am stumped myself as well. I still have the CVS of my old code, and\nit fails just as you saw, but I know I tested it and copied that into\nthe email via cut/paste so my only guess is that I tweaked something\nafter I ran the test and if broke something else. If you got it all\nworking now, I won't research further.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 00:15:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: experiences with autocommit functionality in 7.3" } ]
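Tom's corrected table boils down to two rules: SET never opens a transaction block, and changing the autocommit flag inside an open block flips IT and IT*, since a later ROLLBACK would also undo the SET. As a sanity check, the table can be restated as a small C function (an illustration of the rules only, not the server's implementation):

```c
/* Transaction-block states from the table above:
 * NIT = not in transaction, IT = in transaction,
 * IT_STAR = in transaction where rollback changes autocommit state. */
typedef enum { NIT, IT, IT_STAR } TxState;

typedef struct {
    int ac;        /* autocommit on? */
    TxState tx;
} SessionState;

/* Apply "set autocommit = <target>" per the agreed 7.3 behavior:
 * outside a transaction block the SET changes nothing but the flag;
 * inside one, changing the flag moves IT <-> IT*, since a later
 * ROLLBACK would roll the SET back too. */
static SessionState set_autocommit(SessionState s, int target)
{
    if (s.tx != NIT && s.ac != target)
        s.tx = (s.tx == IT) ? IT_STAR : IT;
    s.ac = target;
    return s;
}
```

Running all twelve rows of Tom's table through this function reproduces his end states exactly, including the row Barry objected to: ACon and NIT with `set ACoff` now ends at ACoff and NIT rather than opening a block.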
[ { "msg_contents": "I just cut and pasted someone's mac address:\n\npatrimoine=# update ethernet set mac='00-00-39-AB-92-FO' where id=623;\nUPDATE 1\npatrimoine=# select mac from ethernet where id=623;\n mac \n-------------------\n 00:00:39:ab:92:0f\n(1 row)\n\n\nNote the typo \"O\" instead of \"0\". I can see how that happened - should it\nbe \"notify\"ed against?\n\n(pre-25 Sept code, 7.3b1)\n\nCheers,\n\nPatrick\n", "msg_date": "Sun, 13 Oct 2002 14:16:09 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "mac typo prob?" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> I just cut and pasted someone's mac address:\n> patrimoine=# update ethernet set mac='00-00-39-AB-92-FO' where id=623;\n> UPDATE 1\n> patrimoine=# select mac from ethernet where id=623;\n> mac \n> -------------------\n> 00:00:39:ab:92:0f\n> (1 row)\n\n\n> Note the typo \"O\" instead of \"0\". I can see how that happened - should it\n> be \"notify\"ed against?\n\nNo, it should be an error IMHO. macaddr_in() is failing to check for\ntrailing junk, which is a standard problem for sscanf-based parsing...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Oct 2002 10:55:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mac typo prob? " } ]
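Tom's diagnosis is that macaddr_in() fails to check for trailing junk, a standard hazard of sscanf-based parsing. The conventional fix is to capture the number of consumed characters with %n and insist that the input ends there. A minimal sketch (the helper name mac_ok is invented here; the real macaddr_in accepts several separator conventions):

```c
#include <stdio.h>

/* Return 1 if s is exactly six hyphen-separated hex octets with
 * nothing left over, 0 otherwise. %n records how many bytes were
 * consumed, so "00-00-39-AB-92-FO" fails: the final 'O' (letter)
 * is not a hex digit and is left unconsumed. */
static int mac_ok(const char *s)
{
    unsigned int b[6];
    int consumed = 0;

    if (sscanf(s, "%2x-%2x-%2x-%2x-%2x-%2x%n",
               &b[0], &b[1], &b[2], &b[3], &b[4], &b[5], &consumed) != 6)
        return 0;
    return s[consumed] == '\0';
}
```

Applied to the report above, the typoed "00-00-39-AB-92-FO" is rejected outright instead of being silently truncated and stored as 00:00:39:ab:92:0f.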
[ { "msg_contents": "----- Forwarded message from Bruce Momjian <pgman@candle.pha.pa.us> -----\n\nI don't know. Would you ask hackers list, and perhaps CC the author of\nthat patch.\n\n---------------------------------------------------------------------------\n\nDenis A Ustimenko wrote:\n> Bruce, why have all precise time calculations been droped out in 1.206? If there is no\n> gettimeofday in win32?\n> \n----- End forwarded message -----\n\n-- \nRegards\nDenis\n", "msg_date": "Mon, 14 Oct 2002 10:32:19 +0700", "msg_from": "Denis A Ustimenko <denis@oldham.ru>", "msg_from_op": true, "msg_subject": "droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Denis A Ustimenko wrote:\n>>Bruce, why have all precise time calculations been droped out in 1.206? If there is no\n>>gettimeofday in win32?\n\ngettimeofday was not portable to win32 (at least not that I could find) and \nhence broke the win32 build of the clients.\n\nJoe\n\n\n", "msg_date": "Sun, 13 Oct 2002 21:02:55 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "On Sun, Oct 13, 2002 at 09:02:55PM -0700, Joe Conway wrote:\n> Denis A Ustimenko wrote:\n> >>Bruce, why have all precise time calculations been droped out in 1.206? 
\n> >>If there is no\n> >>gettimeofday in win32?\n> \n> gettimeofday was not portable to win32 (at least not that I could find) and \n> hence broke the win32 build of the clients.\n> \n\nGetSystemTimeAsFileTime should help.\n\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/sysinfo/base/getsystemtimeasfiletime.asp\n\n-- \nRegards\nDenis\n", "msg_date": "Mon, 14 Oct 2002 11:48:27 +0700", "msg_from": "Denis A Ustimenko <denis@oldham.ru>", "msg_from_op": true, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Denis A Ustimenko wrote:\n> On Sun, Oct 13, 2002 at 09:02:55PM -0700, Joe Conway wrote:\n> > Denis A Ustimenko wrote:\n> > >>Bruce, why have all precise time calculations been droped out in 1.206? \n> > >>If there is no\n> > >>gettimeofday in win32?\n> > \n> > gettimeofday was not portable to win32 (at least not that I could find) and \n> > hence broke the win32 build of the clients.\n> > \n> \n> GetSystemTimeAsFileTime should help.\n> \n> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sysinfo/base/getsystemtimeasfiletime.asp\n\nIt's not clear to me how we could get this into something we can deal\nwith like gettimeofday.\n\nI looked at the Apache APR project, and they have a routine that returns\nthe microseconds since 1970 for Unix:\n\t\n\t/* NB NB NB NB This returns GMT!!!!!!!!!! 
*/\n\tAPR_DECLARE(apr_time_t) apr_time_now(void)\n\t{\n\t struct timeval tv;\n\t gettimeofday(&tv, NULL);\n\t return tv.tv_sec * APR_USEC_PER_SEC + tv.tv_usec;\n\t}\n\nand for Win32:\n\t\n\tAPR_DECLARE(apr_time_t) apr_time_now(void)\n\t{\n\t LONGLONG aprtime = 0;\n\t FILETIME time;\n\t#ifndef _WIN32_WCE\n\t GetSystemTimeAsFileTime(&time);\n\t#else\n\t SYSTEMTIME st;\n\t GetSystemTime(&st);\n\t SystemTimeToFileTime(&st, &time);\n\t#endif\n\t FileTimeToAprTime(&aprtime, &time);\n\t return aprtime; \n\t}\n\nand FileTimeToAprTime() is:\n\t\n\t/* Number of micro-seconds between the beginning of the Windows epoch\n\t * (Jan. 1, 1601) and the Unix epoch (Jan. 1, 1970) \n\t */\n\t#define APR_DELTA_EPOCH_IN_USEC APR_TIME_C(11644473600000000);\n\t\n\t\n\t__inline void FileTimeToAprTime(apr_time_t *result, FILETIME *input)\n\t{\n\t /* Convert FILETIME one 64 bit number so we can work with it. */\n\t *result = input->dwHighDateTime;\n\t *result = (*result) << 32;\n\t *result |= input->dwLowDateTime;\n\t *result /= 10; /* Convert from 100 nano-sec periods to micro-seconds. */\n\t *result -= APR_DELTA_EPOCH_IN_USEC; /* Convert from Windows epoch to Unix epoch */\n\t return;\n\t}\n\nSo, this is what needs to be dealt with to get it working.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 01:00:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian wrote:\n> So, this is what needs to be dealt with to get it working.\n> \n\nMore to the point, why is sub-second precision needed in this function? \nConnection timeout is given to us in whole seconds (1.205 code, i.e. 
prior to \nthe patch in question):\n\n remains.tv_sec = atoi(conn->connect_timeout);\n if (!remains.tv_sec)\n {\n conn->status = CONNECTION_BAD;\n return 0;\n }\n remains.tv_usec = 0;\n rp = &remains;\n\nSo there is no way to bail out prior to one second. Once you accept that the \ntimeout is >= 1 second, and in whole second increments, why does it need \nsub-second resolution?\n\nJoe\n\n", "msg_date": "Sun, 13 Oct 2002 22:15:52 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> > So, this is what needs to be dealt with to get it working.\n> > \n> \n> More to the point, why is sub-second precision needed in this function? \n> Connection timeout is given to us in whole seconds (1.205 code, i.e. prior to \n> the patch in question):\n> \n> remains.tv_sec = atoi(conn->connect_timeout);\n> if (!remains.tv_sec)\n> {\n> conn->status = CONNECTION_BAD;\n> return 0;\n> }\n> remains.tv_usec = 0;\n> rp = &remains;\n> \n> So there is no way to bail out prior to one second. Once you accept that the \n> timeout is >= 1 second, and in whole second increments, why does it need \n> sub-second resolution?\n\nIt could be argued that our seconds are not as exact as they could be\nwith subsecond timing. Not sure it is worth it, but I can see the\npoint. I would like to remove the tv_usec test because it suggests we\nare doing something with microseconds when we are not. Also, should we\nswitch to a simple time() call, rather than gettimeofday()?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 01:48:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian wrote:\n> It could be argued that our seconds are not as exact as they could be\n> with subsecond timing. Not sure it is worth it, but I can see the\n> point.\n\nWell, if we were specifying the timeout in microseconds instead of seconds, it \nwould make sense to have better resolution. But when you can only specify the \ntimeout in seconds, the internal time comparison doesn't need to be any more \naccurate than seconds (IMHO anyway).\n\n> are doing something with microseconds when we are not. Also, should we\n> switch to a simple time() call, rather than gettimeofday()?\n> \n\nAlready done -- that's what Denis is unhappy about.\n\nJoe\n\n", "msg_date": "Sun, 13 Oct 2002 22:59:40 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "On Sun, Oct 13, 2002 at 10:59:40PM -0700, Joe Conway wrote:\n> Well, if we were specifying the timeout in microseconds instead of seconds, \n> it would make sense to have better resolution. But when you can only \n> specify the timeout in seconds, the internal time comparison doesn't need \n> to be any more accurate than seconds (IMHO anyway).\n\nActually we have the state machine in connectDBComplete() and the timeout is\nset for machine as the whole. Therefore if 1 second timeout is seted for the\nconnectDBComplete() the timeout of particualr iteration of loop can be less\nthen 1 second. 
\n\n-- \nRegards\nDenis\n", "msg_date": "Mon, 14 Oct 2002 13:40:43 +0700", "msg_from": "Denis A Ustimenko <denis@oldham.ru>", "msg_from_op": true, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Denis A Ustimenko <denis@oldham.ru> writes:\n> On Sun, Oct 13, 2002 at 10:59:40PM -0700, Joe Conway wrote:\n>> Well, if we were specifying the timeout in microseconds instead of seconds, \n>> it would make sense to have better resolution. But when you can only \n>> specify the timeout in seconds, the internal time comparison doesn't need \n>> to be any more accurate than seconds (IMHO anyway).\n\n> Actually we have the state machine in connectDBComplete() and the timeout is\n> set for machine as the whole. Therefore if 1 second timeout is seted for the\n> connectDBComplete() the timeout of particualr iteration of loop can be less\n> then 1 second. \n\nHowever, the code's been restructured so that we don't need to keep\ntrack of the exact time spent in any one iteration. The error is only\non the overall delay. I agree with Joe that it's not worth the effort\nneeded (in the Win32 case) to make the timeout accurate to < 1 sec.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 09:53:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> > It could be argued that our seconds are not as exact as they could be\n> > with subsecond timing. Not sure it is worth it, but I can see the\n> > point.\n> \n> Well, if we were specifying the timeout in microseconds instead of seconds, it \n> would make sense to have better resolution. 
But when you can only specify the \n> timeout in seconds, the internal time comparison doesn't need to be any more \n> accurate than seconds (IMHO anyway).\n> \n> > are doing something with microseconds when we are not. Also, should we\n> > switch to a simple time() call, rather than gettimeofday()?\n> > \n> \n> Already done -- that's what Denis is unhappy about.\n\nOK, I see that, but now, we are stuffing everything into a timeval\nstruct. Does that make sense? Shouldn't we just use time_t? I realize\nwe need the timeval struct for select() in pqWaitTimed, but we are\nmaking a copy of the timeval we pass in anyway. Seems it would be easier\njust making it a time_t.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 10:49:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Already done -- that's what Denis is unhappy about.\n\n> OK, I see that, but now, we are stuffing everything into a timeval\n> struct. Does that make sense? Shouldn't we just use time_t?\n\nYeah, the code could be simplified now. I'm also still not happy about\nthe question of whether it's safe to assume tv_sec is signed. 
I think\nthe running state should be just finish_time, and then inside the\nloop when we are about to wait, we could do\n\n\tcurrent_time = time(NULL);\n\tif (current_time >= finish_time)\n\t{\n\t\t// failure exit\n\t}\n\tremains.tv_sec = finish_time - current_time;\n\tremains.tv_usec = 0;\n\t// pass &remains to select...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 11:19:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Already done -- that's what Denis is unhappy about.\n> \n> > OK, I see that, but now, we are stuffing everything into a timeval\n> > struct. Does that make sense? Shouldn't we just use time_t?\n> \n> Yeah, the code could be simplified now. I'm also still not happy about\n> the question of whether it's safe to assume tv_sec is signed. I think\n> the running state should be just finish_time, and then inside the\n> loop when we are about to wait, we could do\n> \n> \tcurrent_time = time(NULL);\n> \tif (current_time >= finish_time)\n> \t{\n> \t\t// failure exit\n> \t}\n> \tremains.tv_sec = finish_time - current_time;\n> \tremains.tv_usec = 0;\n> \t// pass &remains to select...\n\nThat whole remains structure should be a time_t variable, and then we\n_know_ we can't assume it is signed. The use of timeval should\nhappen only in pqWaitTimed because it has to use select().\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 11:58:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> That whole remains structure should be a time_t variable, and then we\n> _know_ we can't assume it is signed. The use of timeval should\n> happen only in pqWaitTimed because it has to use select().\n\nI think it's fine to use struct timeval as the parameter type for\npqWaitTimed. This particular caller of pqWaitTimed has no need for\nsub-second wait precision, but that doesn't mean we might not want it\nfor other purposes later.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 12:10:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > That whole remains structure should be a time_t variable, and then we\n> > _know_ we can't assume it is signed. The use of timeval should\n> > happen only in pqWaitTimed because it has to use select().\n> \n> I think it's fine to use struct timeval as the parameter type for\n> pqWaitTimed. This particular caller of pqWaitTimed has no need for\n> sub-second wait precision, but that doesn't mean we might not want it\n> for other purposes later.\n\nThat was a question: whether pqWaitTimed() was something exported by\nlibpq and therefore something that has an API that shouldn't change. 
I\nsee it in libpq-int.h, which I think means it isn't exported, but yes,\nthere could be later cases where we need subsecond stuff.\n\nI have applied the following patch to get us a little closer to sanity.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.208\ndiff -c -c -r1.208 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t11 Oct 2002 04:41:59 -0000\t1.208\n--- src/interfaces/libpq/fe-connect.c\t14 Oct 2002 17:10:19 -0000\n***************\n*** 1071,1085 ****\n \t\t\tconn->status = CONNECTION_BAD;\n \t\t\treturn 0;\n \t\t}\n! \t\tremains.tv_usec = 0;\n \t\trp = &remains;\n \n \t\t/* calculate the finish time based on start + timeout */\n \t\tfinish_time = time((time_t *) NULL) + remains.tv_sec;\n \t}\n \n! \twhile (rp == NULL || remains.tv_sec > 0 ||\n! \t\t (remains.tv_sec == 0 && remains.tv_usec > 0))\n \t{\n \t\t/*\n \t\t * Wait, if necessary.\tNote that the initial state (just after\n--- 1071,1084 ----\n \t\t\tconn->status = CONNECTION_BAD;\n \t\t\treturn 0;\n \t\t}\n! \t\tremains.tv_usec = 0;\t/* We don't use subsecond timing */\n \t\trp = &remains;\n \n \t\t/* calculate the finish time based on start + timeout */\n \t\tfinish_time = time((time_t *) NULL) + remains.tv_sec;\n \t}\n \n! 
\twhile (rp == NULL || remains.tv_sec > 0)\n \t{\n \t\t/*\n \t\t * Wait, if necessary.\tNote that the initial state (just after\n***************\n*** 1133,1139 ****\n \t\t\t}\n \n \t\t\tremains.tv_sec = finish_time - current_time;\n- \t\t\tremains.tv_usec = 0;\n \t\t}\n \t}\n \tconn->status = CONNECTION_BAD;\n--- 1132,1137 ----\nIndex: src/interfaces/libpq/fe-misc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.80\ndiff -c -c -r1.80 fe-misc.c\n*** src/interfaces/libpq/fe-misc.c\t3 Oct 2002 17:09:42 -0000\t1.80\n--- src/interfaces/libpq/fe-misc.c\t14 Oct 2002 17:10:22 -0000\n***************\n*** 783,796 ****\n }\n \n int\n! pqWaitTimed(int forRead, int forWrite, PGconn *conn, const struct timeval * timeout)\n {\n \tfd_set\t\tinput_mask;\n \tfd_set\t\toutput_mask;\n \tfd_set\t\texcept_mask;\n \n \tstruct timeval tmp_timeout;\n- \tstruct timeval *ptmp_timeout = NULL;\n \n \tif (conn->sock < 0)\n \t{\n--- 783,795 ----\n }\n \n int\n! pqWaitTimed(int forRead, int forWrite, PGconn *conn, const struct timeval *timeout)\n {\n \tfd_set\t\tinput_mask;\n \tfd_set\t\toutput_mask;\n \tfd_set\t\texcept_mask;\n \n \tstruct timeval tmp_timeout;\n \n \tif (conn->sock < 0)\n \t{\n***************\n*** 823,836 ****\n \t\tif (NULL != timeout)\n \t\t{\n \t\t\t/*\n! \t\t\t * select may modify timeout argument on some platforms use\n! \t\t\t * copy\n \t\t\t */\n \t\t\ttmp_timeout = *timeout;\n- \t\t\tptmp_timeout = &tmp_timeout;\n \t\t}\n \t\tif (select(conn->sock + 1, &input_mask, &output_mask,\n! \t\t\t\t &except_mask, ptmp_timeout) < 0)\n \t\t{\n \t\t\tif (SOCK_ERRNO == EINTR)\n \t\t\t\tgoto retry5;\n--- 822,834 ----\n \t\tif (NULL != timeout)\n \t\t{\n \t\t\t/*\n! \t\t\t * \tselect() may modify timeout argument on some platforms so\n! \t\t\t *\tuse copy\n \t\t\t */\n \t\t\ttmp_timeout = *timeout;\n \t\t}\n \t\tif (select(conn->sock + 1, &input_mask, &output_mask,\n! 
\t\t\t\t &except_mask, &tmp_timeout) < 0)\n \t\t{\n \t\t\tif (SOCK_ERRNO == EINTR)\n \t\t\t\tgoto retry5;\nIndex: src/interfaces/libpq/libpq-int.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/libpq-int.h,v\nretrieving revision 1.58\ndiff -c -c -r1.58 libpq-int.h\n*** src/interfaces/libpq/libpq-int.h\t3 Oct 2002 17:09:42 -0000\t1.58\n--- src/interfaces/libpq/libpq-int.h\t14 Oct 2002 17:10:24 -0000\n***************\n*** 340,346 ****\n extern int\tpqFlush(PGconn *conn);\n extern int\tpqSendSome(PGconn *conn);\n extern int\tpqWait(int forRead, int forWrite, PGconn *conn);\n! extern int\tpqWaitTimed(int forRead, int forWrite, PGconn *conn, const struct timeval * timeout);\n extern int\tpqReadReady(PGconn *conn);\n extern int\tpqWriteReady(PGconn *conn);\n \n--- 340,346 ----\n extern int\tpqFlush(PGconn *conn);\n extern int\tpqSendSome(PGconn *conn);\n extern int\tpqWait(int forRead, int forWrite, PGconn *conn);\n! extern int\tpqWaitTimed(int forRead, int forWrite, PGconn *conn, const struct timeval *timeout);\n extern int\tpqReadReady(PGconn *conn);\n extern int\tpqWriteReady(PGconn *conn);", "msg_date": "Mon, 14 Oct 2002 13:14:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom, excuse me, I forget to copy previous posting to pgsql-hackers@postgresql.org.\n\nOn Mon, Oct 14, 2002 at 09:53:27AM -0400, Tom Lane wrote:\n> Denis A Ustimenko <denis@oldham.ru> writes:\n> > On Sun, Oct 13, 2002 at 10:59:40PM -0700, Joe Conway wrote:\n> >> Well, if we were specifying the timeout in microseconds instead of seconds, \n> >> it would make sense to have better resolution. 
But when you can only \n> >> specify the timeout in seconds, the internal time comparison doesn't need \n> >> to be any more accurate than seconds (IMHO anyway).\n> \n> > Actually we have the state machine in connectDBComplete() and the timeout is\n> > set for the machine as a whole. Therefore if a 1 second timeout is set for\n> > connectDBComplete() the timeout of a particular iteration of the loop can be less\n> > than 1 second. \n> \n> However, the code's been restructured so that we don't need to keep\n> track of the exact time spent in any one iteration. The error is only\n> on the overall delay. I agree with Joe that it's not worth the effort\n> needed (in the Win32 case) to make the timeout accurate to < 1 sec.\n> \n\nBeware of a possible error of almost 1 second. For example: connect_timeout == 1,\nwe start at 0.999999 then finish_time == 1. If the CPU is quite busy we will\ndo only one iteration. I don't know if that is enough to make a connection.\nThe true timeout in this case == 0.000001\n\n-- \nRegards\nDenis\n", "msg_date": "Tue, 15 Oct 2002 10:30:01 +0700", "msg_from": "Denis A Ustimenko <denis@oldham.ru>", "msg_from_op": true, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Denis A Ustimenko wrote:\n> Tom, excuse me, I forget to copy previous posting to pgsql-hackers@postgresql.org.\n> \n> On Mon, Oct 14, 2002 at 09:53:27AM -0400, Tom Lane wrote:\n> > Denis A Ustimenko <denis@oldham.ru> writes:\n> > > On Sun, Oct 13, 2002 at 10:59:40PM -0700, Joe Conway wrote:\n> > >> Well, if we were specifying the timeout in microseconds instead of seconds, \n> > >> it would make sense to have better resolution. But when you can only \n> > >> specify the timeout in seconds, the internal time comparison doesn't need \n> > >> to be any more accurate than seconds (IMHO anyway).\n> > \n> > > Actually we have the state machine in connectDBComplete() and the timeout is\n> > > set for the machine as a whole. Therefore if a 1 second timeout is set for\n> > > connectDBComplete() the timeout of a particular iteration of the loop can be less\n> > > than 1 second. \n> > \n> > However, the code's been restructured so that we don't need to keep\n> > track of the exact time spent in any one iteration. The error is only\n> > on the overall delay. I agree with Joe that it's not worth the effort\n> > needed (in the Win32 case) to make the timeout accurate to < 1 sec.\n> > \n> \n> Beware of a possible error of almost 1 second. For example: connect_timeout == 1,\n> we start at 0.999999 then finish_time == 1. If the CPU is quite busy we will\n> do only one iteration. I don't know if that is enough to make a connection.\n> The true timeout in this case == 0.000001\n\nGood question. What is going to happen is that select() is going to be\npassed tv_sec = 1, and it is going to sleep for one second. Now, if\nselect is interrupted, another time() call is going to be made. If the\nsecond ticked just before we run time(), we are going to think one\nsecond has elapsed and return, so in the non-interrupt case, we are\ngoing to be fine, but with select() interrupted, we are going to have\nthis problem. Now, if we used gettimeofday(), we would be fine, but we\ndon't have that on Win32 and the ported version of that looks pretty\ncomplicated. Maybe we will go with what we have and see if anyone\ncomplains.\n\nOne idea I am looking at is to pass a time_t into pqWaitTimeout, and do\nthat clock checking in there, and maybe I can do a gettimeofday() for\nnon-Win32 and a time() on Win32.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 23:38:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Denis A Ustimenko wrote:\n>> Beware of almost 1 second posiible error. For example: connect_timeout == 1,\n>> we start at 0.999999 then finish_time == 1. If CPU is quite busy we will\n>> do only one iteration. I don't know is it enough to make connection?\n>> True timeout in this case == 0.000001\n\n> Good question. What is going to happen is that select() is going to be\n> passed tv_sec = 1, and it is going to sleep for one second. Now, if\n> select is interrupted, another time() call is going to be made.\n\nThere is a very simple answer to this, which I think I suggested to Joe\noriginally, but it's not in the code now: the initial calculation of\nfinish_time = now() + timeout must add one. This ensures that any\nroundoff error is in the conservative direction of timing out later,\nrather than sooner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 00:01:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Denis A Ustimenko wrote:\n> >> Beware of almost 1 second posiible error. For example: connect_timeout == 1,\n> >> we start at 0.999999 then finish_time == 1. If CPU is quite busy we will\n> >> do only one iteration. I don't know is it enough to make connection?\n> >> True timeout in this case == 0.000001\n> \n> > Good question. What is going to happen is that select() is going to be\n> > passed tv_sec = 1, and it is going to sleep for one second. 
Now, if\n> > select is interrupted, another time() call is going to be made.\n> \n> There is a very simple answer to this, which I think I suggested to Joe\n> originally, but it's not in the code now: the initial calculation of\n> finish_time = now() + timeout must add one. This ensures that any\n> roundoff error is in the conservative direction of timing out later,\n> rather than sooner.\n\nYes, I saw that and will try to add it back into the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 00:22:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>>Good question. What is going to happen is that select() is going to be\n>>passed tv_sec = 1, and it is going to sleep for one second. Now, if\n>>select is interrupted, another time() call is going to be made.\n> \n> There is a very simple answer to this, which I think I suggested to Joe\n> originally, but it's not in the code now: the initial calculation of\n> finish_time = now() + timeout must add one. This ensures that any\n> roundoff error is in the conservative direction of timing out later,\n> rather than sooner.\n\nYes, my bad, I guess.\n\nThe thing was that with the extra +1, I was repeatedly getting a wall-clock \ntime of 2 seconds with a timeout set to 1 second. It seemed odd to have my 1 \nsecond timeout automatically turned into 2 seconds every time. With the \ncurrent code, I tried a timeout of 1 second at least a 100 times and it always \ntook about 1 full wall-clock second. 
But I guess if there is some corner case \nthat needs it...\n\nJoe\n\n", "msg_date": "Tue, 15 Oct 2002 09:31:41 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> The thing was that with the extra +1, I was repeatedly getting a wall-clock \n> time of 2 seconds with a timeout set to 1 second. It seemed odd to have my 1 \n> second timeout automatically turned into 2 seconds every time.\n\nThat is odd; seems like you should get between 1 and 2 seconds. How\nwere you measuring the delay, exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 13:08:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > The thing was that with the extra +1, I was repeatedly getting a wall-clock \n> > time of 2 seconds with a timeout set to 1 second. It seemed odd to have my 1 \n> > second timeout automatically turned into 2 seconds every time.\n> \n> That is odd; seems like you should get between 1 and 2 seconds. How\n> were you measuring the delay, exactly?\n\nRemember, that if you add 1, the select() is going to get tv_sec = 2, so\nyes, it will be two seconds.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 16:13:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> That is odd; seems like you should get between 1 and 2 seconds. How\n>> were you measuring the delay, exactly?\n\n> Remember, that if you add 1, the select() is going to get tv_sec = 2, so\n> yes, it will be two seconds.\n\nYeah, but only if the value isn't recalculated shortly later. Consider\n\n\tcaller computes finish_time = time() + timeout;\n\n\t...\n\n\tinside select-wait loop, compute max_delay = finish_time - time();\n\nIf the time() value has incremented by 1 second between these two lines\nof code, you have a problem with a 1-second timeout...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 16:42:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> That is odd; seems like you should get between 1 and 2 seconds. How\n> >> were you measuring the delay, exactly?\n> \n> > Remember, that if you add 1, the select() is going to get tv_sec = 2, so\n> > yes, it will be two seconds.\n> \n> Yeah, but only if the value isn't recalculated shortly later. Consider\n> \n> \tcaller computes finish_time = time() + timeout;\n> \n> \t...\n> \n> \tinside select-wait loop, compute max_delay = finish_time - time();\n> \n> If the time() value has incremented by 1 second between these two lines\n> of code, you have a problem with a 1-second timeout...\n\nYep. 
If you track finish time, you get that 1 second rounding problem,\nand if you track just duration/timeout, you get into the problem of not\nknowing when the timeout has ended. I don't think these can be fixed\nexcept by overestimating (+1) or by tracking subseconds along with\nseconds so you really know when one second has elapsed.\n\nPerhaps we need to modify a timeout of 1 to be 2 and leave other values\nalone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 17:42:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n > Joe Conway <mail@joeconway.com> writes:\n >\n >> The thing was that with the extra +1, I was repeatedly getting a\n >> wall-clock time of 2 seconds with a timeout set to 1 second. It seemed\n >> odd to have my 1 second timeout automatically turned into 2 seconds every\n >> time.\n >\n > That is odd; seems like you should get between 1 and 2 seconds. How were\n > you measuring the delay, exactly?\n\nOK. I got a little more scientific about my testing. I used a php script, \nrunning on the same machine, to connect/disconnect in a tight loop and timed \nsuccessful and unsuccessful connection attempts using microtime().\n\nHere are the results. 
First with current cvs code:\n\ncurrent cvs libpq code\n-----------------------\ngood connect info, using unix socket, timeout = 1 second:\n=========================================================\nunsuccessful 69 times: sum 0.41736388206482: avg 0.0060487519139829\nsuccessful 9931 times: sum 68.798981308937: avg 0.0069276992557584\n\ngood connect info, using hostaddr, timeout = 1 second\n=====================================================\nunsuccessful 72 times: sum 0.37020063400269: avg 0.0051416754722595\nsuccessful 9928 times: sum 75.047878861427: avg 0.0075592142285886\n\ncurrent cvs libpq code - bad hostaddr, using hostaddr, timeout = 1 second\n=========================================================================\nunsuccessful 100 times: sum 99.975758910179: avg 0.99975758910179\nsuccessful 0 times: sum 0: avg n/a\n\n\nClearly not good. The timeout code is causing connection failures about 0.7% \nof the time. Next are the results using the attached patch. Per Bruce's \nsuggestion, it only adds 1 if the timeout is set to 1.\n\n\nwith patch libpq code\n---------------------\ngood connect info, using unix socket, timeout = 1 second\n========================================================\nunsuccessful 0 times: sum 0: avg n/a\nsuccessful 10000 times: sum 68.95981669426: avg 0.006895981669426\n\nwith patch libpq code - good connect info, using hostaddr, timeout = 1 second\n=============================================================================\nunsuccessful 0 times: sum 0: avg n/a\nsuccessful 10000 times: sum 73.500863552094: avg 0.0073500863552094\n\nwith patch libpq code - good connect info, using hostaddr, timeout = 2 seconds\n==============================================================================\nunsuccessful 0 times: sum 0: avg n/a\nsuccessful 10000 times: sum 73.354710936546: avg 0.0073354710936546\n\nwith patch libpq code - bad hostaddr, using hostaddr, timeout = 1 second\n========================================================================\nunsuccessful 100 times: sum 149.98181843758: avg 1.4998181843758\nsuccessful 0 times: sum 0: avg n/a\n\nwith patch libpq code - bad hostaddr, using hostaddr, timeout = 2 seconds\n=========================================================================\nunsuccessful 100 times: sum 149.98445630074: avg 1.4998445630074\nsuccessful 0 times: sum 0: avg n/a\n\nwith patch libpq code - bad hostaddr, using hostaddr, timeout = 3 seconds\n=========================================================================\nunsuccessful 20 times: sum 59.842629671097: avg 2.9921314835548\nsuccessful 0 times: sum 0: avg n/a\n\n\nWith the patch there were 0 failures on 30000 attempts using good connect \ninformation.\n\nIf there are no objections, please apply the attached. Otherwise let me know \nif you'd like different tests or would like to try other approaches.\n\nThanks,\n\nJoe", "msg_date": "Tue, 15 Oct 2002 16:42:02 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> [ some convincing test cases that timeout=1 is not good ]\n\n> \t\tremains.tv_sec = atoi(conn->connect_timeout);\n> + \t\tif (remains.tv_sec == 1)\n> + \t\t\tremains.tv_sec += 1;\n> \t\tif (!remains.tv_sec)\n> \t\t{\n> \t\t\tconn->status = CONNECTION_BAD;\n\nOn pure-paranoia grounds, I'd suggest the logic\n\n+\t\t/* force a sane minimum delay */\n+ \t\tif (remains.tv_sec < 2)\n+ \t\t\tremains.tv_sec = 2;\n\nwhereupon you could remove the failure check just below.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 22:41:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway 
<mail@joeconway.com> writes:\n> > [ some convincing test cases that timeout=1 is not good ]\n> \n> > \t\tremains.tv_sec = atoi(conn->connect_timeout);\n> > + \t\tif (remains.tv_sec == 1)\n> > + \t\t\tremains.tv_sec += 1;\n> > \t\tif (!remains.tv_sec)\n> > \t\t{\n> > \t\t\tconn->status = CONNECTION_BAD;\n> \n> On pure-paranoia grounds, I'd suggest the logic\n> \n> +\t\t/* force a sane minimum delay */\n> + \t\tif (remains.tv_sec < 2)\n> + \t\t\tremains.tv_sec = 2;\n> \n> whereupon you could remove the failure check just below.\n\nI think we should fail if they set the timeout to zero, rather than\ncover it up by setting the delay to two.\n\nAttached is a patch that implements most of what we have discussed:\n\n\tuse time()\n\tget rid of timeval where not needed\n\tallow restart of select() to properly compute remaining time\n\tadd 1 to timeout == 1\n\tpass finish time to pqWaitTimed\n\nPatch applied. I am applying it so it is in CVS and everyone can see\nit. I will keep modifying it until everyone likes it. It is just\neasier to do it that way when multiple people are reviewing it. They\ncan jump in and make changes too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.210\ndiff -c -c -r1.210 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t15 Oct 2002 01:48:25 -0000\t1.210\n--- src/interfaces/libpq/fe-connect.c\t16 Oct 2002 02:48:07 -0000\n***************\n*** 1052,1061 ****\n {\n \tPostgresPollingStatusType flag = PGRES_POLLING_WRITING;\n \n! \ttime_t\t\t\tfinish_time = 0,\n! \t\t\t\t\tcurrent_time;\n! \tstruct timeval\tremains,\n! 
\t\t\t\t *rp = NULL;\n \n \tif (conn == NULL || conn->status == CONNECTION_BAD)\n \t\treturn 0;\n--- 1052,1058 ----\n {\n \tPostgresPollingStatusType flag = PGRES_POLLING_WRITING;\n \n! \ttime_t\t\t\tfinish_time = -1;\n \n \tif (conn == NULL || conn->status == CONNECTION_BAD)\n \t\treturn 0;\n***************\n*** 1065,1084 ****\n \t */\n \tif (conn->connect_timeout != NULL)\n \t{\n! \t\tremains.tv_sec = atoi(conn->connect_timeout);\n! \t\tif (!remains.tv_sec)\n \t\t{\n \t\t\tconn->status = CONNECTION_BAD;\n \t\t\treturn 0;\n \t\t}\n! \t\tremains.tv_usec = 0;\t/* We don't use subsecond timing */\n! \t\trp = &remains;\n! \n \t\t/* calculate the finish time based on start + timeout */\n! \t\tfinish_time = time((time_t *) NULL) + remains.tv_sec;\n \t}\n \n! \twhile (rp == NULL || remains.tv_sec > 0)\n \t{\n \t\t/*\n \t\t * Wait, if necessary.\tNote that the initial state (just after\n--- 1062,1082 ----\n \t */\n \tif (conn->connect_timeout != NULL)\n \t{\n! \t\tint timeout = atoi(conn->connect_timeout);\n! \n! \t\tif (timeout == 0)\n \t\t{\n \t\t\tconn->status = CONNECTION_BAD;\n \t\t\treturn 0;\n \t\t}\n! \t\t/* Rounding could cause connection to fail;we need at least 2 secs */\n! \t\tif (timeout == 1)\n! \t\t\ttimeout++;\n \t\t/* calculate the finish time based on start + timeout */\n! \t\tfinish_time = time(NULL) + timeout;\n \t}\n \n! \twhile (finish_time == -1 || time(NULL) >= finish_time)\n \t{\n \t\t/*\n \t\t * Wait, if necessary.\tNote that the initial state (just after\n***************\n*** 1094,1100 ****\n \t\t\t\treturn 1;\t\t/* success! */\n \n \t\t\tcase PGRES_POLLING_READING:\n! \t\t\t\tif (pqWaitTimed(1, 0, conn, rp))\n \t\t\t\t{\n \t\t\t\t\tconn->status = CONNECTION_BAD;\n \t\t\t\t\treturn 0;\n--- 1092,1098 ----\n \t\t\t\treturn 1;\t\t/* success! */\n \n \t\t\tcase PGRES_POLLING_READING:\n! 
\t\t\t\tif (pqWaitTimed(1, 0, conn, finish_time))\n \t\t\t\t{\n \t\t\t\t\tconn->status = CONNECTION_BAD;\n \t\t\t\t\treturn 0;\n***************\n*** 1102,1108 ****\n \t\t\t\tbreak;\n \n \t\t\tcase PGRES_POLLING_WRITING:\n! \t\t\t\tif (pqWaitTimed(0, 1, conn, rp))\n \t\t\t\t{\n \t\t\t\t\tconn->status = CONNECTION_BAD;\n \t\t\t\t\treturn 0;\n--- 1100,1106 ----\n \t\t\t\tbreak;\n \n \t\t\tcase PGRES_POLLING_WRITING:\n! \t\t\t\tif (pqWaitTimed(0, 1, conn, finish_time))\n \t\t\t\t{\n \t\t\t\t\tconn->status = CONNECTION_BAD;\n \t\t\t\t\treturn 0;\n***************\n*** 1119,1138 ****\n \t\t * Now try to advance the state machine.\n \t\t */\n \t\tflag = PQconnectPoll(conn);\n- \n- \t\t/*\n- \t\t * If connecting timeout is set, calculate remaining time.\n- \t\t */\n- \t\tif (rp != NULL)\n- \t\t{\n- \t\t\tif (time(&current_time) == -1)\n- \t\t\t{\n- \t\t\t\tconn->status = CONNECTION_BAD;\n- \t\t\t\treturn 0;\n- \t\t\t}\n- \n- \t\t\tremains.tv_sec = finish_time - current_time;\n- \t\t}\n \t}\n \tconn->status = CONNECTION_BAD;\n \treturn 0;\n--- 1117,1122 ----\nIndex: src/interfaces/libpq/fe-misc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.83\ndiff -c -c -r1.83 fe-misc.c\n*** src/interfaces/libpq/fe-misc.c\t14 Oct 2002 18:11:17 -0000\t1.83\n--- src/interfaces/libpq/fe-misc.c\t16 Oct 2002 02:48:10 -0000\n***************\n*** 779,789 ****\n int\n pqWait(int forRead, int forWrite, PGconn *conn)\n {\n! \treturn pqWaitTimed(forRead, forWrite, conn, (const struct timeval *) NULL);\n }\n \n int\n! pqWaitTimed(int forRead, int forWrite, PGconn *conn, const struct timeval *timeout)\n {\n \tfd_set\t\tinput_mask;\n \tfd_set\t\toutput_mask;\n--- 779,789 ----\n int\n pqWait(int forRead, int forWrite, PGconn *conn)\n {\n! \treturn pqWaitTimed(forRead, forWrite, conn, -1);\n }\n \n int\n! 
pqWaitTimed(int forRead, int forWrite, PGconn *conn, time_t finish_time)\n {\n \tfd_set\t\tinput_mask;\n \tfd_set\t\toutput_mask;\n***************\n*** 820,826 ****\n \t\t\tFD_SET(conn->sock, &output_mask);\n \t\tFD_SET(conn->sock, &except_mask);\n \n! \t\tif (NULL != timeout)\n \t\t{\n \t\t\t/*\n \t\t\t * \tselect() may modify timeout argument on some platforms so\n--- 820,826 ----\n \t\t\tFD_SET(conn->sock, &output_mask);\n \t\tFD_SET(conn->sock, &except_mask);\n \n! \t\tif (finish_time != -1)\n \t\t{\n \t\t\t/*\n \t\t\t * \tselect() may modify timeout argument on some platforms so\n***************\n*** 831,837 ****\n \t\t\t *\tincorrect timings when select() is interrupted.\n \t\t\t *\tbjm 2002-10-14\n \t\t\t */\n! \t\t\ttmp_timeout = *timeout;\n \t\t\tptmp_timeout = &tmp_timeout;\n \t\t}\n \t\tif (select(conn->sock + 1, &input_mask, &output_mask,\n--- 831,839 ----\n \t\t\t *\tincorrect timings when select() is interrupted.\n \t\t\t *\tbjm 2002-10-14\n \t\t\t */\n! \t\t\tif ((tmp_timeout.tv_sec = finish_time - time(NULL)) < 0)\n! \t\t\t\ttmp_timeout.tv_sec = 0; /* possible? */\n! \t\t\ttmp_timeout.tv_usec = 0;\n \t\t\tptmp_timeout = &tmp_timeout;\n \t\t}\n \t\tif (select(conn->sock + 1, &input_mask, &output_mask,\nIndex: src/interfaces/libpq/libpq-int.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/libpq-int.h,v\nretrieving revision 1.59\ndiff -c -c -r1.59 libpq-int.h\n*** src/interfaces/libpq/libpq-int.h\t14 Oct 2002 17:15:11 -0000\t1.59\n--- src/interfaces/libpq/libpq-int.h\t16 Oct 2002 02:48:12 -0000\n***************\n*** 340,346 ****\n extern int\tpqFlush(PGconn *conn);\n extern int\tpqSendSome(PGconn *conn);\n extern int\tpqWait(int forRead, int forWrite, PGconn *conn);\n! 
extern int\tpqWaitTimed(int forRead, int forWrite, PGconn *conn, const struct timeval *timeout);\n extern int\tpqReadReady(PGconn *conn);\n extern int\tpqWriteReady(PGconn *conn);\n \n--- 340,347 ----\n extern int\tpqFlush(PGconn *conn);\n extern int\tpqSendSome(PGconn *conn);\n extern int\tpqWait(int forRead, int forWrite, PGconn *conn);\n! extern int\tpqWaitTimed(int forRead, int forWrite, PGconn *conn, \n! \t\t\t\t\t\ttime_t finish_time);\n extern int\tpqReadReady(PGconn *conn);\n extern int\tpqWriteReady(PGconn *conn);", "msg_date": "Tue, 15 Oct 2002 22:54:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian wrote:\n> Patch applied. I am applying it so it is in CVS and everyone can see\n> it. I will keep modifying it until everyone likes it. It is just\n> easier to do it that way when multiple people are reviewing it. They\n> can jump in and make changes too.\n\nI ran the same test as before, with the following results:\n\n\ncurrent cvs\n-----------\ngood connect info, using hostaddr, timeout = 1 second\n=====================================================\nunsuccessful 0 times: avg n/a\nsuccessful 50000 times: avg 0.0087\n\nbad connect info, using hostaddr, timeout = 1 second\n====================================================\nunsuccessful 100 times: avg 1.4998\nsuccessful 0 times: sum 0: avg n/a\n\n\nSeems to work well. But one slight concern:\n\nwith previous 2 line patch\n--------------------------\ngood connect info, using hostaddr, timeout = 1 || 2 second(s)\n=============================================================\nunsuccessful 0 times: avg n/a\nsuccessful 20000 times: avg 0.0074\n\nThese tests were on the same, otherwise unloaded development box. 
Not sure if \nit is an artifact or not, but the average connection time has gone from 0.0074 \nto 0.0087, an increase of about 17%. Also worth noting is that there was very \nlittle deviation from connect-to-connect on both of the tests (eye-balled the \ntotal range at about 0.0003). I did not bother calculating standard deviation \nof the connect times, but I'm certain it would not be enough to account for \nthe difference. Could anything in Bruce's patch account for this, or do you \nthink it is normal variation due to something on my dev box?\n\nJoe\n\n", "msg_date": "Tue, 15 Oct 2002 22:15:27 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Joe Conway wrote:\n> Seems to work well. But one slight concern:\n> \n> with previous 2 line patch\n> --------------------------\n> good connect info, using hostaddr, timeout = 1 || 2 second(s)\n> =============================================================\n> unsuccessful 0 times: avg n/a\n> successful 20000 times: avg 0.0074\n> \n> These tests were on the same, otherwise unloaded development box. Not sure if \n> it is an artifact or not, but the average connection time has gone from 0.0074 \n> to 0.0087, an increase of about 17%. Also worth noting is that there was very \n> little deviation from connect-to-connect on both of the tests (eye-balled the \n> total range at about 0.0003). I did not bother calculating standard deviation \n> of the connect times, but I'm certain it would not be enough to account for \n> the difference. Could anything in Bruce's patch account for this, or do you \n> think it is normal variation due to something on my dev box?\n\nYes, the new code has _three_ time() calls, rather than the old code\nthat I think only had two. 
I was going to mention it but I figured\ntime() was a pretty light system call, sort of like getpid().\n\nI needed the additional time() calls so the computation of remaining\ntime was more accurate, i.e. we are not resetting the timer on a\nselect() EINTR anymore.\n\nShould I try to rework it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 01:23:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian wrote:\n> Yes, the new code has _three_ time() calls, rather than the old code\n> that I think only had two. I was going to mention it but I figured\n> time() was a pretty light system call, sort of like getpid().\n> \n> I needed the additional time() calls so the computation of remaining\n> time was more accurate, i.e. we are not resetting the timer on a\n> select() EINTR anymore.\n> \n> Should I try to rework it?\n> \n\nI tried two more runs of 10000, and the average is pretty steady at 0.0087. \nHowever the total range is a fair bit wider than I originally reported.\n\nI added a forth time() call to see what the effect would be. It increased the \naverage to 0.0089 (two runs of 10000 connects each), so I don't think the \ntime() call explains the entire difference.\n\nNot sure this is worth worrying about or not. 
I'd guess anyone serious about \nkeeping connect time to a minimum uses some kind of connection pool or \npersistent connection anyway.\n\nJoe\n\n\n", "msg_date": "Tue, 15 Oct 2002 22:35:58 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> > Yes, the new code has _three_ time() calls, rather than the old code\n> > that I think only had two. I was going to mention it but I figured\n> > time() was a pretty light system call, sort of like getpid().\n> > \n> > I needed the additional time() calls so the computation of remaining\n> > time was more accurate, i.e. we are not resetting the timer on a\n> > select() EINTR anymore.\n> > \n> > Should I try to rework it?\n> > \n> \n> I tried two more runs of 10000, and the average is pretty steady at 0.0087. \n> However the total range is a fair bit wider than I originally reported.\n> \n> I added a forth time() call to see what the effect would be. It increased the \n> average to 0.0089 (two runs of 10000 connects each), so I don't think the \n> time() call explains the entire difference.\n> \n> Not sure this is worth worrying about or not. I'd guess anyone serious about \n> keeping connect time to a minimum uses some kind of connection pool or \n> persistent connection anyway.\n\nWell, the fact you see a change of 0.0002 is significant. Let me add\nthat in the old code there was only one time() call _in_ the loop, while\nnow, there are two, so I can easily see there are several additional\ntime() calls. Did you put your calls in the while loop?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 01:40:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian wrote:\n> Well, the fact you see a change of 0.0002 is significant. Let me add\n> that in the old code there was only one time() call _in_ the loop, while\n> now, there are two, so I can easily see there are several additional\n> time() calls. Did you put your calls in the while loop?\n> \n\nNot the first time, but I moved it around (into the loop in connectDBComplete, \nand just before select() in pqWaitTimed) and it didn't seem to make any \ndifference. Then I removed it entirely again, and still got 0.0089 seconds \naverage. So, at the least, 0.0002 worth of variation seems to be related to \nthe development machine itself.\n\nJoe\n\np.s. The good news is that with tens of thousands more tests at 1 second \ntimeout, still zero connection failures!\n\n\n", "msg_date": "Tue, 15 Oct 2002 23:00:02 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "On Mon, Oct 14, 2002 at 01:00:07AM -0400, Bruce Momjian wrote:\n> Denis A Ustimenko wrote:\n> > On Sun, Oct 13, 2002 at 09:02:55PM -0700, Joe Conway wrote:\n> > > Denis A Ustimenko wrote:\n> > > >>Bruce, why have all precise time calculations been droped out in 1.206? 
\n> > > >>If there is no\n> > > >>gettimeofday in win32?\n> > > \n> > > gettimeofday was not portable to win32 (at least not that I could find) and \n> > > hence broke the win32 build of the clients.\n> > > \n> > \n> > GetSystemTimeAsFileTime should help.\n> > \n> > http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sysinfo/base/getsystemtimeasfiletime.asp\n> \n> It's not clear to me how we could get this into something we can deal\n> with like gettimeofday.\n> \n> I looked at the Apache APR project, and they have a routine that returns\n> the microseconds since 1970 for Unix:\n> \t\n\nHere is my version of gettimeofday for win32. It was tested with Watcom C 11.0c. I think it can be used. I still believe that fine time calculation is the right way.\n\n#include<stdio.h>\n#ifdef _WIN32\n#include<winsock.h>\n#else\n#include<sys/time.h>\n#endif\n\nmain()\n{\n struct timeval t;\n if (gettimeofday(&t,NULL)) {\n printf(\"error\\n\\r\");\n } else {\n printf(\"the time is %ld.%ld\\n\\r\", t.tv_sec, t.tv_usec);\n }\n fflush(stdout);\n}\n\n#ifdef _WIN32\nint gettimeofday(struct timeval *tp, void *tzp)\n{\n FILETIME time;\n __int64 tmp;\n\n if ( NULL == tp) return -1;\n\n GetSystemTimeAsFileTime(&time);\n\n tmp = time.dwHighDateTime;\n tmp <<= 32;\n tmp |= time.dwLowDateTime;\n tmp /= 10; // it was in 100 nanosecond periods\n tp->tv_sec = tmp / 1000000 - 11644473600L; // Windows Epoch begins at 12:00 AM 01.01.1601\n tp->tv_usec = tmp % 1000000;\n return 0;\n}\n#endif\n\n\n\n-- \nRegards\nDenis\n", "msg_date": "Wed, 16 Oct 2002 16:50:47 +0700", "msg_from": "Denis A Ustimenko <denis@oldham.ru>", "msg_from_op": true, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, the new code has _three_ time() calls, rather than the old code\n> that I think only had two.
I was going to mention it but I figured\n> time() was a pretty light system call, sort of like getpid().\n> I needed the additional time() calls so the computation of remaining\n> time was more accurate, i.e. we are not resetting the timer on a\n> select() EINTR anymore.\n\nAs long as the time() calls aren't invoked in the default no-timeout\ncase, I doubt that the small additional slowdown matters too much.\nStill, one could ask why we are expending extra cycles to make the\ntimeout more accurate. Who the heck needs an accurate timeout on\nconnect? Can you really give a use-case where the user won't have\npicked a number out of the air anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Oct 2002 10:03:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, the new code has _three_ time() calls, rather than the old code\n> > that I think only had two. I was going to mention it but I figured\n> > time() was a pretty light system call, sort of like getpid().\n> > I needed the additional time() calls so the computation of remaining\n> > time was more accurate, i.e. we are not resetting the timer on a\n> > select() EINTR anymore.\n> \n> As long as the time() calls aren't invoked in the default no-timeout\n> case, I doubt that the small additional slowdown matters too much.\n> Still, one could ask why we are expending extra cycles to make the\n> timeout more accurate. Who the heck needs an accurate timeout on\n> connect? Can you really give a use-case where the user won't have\n> picked a number out of the air anyway?\n\nI think we do need to properly compute the timeout on an EINTR of\nselect() because if we don't, a 30 second timeout could become 90\nseconds if select() is interrupted.
The other time() calls are needed,\none above the loop, and one inside the loop.\n\nThe only thing I can do is to pass in the end time _and_ the duration to\npqWaitTimeout and do the time() recomputation only on EINTR. This would\ncompute duration in the loop where I call time() and therefore eliminate\na time() call in the normal, non-EINTR case. Of course, it makes the\ncode more complicated, and that is why I avoided it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 10:56:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, the new code has _three_ time() calls, rather than the old code\n> > that I think only had two. I was going to mention it but I figured\n> > time() was a pretty light system call, sort of like getpid().\n> > I needed the additional time() calls so the computation of remaining\n> > time was more accurate, i.e. we are not resetting the timer on a\n> > select() EINTR anymore.\n> \n> As long as the time() calls aren't invoked in the default no-timeout\n> case, I doubt that the small additional slowdown matters too much.\n> Still, one could ask why we are expending extra cycles to make the\n> timeout more accurate. Who the heck needs an accurate timeout on\n> connect? Can you really give a use-case where the user won't have\n> picked a number out of the air anyway?\n\nYes, the default no-timeout case makes no time() calls.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 11:01:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Still, one could ask why we are expending extra cycles to make the\n>> timeout more accurate. Who the heck needs an accurate timeout on\n>> connect? Can you really give a use-case where the user won't have\n>> picked a number out of the air anyway?\n\n> I think we do need to properly compute the timeout on an EINTR of\n> select() because if we don't, a 30 second timeout could become 90\n> seconds if select() is interrupted. The other time() calls are needed,\n> one above the loop, and one inside the loop.\n\nAFAICS we need one time() call at the start, and then one inside the\nselect loop. I haven't looked at your recent patches, but you said\nsomething about putting two calls in the loop; that seems like overkill.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Oct 2002 11:02:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "\nI will keep this in case we need it later. I think we worked around\nthis problem by having timeout == 1 set equal to 2 so we get at least\none second for the connection.\n\n---------------------------------------------------------------------------\n\nDenis A Ustimenko wrote:\n> On Mon, Oct 14, 2002 at 01:00:07AM -0400, Bruce Momjian wrote:\n> > Denis A Ustimenko wrote:\n> > > On Sun, Oct 13, 2002 at 09:02:55PM -0700, Joe Conway wrote:\n> > > > Denis A Ustimenko wrote:\n> > > > >>Bruce, why have all precise time calculations been droped out in 1.206?
\n> > > > >>If there is no\n> > > > >>gettimeofday in win32?\n> > > > \n> > > > gettimeofday was not portable to win32 (at least not that I could find) and \n> > > > hence broke the win32 build of the clients.\n> > > > \n> > > \n> > > GetSystemTimeAsFileTime should help.\n> > > \n> > > http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sysinfo/base/getsystemtimeasfiletime.asp\n> > \n> > It's not clear to me how we could get this into something we can deal\n> > with like gettimeofday.\n> > \n> > I looked at the Apache APR project, and they have a routine that returns\n> > the microseconds since 1970 for Unix:\n> > \t\n> \n> Here is my version of gettimeofday for win32. It was tested with Watcom C 11.0c. I think it can be used. I still believe that fine time calculation is the right way.\n> \n> #include<stdio.h>\n> #ifdef _WIN32\n> #include<winsock.h>\n> #else\n> #include<sys/time.h>\n> #endif\n> \n> main()\n> {\n> struct timeval t;\n> if (gettimeofday(&t,NULL)) {\n> printf(\"error\\n\\r\");\n> } else {\n> printf(\"the time is %ld.%ld\\n\\r\", t.tv_sec, t.tv_usec);\n> }\n> fflush(stdout);\n> }\n> \n> #ifdef _WIN32\n> int gettimeofday(struct timeval *tp, void *tzp)\n> {\n> FILETIME time;\n> __int64 tmp;\n> \n> if ( NULL == tp) return -1;\n> \n> GetSystemTimeAsFileTime(&time);\n> \n> tmp = time.dwHighDateTime;\n> tmp <<= 32;\n> tmp |= time.dwLowDateTime;\n> tmp /= 10; // it was in 100 nanosecond periods\n> tp->tv_sec = tmp / 1000000 - 11644473600L; // Windows Epoch begins at 12:00 AM 01.01.1601\n> tp->tv_usec = tmp % 1000000;\n> return 0;\n> }\n> #endif\n> \n> \n> \n> -- \n> Regards\n> Denis\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 11:02:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Still, one could ask why we are expending extra cycles to make the\n> >> timeout more accurate. Who the heck needs an accurate timeout on\n> >> connect? Can you really give a use-case where the user won't have\n> >> picked a number out of the air anyway?\n> \n> > I think we do need to properly compute the timeout on an EINTR of\n> > select() because if we don't, a 30 second timeout could become 90\n> > seconds if select() is interrupted. The other time() calls are needed,\n> > one above the loop, and one inside the loop.\n> \n> AFAICS we need one time() call at the start, and then one inside the\n> select loop. I haven't looked at your recent patches, but you said\n> something about putting two calls in the loop; that seems like overkill.\n\nYes, one call at the start, and one in the loop. We need another in\npqWaitTimeout, but only if we hit EINTR. As the code stands now we do\ntime() unconditionally in pqWaitTimeout too because we only have to pass\nin the finish time. If we want to pass in both finish and duration (for\nuse as select() timeout param), then we can eliminate the time() call in\nthere for the non-EINTR case. Is it worth the added code complexity? \nThat is what I am not sure about.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 11:05:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" } ]
[ { "msg_contents": "\nOK, I would like to shoot for 7.3 final in the next few weeks. Can we\nget some of these items completed so we can make that happen and move on\nto 7.4? Remember, PITR and Win32 are waiting!\n\nLet me comment on these:\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://momjian.postgresql.org/pub/postgresql/open_items.\n\t\n\tRequired Changes\n\t-------------------\n\tSchema handling - ready? interfaces? client apps?\n\nWhat specifically still needs to be done here?\n\n\tDrop column handling - ready for all clients, apps?\n\nSame.\n\n\tGet bison upgrade on postgresql.org for ecpg only (Marc)\n\nOK, bison 1.50 is officially released. I am using it here and it passes\nthe regression tests. Marc, would you download that on to\npostgresql.org and then Michael can get ecpg fully working, and we can\nget this item removed from the list.\n\n\tFix vacuum btree bug (Tom)\n\nOK, Tom, I know you were looking for a better fix than locking down the\nindex pages to fix this. Perhaps you should throw out the question one\nmore time, and if no one has a better idea, let's code the fix and\ndocument the problem in the code so maybe it can get improved at some\nlater date.\n\n\tFix client apps for autocommit = off\n\nOK, exactly what needs to be done here? If we add 'SET autocommit =\noff' to the top of the apps, is that all we need to do? For apps that\nconnect to the database multiple times, do we have to do it multiple\ntimes?\n\n\tFix pg_dump to handle 64-bit off_t offsets for custom format\n\nWhere are we on this?\n\t\n\tOptional Changes\n\t----------------\n\nNone of these have to be done before final.\n\n\tFix BeOS, QNX4 ports\n\nI am removing this one. No one is volunteering.\n\n\tReturn last command of INSTEAD rule for tuple count/oid/tag for rules, SPI\n\nIf we are going to fix it, this week is the time.
If not, it stays on\nTODO.\n\n\tAdd schema dump option to pg_dump\n\nSame.\n\n\tAdd/remove GRANT EXECUTE to all /contrib functions?\n\nWho volunteered to clean all this up a while back?\n\n\tMissing casts for bit operations\n\nRequires initdb, as does INSTEAD fix.\n\n\t\\copy doesn't handle column names\n\tCOPY doesn't handle schemas\n\tCOPY quotes all table names\n\nAgain, all optional.\n\t\n\tOn Going\n\t--------\n\tSecurity audit\n\nHaven't heard anything on this for a while. Removed.\n\t\n\t\n\tDocumentation Changes\n\t---------------------\n\tMove documentation to gborg for moved projects\n\nThis needs to be done. What plan do we have for those gborg docs to\ndeal with SGML?\n\nThis is the week, folks --- let's get these done or moved to TODO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 00:49:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Let's get 7.3 done" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \tReturn last command of INSTEAD rule for tuple count/oid/tag for rules, SPI\n\n> If we are going to fix it, this week is the time. If not, it stays on\n> TODO.\n\nIf we've agreed on the behavior, I'll try to take care of making it\nhappen. Note that will mean an initdb for 7.3b3.\n\n> \tAdd/remove GRANT EXECUTE to all /contrib functions?\n\n> Who volunteered to clean all this up a while back?\n\nI think that now that we changed the default permissions for functions,\nthis is a dead issue. There's no need to put explicit GRANTs into the\ncontrib files.
OTOH, taking out the ones that were added already is\na waste of cycles too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 10:49:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Let's get 7.3 done " }, { "msg_contents": "At 12:49 AM 14/10/2002 -0400, Bruce Momjian wrote:\n> Fix pg_dump to handle 64-bit off_t offsets for custom format\n\nI'll try to get back to this in the next day or so...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Tue, 15 Oct 2002 09:44:44 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Let's get 7.3 done" } ]
[ { "msg_contents": "Hi there,\nmy question is short, quite simple and \ndefinitely a hacker thing (hopefully).\n\nFIRST of all:\nPOSTGRES is a great database system! Thanks \na lot for your perfect work.\n\nNow my Question:\nWouldn't it be possible to change the \ndefault setting of NAMEDATALEN in\nall distributions to a higher value,\nlet's say 128?\n\nOr a bit better:\nGet the length of NAMEDATALEN by a \nselect statement for all applications \nusing postgres being a bit more flexible?\n\nReason:\nI think that there are several systems out\nthere running with higher values to make\nreading (and understanding) table- and \nrownames a bit easier.\n\nAll linux packages are compiled with that standard \nvalue as well as all applications (like psqlodbc, e.g.) \nare actually designed - also by default - for a length \nof 32, so changing NAMEDATALEN and recompiling \npostgresql doesn't solve anything, afterwards you \nhave to contact odbc-developers, tool-developers \nand so on for a hint how to make their systems \ncope with that new value.\n\nRegarding decreasing cpu-speed and increasing hardware \ncost, shouldn't it be possible to set a higher value \nfor NAMEDATALEN by default, so that especially complex \npostgres-databases can be developed on a standard compilation,\nand the development of applications would be made a bit more\ncomfortable?\n\nRegards and thanks for your time\n\njochen\n\n", "msg_date": "Mon, 14 Oct 2002 17:40:56 +0200", "msg_from": "Jochen Westland <jochen.westland@invigo.de>", "msg_from_op": true, "msg_subject": "Default setting of NAMEDATALEN" }, { "msg_contents": "\nNew NAMEDATALEN default will be 64 in 7.3. We are in beta now. We did\nsee a performance hit with values greater than 64.
It is too hard to make\nit changeable after compilation.\n\n---------------------------------------------------------------------------\n\nJochen Westland wrote:\n> Hi there,\n> my question is short, quite simple and \n> definitely a hacker thing (hopefully).\n> \n> FIRST of all:\n> POSTGRES is a great database system! Thanks \n> a lot for your perfect work.\n> \n> Now my Question:\n> Wouldn't it be possible to change the \n> default setting of NAMEDATALEN in\n> all distributions to a higher value,\n> let's say 128?\n> \n> Or a bit better:\n> Get the length of NAMEDATALEN by a \n> select statement for all applications \n> using postgres being a bit more flexible?\n> \n> Reason:\n> I think that there are several systems out\n> there running with higher values to make\n> reading (and understanding) table- and \n> rownames a bit easier.\n> \n> All linux packages are compiled with that standard \n> value as well as all applications (like psqlodbc, e.g.) \n> are actually designed - also by default - for a length \n> of 32, so changing NAMEDATALEN and recompiling \n> postgresql doesn't solve anything, afterwards you \n> have to contact odbc-developers, tool-developers \n> and so on for a hint how to make their systems \n> cope with that new value.\n> \n> Regarding decreasing cpu-speed and increasing hardware \n> cost, shouldn't it be possible to set a higher value \n> for NAMEDATALEN by default, so that especially complex \n> postgres-databases can be developed on a standard compilation,\n> and the development of applications would be made a bit more\n> comfortable?\n> \n> Regards and thanks for your time\n> \n> jochen\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, |
13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 21 Oct 2002 11:29:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting of NAMEDATALEN" } ]
[ { "msg_contents": "Hi everyone,\n\nThe Turkish translation of the PostgreSQL \"Advocacy and Marketing\" site,\ndone by Devrim GUNDUZ <devrim@oper.metu.edu.tr> is now complete and\nready for public use:\n\nhttp://advocacy.postgresql.org/?lang=tr\n\nPretty cool stuff. Thanks Devrim. :-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 15 Oct 2002 03:01:32 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Turkish version of the PostgreSQL \"Advocacy and Marketing\" site is\n\tready" } ]
[ { "msg_contents": "\nOops, overoptimized a little. ptmp_timeout is needed in case no time is\npassed; ptmp_timeout restored.\n\n---------------------------------------------------------------------------\n\npgman wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > That whole remains structure should be a time_t variable, and then we\n> > > _know_ we can't assume it is signed. The use of timeval should\n> > > happen only in pqWaitTimed because it has to use select().\n> > \n> > I think it's fine to use struct timeval as the parameter type for\n> > pqWaitTimed. This particular caller of pqWaitTimed has no need for\n> > sub-second wait precision, but that doesn't mean we might not want it\n> > for other purposes later.\n> \n> That was a question: whether pqWaitTimed() was something exported by\n> libpq and therefore something that has an API that shouldn't change. I\n> see it in libpq-int.h, which I think means it isn't exported, but yes,\n> there could be later cases where we need subsecond stuff.\n> \n> I have applied the following patch to get us a little closer to sanity.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 13:34:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" } ]
[ { "msg_contents": "I have applied the following comment patch. The current code resets the\ntimer when select() is interrupted. On OS's that modify timeout to show\nthe remaining time, we should be using that value instead of resetting\nthe timer to its original value on select retry.\n\n---------------------------------------------------------------------------\n\npgman wrote:\n> \n> Oops, overoptimized a little. ptmp_timeout is needed in case no time is\n> passed; ptmp_timeout restored.\n> \n> ---------------------------------------------------------------------------\n> \n> pgman wrote:\n> > Tom Lane wrote:\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > That whole remains structure should be a time_t variable, and then we\n> > > > _know_ we can't assume it is signed. The use of timeval should\n> > > > happen only in pqWaitTimed because it has to use select().\n> > > \n> > > I think it's fine to use struct timeval as the parameter type for\n> > > pqWaitTimed. This particular caller of pqWaitTimed has no need for\n> > > sub-second wait precision, but that doesn't mean we might not want it\n> > > for other purposes later.\n> > \n> > That was a question: whether pqWaitTimed() was something exported by\n> > libpq and therefore something that has an API that shouldn't change. I\n> > see it in libpq-int.h, which I think means it isn't exported, but yes,\n> > there could be later cases where we need subsecond stuff.\n> > \n> > I have applied the following patch to get us a little closer to sanity.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n\nIndex: src/interfaces/libpq/fe-misc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.82\ndiff -c -c -r1.82 fe-misc.c\n*** src/interfaces/libpq/fe-misc.c\t14 Oct 2002 17:33:08 -0000\t1.82\n--- src/interfaces/libpq/fe-misc.c\t14 Oct 2002 18:08:14 -0000\n***************\n*** 824,830 ****\n \t\t{\n \t\t\t/*\n \t\t\t * \tselect() may modify timeout argument on some platforms so\n! \t\t\t *\tuse copy\n \t\t\t */\n \t\t\ttmp_timeout = *timeout;\n \t\t\tptmp_timeout = &tmp_timeout;\n--- 824,835 ----\n \t\t{\n \t\t\t/*\n \t\t\t * \tselect() may modify timeout argument on some platforms so\n! \t\t\t *\tuse copy.\n! \t\t\t *\tXXX Do we really want to do that? If select() returns\n! \t\t\t *\tthe number of seconds remaining, we are resetting\n! \t\t\t *\tthe timeout to its original value. This will yeild\n! \t\t\t *\tincorrect timings when select() is interrupted.\n! \t\t\t *\tbjm 2002-10-14\n \t\t\t */\n \t\t\ttmp_timeout = *timeout;\n \t\t\tptmp_timeout = &tmp_timeout;", "msg_date": "Mon, 14 Oct 2002 14:10:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> /*\n> * select() may modify timeout argument on some platforms so\n> ! * use copy.\n> ! * XXX Do we really want to do that? If select() returns\n> ! * the number of seconds remaining, we are resetting\n> ! * the timeout to its original value. This will yeild\n> ! * incorrect timings when select() is interrupted.\n> ! * bjm 2002-10-14\n> */\n> tmp_timeout = *timeout;\n> ptmp_timeout = &tmp_timeout;\n\n\nActually, now that I look at this, the API for PQwaitTimed is wrong\nafter all.
The right way to implement this is for the caller to pass in\nfinish_time (or some indication that no timeout is wanted). Evaluation\nof the time left to wait should happen inside this retry loop. That\nway, you get the right behavior (plus or minus one second, anyway)\nindependently of whether the platform's select() reduces its timeout\nargument or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 14:42:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > /*\n> > * select() may modify timeout argument on some platforms so\n> > ! * use copy.\n> > ! * XXX Do we really want to do that? If select() returns\n> > ! * the number of seconds remaining, we are resetting\n> > ! * the timeout to its original value. This will yeild\n> > ! * incorrect timings when select() is interrupted.\n> > ! * bjm 2002-10-14\n> > */\n> > tmp_timeout = *timeout;\n> > ptmp_timeout = &tmp_timeout;\n> \n> \n> Actually, now that I look at this, the API for PQwaitTimed is wrong\n> after all. The right way to implement this is for the caller to pass in\n> finish_time (or some indication that no timeout is wanted). Evaluation\n> of the time left to wait should happen inside this retry loop. That\n> way, you get the right behavior (plus or minus one second, anyway)\n> independently of whether the platform's select() reduces its timeout\n> argument or not.\n\nYes, you are saying do the time() inside PQwaitTimed(), so we can\nproperly get new time() values on select() retry. Yep.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 18:41:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: droped out precise time calculations in\n\tsrc/interfaces/libpq/fe-connect.c" } ]
[ { "msg_contents": "I'm about to go off and implement this, so just to make sure we are all\non the same page, here's what I think we agreed to:\n\nIn the presence of rewrite rules, the command status returned by a\nrewritable query will behave thusly:\n\n1. If the original query is executed (ie, there is no unconditional\nINSTEAD rule), then the original query's command status is returned\nin all cases. Queries added by rules are ignored.\n\n2. If the original query is suppressed by an unconditional INSTEAD\nrule, then return the command status of the last-executed query that\nis both (a) from an INSTEAD rule and (b) of the same type (SELECT,\nINSERT, UPDATE, DELETE) as the original query. If there is no such\nquery, return the original query's command type and zeroes for the\ncount and OID fields of the status.\n\nThis gives us several properties that were agreed to be desirable:\n\n* Returned command type always matches original query.\n\n* Non-INSTEAD rules never affect the command status.\n\n* The user can control which query sets the command status in the\nINSTEAD case, by ordering the rules properly.\n\n* Reasonably compatible with the pre-7.2 behavior, taking into\naccount that the old behavior was not predictable if you had more\nthan one rule anyway.\n\nNote that this will force an initdb for 7.3beta3, since we need to add\na field to Query nodes to keep track of whether they came from INSTEAD\nrules. So I will also apply the pg_cast additions that we noticed were\nmissing a couple days ago. Were there any other items that we would\nhave fixed for 7.3 but were holding off to 7.4 just because of wanting\nto avoid initdb?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 14:35:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Final(?) consensus on PQcmdStatus and rules" } ]
[ { "msg_contents": "The Internet Society (ISOC) won the bid to replace Verisign as the .org\nregistry. They're going to subcontract to Afilias, who has been running the\n.info registry.\n\nSee:\nhttp://www.icann.org/announcements/announcement-14oct02.htm\nhttp://www.afilias.info/about_afilias/dotorg/index_html\nhttp://www.washingtonpost.com/wp-dyn/articles/A23669-2002Oct14.html\n\nIs this the same group that recently asked for input on their proposal,\nwhich specified Postgres as the registry database?\n\n(For some reason I can't find the recent thread on their proposal on the\nhackers, general or admin lists.)\n\nDave De Graff\n\n\n", "msg_date": "Mon, 14 Oct 2002 11:57:06 -0700", "msg_from": "\"David De Graff\" <postgresql@awarehouse.com>", "msg_from_op": true, "msg_subject": "Postgres-based system to run .org?" } ]
[ { "msg_contents": "I just noticed that rewriteHandler.c contains a subroutine orderRules()\nthat reorders the rules for a relation into the order\n\tnon-instead rules\n\tqualified instead rules\n\tunqualified instead rules\nThis conflicts with the feature we'd added to 7.3 to fire rules in\nalphabetical order. (What will presently happen is they'll be fired\nalphabetically in each of these categories.)\n\nI see that the logic in fireRules() assumes that rules are processed in\nthis order, but that would be fairly easy to fix. Is there any other\ngood reason for doing this reordering? I'd like to remove orderRules()\nand implement straight alphabetical ordering.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 15:13:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "orderRules() now a bad idea?" }, { "msg_contents": "Tom Lane wrote:\n> I just noticed that rewriteHandler.c contains a subroutine orderRules()\n> that reorders the rules for a relation into the order\n> \tnon-instead rules\n> \tqualified instead rules\n> \tunqualified instead rules\n> This conflicts with the feature we'd added to 7.3 to fire rules in\n> alphabetical order. (What will presently happen is they'll be fired\n> alphabetically in each of these categories.)\n> \n> I see that the logic in fireRules() assumes that rules are processed in\n> this order, but that would be fairly easy to fix. Is there any other\n> good reason for doing this reordering? I'd like to remove orderRules()\n> and implement straight alphabetical ordering.\n\nUnless Jan has an objection, I think alpha is best, because it matches\ntrigger rule ordering. That original rule ordering isn't something\nanyone is going to figure out on their own.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 18:53:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: orderRules() now a bad idea?" }, { "msg_contents": "Tom Lane wrote:\n> \n> I just noticed that rewriteHandler.c contains a subroutine orderRules()\n> that reorders the rules for a relation into the order\n> non-instead rules\n> qualified instead rules\n> unqualified instead rules\n> This conflicts with the feature we'd added to 7.3 to fire rules in\n> alphabetical order. (What will presently happen is they'll be fired\n> alphabetically in each of these categories.)\n> \n> I see that the logic in fireRules() assumes that rules are processed in\n> this order, but that would be fairly easy to fix. Is there any other\n> good reason for doing this reordering? I'd like to remove orderRules()\n> and implement straight alphabetical ordering.\n\nI don't see a strong reason why not doing it the way you propose. It's\njust that you need to keep a version of the parsetree before you applied\nan unqualified instead rule just for the case that you later need to\napply one of the others. But this copy shall not make it into the final\nlist of queries.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Tue, 15 Oct 2002 09:33:38 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: orderRules() now a bad idea?" }, { "msg_contents": "Bruce Momjian writes:\n\n> Unless Jan has an objection, I think alpha is best, because it matches\n> trigger rule odering. That original rule ordering isn't something\n> anyone is going to figure out on their own.\n\nBut alphabetical? 
According to whose definition of the alphabet?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 15 Oct 2002 20:53:50 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: orderRules() now a bad idea?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> But alphabetical? According to whose definition of the alphabet?\n\nIt looks like NAME comparison uses strcmp (actually strncmp). So it'll\nbe numeric byte-code order.\n\nThere's no particular reason we couldn't make that be strcoll instead,\nI suppose, except perhaps speed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 16:36:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: orderRules() now a bad idea? " }, { "msg_contents": "Tom Lane writes:\n\n> > But alphabetical? According to whose definition of the alphabet?\n>\n> It looks like NAME comparison uses strcmp (actually strncmp). So it'll\n> be numeric byte-code order.\n>\n> There's no particular reason we couldn't make that be strcoll instead,\n> I suppose, except perhaps speed.\n\nBut how will this work when we have per-column/datum collation order?\nAnd what about languages that don't have any useful collation order for\ntheir alphabets (far east)? ISTM that a globally viable feature of this\nsort would have to sort by something numeric.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 17 Oct 2002 00:11:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: orderRules() now a bad idea? " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> It looks like NAME comparison uses strcmp (actually strncmp). 
So it'll\n>> be numeric byte-code order.\n>> There's no particular reason we couldn't make that be strcoll instead,\n>> I suppose, except perhaps speed.\n\n> But how will this work when we have per-column/datum collation order?\n> And what about languages that don't have any useful collation order for\n> their alphabets (far east)? ISTM that a globally viable feature of this\n> sort would have to sort by something numeric.\n\nI'm confused; are you saying that NAME's sort behavior is good as-is?\nIf not, what would you have it do differently?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 00:21:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: orderRules() now a bad idea? " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> It looks like NAME comparison uses strcmp (actually strncmp). So it'll\n> >> be numeric byte-code order.\n> >> There's no particular reason we couldn't make that be strcoll instead,\n> >> I suppose, except perhaps speed.\n> \n> > But how will this work when we have per-column/datum collation order?\n> > And what about languages that don't have any useful collation order for\n> > their alphabets (far east)? ISTM that a globally viable feature of this\n> > sort would have to sort by something numeric.\n> \n> I'm confused; are you saying that NAME's sort behavior is good as-is?\n> If not, what would you have it do differently?\n\nYes, exotic ordering of rules just doesn't seem warranted. I think it\nshould match the ordering of pg_class.name, which is strcmp() already.\n\nLet's do ASCII ordering (strcmp) and see how things go. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 17 Oct 2002 14:54:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: orderRules() now a bad idea?" }, { "msg_contents": "Tom Lane writes:\n\n> I'm confused; are you saying that NAME's sort behavior is good as-is?\n> If not, what would you have it do differently?\n\nWhat I am primarily saying is that ordering the rule execution order\nalphabetically is not a really good solution. Consequently, I would not\ngo out of my way to make code changes to pursue this goal.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 17 Oct 2002 21:17:07 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: orderRules() now a bad idea? " }, { "msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > I'm confused; are you saying that NAME's sort behavior is good as-is?\n> > If not, what would you have it do differently?\n> \n> What I am primarily saying is that ordering the rule execution order\n> alphabetically is not a really good solution. Consequently, I would not\n> go out of my way to make code changes to pursue this goal.\n\nWell, it seems to make the users happy, so that's good enough for me. \nThere was particular concern from users about what values are returned\nwhen multiple rules or multi-statement rules are fired, and ordering\nthem by ASCII order does give them the tools needed to get the job done.\nWe already order NAME by strcmp, so I don't see how we are breaking\nanything by doing the same for rules.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 17 Oct 2002 15:17:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: orderRules() now a bad idea?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I'm confused; are you saying that NAME's sort behavior is good as-is?\n>> If not, what would you have it do differently?\n\n> What I am primarily saying is that ordering the rule execution order\n> alphabetically is not a really good solution. Consequently, I would not\n> go out of my way to make code changes to pursue this goal.\n\nI think what you are really driving at is that you'd like to have some\nother mechanism than choice-of-rule-name for users to determine ordering\nof rule expansion. That's a fair enough objection, but you'd still need\nto get rid of orderRules() along the way. Unless you *like* ordering\nrestrictions that were made purely for implementation convenience?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 23:38:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: orderRules() now a bad idea? " } ]
[ { "msg_contents": "The Internet Society (ISOC) won the bid to replace Verisign as the .org\nregistry. They're going to subcontract to Afilias, who has been running the\n.info registry.\n\nSee:\nhttp://www.icann.org/announcements/announcement-14oct02.htm\nhttp://www.afilias.info/about_afilias/dotorg/index_html\nhttp://www.washingtonpost.com/wp-dyn/articles/A23669-2002Oct14.html\n\nIs this the same group that recently asked for input on their proposal,\nwhich specified Postgres as the registry database?\n\n(For some reason I can't find the recent thread on their proposal on the\nhackers, general or admin lists.)\n\nDave De Graff\n\n\n", "msg_date": "Mon, 14 Oct 2002 12:42:37 -0700", "msg_from": "\"David De Graff\" <postgresql@awarehouse.com>", "msg_from_op": true, "msg_subject": "Postgres-based system to run .org registry?" }, { "msg_contents": "\"David De Graff\" <postgresql@awarehouse.com> writes:\n> The Internet Society (ISOC) won the bid to replace Verisign as the .org\n> registry. They're going to subcontract to Afilias, who has been running the\n> .info registry.\n\nCool.\n\n> See:\n> http://www.icann.org/announcements/announcement-14oct02.htm\n> http://www.afilias.info/about_afilias/dotorg/index_html\n> http://www.washingtonpost.com/wp-dyn/articles/A23669-2002Oct14.html\n\n> Is this the same group that recently asked for input on their proposal,\n> which specified Postgres as the registry database?\n\nYes, same bunch. Andrew Sullivan, whose name you may have seen on the\nlists, is the DBA for the .info registry; I suppose he will be running\nthe .org registry now too ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 16:56:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry? " }, { "msg_contents": "On Mon, Oct 14, 2002 at 12:42:37PM -0700, David De Graff wrote:\n\n> Is this the same group that recently asked for input on their proposal,\n> which specified Postgres as the registry database?\n\nHi everyone,\n\nYes, this is us. (Sorry I've been inactive the last week. I was on\nvacation.)\n\nWhat follows is a strictly personal set of remarks. I don't speak\nfor Afilias, Liberty RMS, PIR, or ISOC. I'm just some guy. They\ndon't even like me. ;-)\n\nI want to take the opportunity to thank everyone in the PostgreSQL\ncommunity for the help they've offered, and for the fantastic\nsoftware. When I ask for help, the support I get is just tremendous,\nboth for marketing efforts and for general Postgres operation\nquestions I have.\n\nThis is a remarkable victory for PostgreSQL, by the way. As you can\nsee if you check out the public forums and the various, publicly\nposted remarks by various bidders (everything I know about in the\nbidding is posted on the ICANN site), people were \"gunning\" for\nPostgreSQL. There were many suggestions that PostgreSQL wasn't up to\nthe job. The Gartner Group, despite their natural tendency to be\nsuspicious of a thing \"nobody else\" is using, concluded that\nPostgreSQL was not too big a risk. That may sound like damning with\nfaint praise, when we all know that PostgreSQL can indeed handle this\nsort of task. But even such a hesitant endorsement from someone like\nGartner means that PostgreSQL is now regarded by the usual commercial\nsuspects as a \"real\" system. I can't believe that's a bad thing.\n\nI should note that we have had tremendous help from Geoff Davidson\nand the rest of the crew at PostgreSQL, Inc., and that Justic Clift\nhas been totally indefatiguable in finding clever ways of promoting\nPostgreSQL. Thanks, guys.\n\nI am very hopeful that this provides Liberty and Afilias with an\nopportunity to make additional contributions to PostgreSQL. \nNaturally, though, I don't know anything. I'll keep lobbying.\n\nI'm very proud to be associated with this development in PostgreSQL's\nhistory. It's only possible because of the PostgreSQL community: the\ntireless efforts of the contributors, and the astounding support that\nparticipants on lists like these give one another. Thank you.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 15 Oct 2002 08:48:36 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Mon, Oct 14, 2002 at 12:42:37PM -0700, David De Graff wrote:\n> \n> > Is this the same group that recently asked for input on their proposal,\n> > which specified Postgres as the registry database?\n> \n> Hi everyone,\n> \n> Yes, this is us. (Sorry I've been inactive the last week. I was on\n> vacation.)\n\nYou host .org now. No more vacations! ;-)\n\n> What follows is a strictly personal set of remarks. I don't speak\n> for Afilias, Liberty RMS, PIR, or ISOC. I'm just some guy. They\n> don't even like me. ;-)\n> \n> I want to take the opportunity to thank everyone in the PostgreSQL\n> community for the help they've offered, and for the fantastic\n> software. When I ask for help, the support I get is just tremendous,\n> both for marketing efforts and for general Postgres operation\n> questions I have.\n\nIt is a huge win. I remember when Yahoo started using MySQL, and\nSlashdot uses it; well, this is the same, but bigger in many ways\nbecause it is so pervasive. You may not go to Yahoo, and most people\n(non-techies) don't go to slashdot, but everyone uses .org names, so it\nis the kind of universal calling card we need to take PostgreSQL to the\nnext level.\n\nThe past six months has seen huge improvements in PostgreSQL adoption; \nmuch more than I expected. The popularity of the software continues to\namaze me, and its market growth seems unstoppable at this point.\n\nWe are showing up in places I never expected: .org registry, tons of\nbooks, conventions, everywhere. It is just a wave that keeps getting\nbigger and bigger. I am starting to imagine what Linus felt seeing\nLinux take off; you just sit around and wonder how it is all happening,\nand of course, it is happening because you offer a unique value to\nusers, and their sharing with others just makes it continue to grow.\n\nIn one word: Amazing!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 18:19:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "En Tue, 15 Oct 2002 18:19:36 -0400 (EDT)\nBruce Momjian <pgman@candle.pha.pa.us> escribió:\n\n> We are showing up in places I never expected: .org registry, tons of\n> books, conventions, everywhere. It is just a wave that keeps getting\n> bigger and bigger. I am starting to imagine what Linus felt seeing\n> Linux take off; you just sit around and wonder how it is all happening,\n> and of course, it is happening because you offer a unique value to\n> users, and their sharing with others just makes it continue to grow.\n> \n> In one word: Amazing!\n\nThis is not without good reason. PostgreSQL has shown repeatedly to be\nan excellent product, able to compete with most commercial products.\nI've just been asked to give a little administration course for people\nin an government organization, where Oracle has been disregarded for\ngiving little gain over what PostgreSQL gives. I am pleased to have\ndone such a good decision to stick with PostgreSQL a couple years ago.\n\nI will just continue with my crusade here, where everything appears to\nbe only Oracle and just a little MySQL.\n\nOf course none of this would be possible without the excellent quality\nof the work people here does. If there is something to be amazed at is\nthe people, their commitment and the quality of their work. One can\nonly say a big \"thanks!\".\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\nThou shalt study thy libraries and strive not to reinvent them without\ncause, that thy code may be short and readable and thy days pleasant\nand productive. (7th Commandment for C Programmers)\n", "msg_date": "Wed, 16 Oct 2002 00:02:17 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "On 15 Oct 2002 at 18:19, Bruce Momjian wrote:\n> We are showing up in places I never expected: .org registry, tons of\n> books, conventions, everywhere. It is just a wave that keeps getting\n> bigger and bigger. I am starting to imagine what Linus felt seeing\n> Linux take off; you just sit around and wonder how it is all happening,\n> and of course, it is happening because you offer a unique value to\n> users, and their sharing with others just makes it continue to grow.\n\nSigh.. I wish enough people could understand difference between cost and value \nand more importantly apply the understanding while making a judgement..\n\nBye\n Shridhar\n\n--\nI'm a soldier, not a diplomat. I can only tell the truth.\t\t-- Kirk, \"Errand of \nMercy\", stardate 3198.9\n\n", "msg_date": "Wed, 16 Oct 2002 12:09:34 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [GENERAL] Postgres-based system to run .org registry?" } ]
[ { "msg_contents": "After turning autocommit off on my test database, my cron scripts that \nvacuum the database are now failing.\n\nThis can be easily reproduced, turn autocommit off in your \npostgresql.conf, then launch psql and run a vacuum.\n\n[blind@blind databases]$ psql files\nWelcome to psql 7.3b2, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nfiles=# vacuum;\nERROR: VACUUM cannot run inside a BEGIN/END block\nfiles=#\n\nIt turns out that you need to commit/rollback first before you can issue \nthe vacuum command. While I understand why this is happening (psql is \nissuing some selects on startup which automatically starts a \ntransaction) it certainly isn't intuitive.\n\nDoes this mean that I need to change my cron scripts to do \"rollback; \nvacuum;\"?\n\nthanks,\n--Barry\n\n\n", "msg_date": "Mon, 14 Oct 2002 13:06:44 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "interesting side effect of autocommit = off" }, { "msg_contents": "Barry Lind wrote:\n> After turning autocommit off on my test database, my cron scripts that \n> vacuum the database are now failing.\n> \n> This can be easily reproduced, turn autocommit off in your \n> postgresql.conf, then launch psql and run a vacuum.\n> \n> [blind@blind databases]$ psql files\n> Welcome to psql 7.3b2, the PostgreSQL interactive terminal.\n> \n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n> \n> files=# vacuum;\n> ERROR: VACUUM cannot run inside a BEGIN/END block\n> files=#\n> \n> It turns out that you need to commit/rollback first before you can issue \n> the vacuum command. While I understand why this is happening (psql is \n> issuing some selects on startup which automatically starts a \n> transaction) it certainly isn't intuitive.\n> \n> Does this mean that I need to change my cron scripts to do \"rollback; \n> vacuum;\"?\n\nOK, I can reproduce it here, but the issue is only reproducable if you\nuse autocommit off in postgresql.conf. If you run it interactively as\nyour first command, it is OK. \n\nI am sure the problem is that psql doing a query on startup:\n\n\n\t$ sql -E test\n\t********* QUERY **********\n\tSELECT usesuper FROM pg_catalog.pg_user WHERE usename = 'postgres'\n\t**************************\n\nFortunately, we have an open item for 7.3 for this exact case:\n\n\tFix client apps for autocommit = off\n\nand psql is one of them. I was just asking what we need to do to get\nthis addressed. I think the fix will be in within the next few days.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 19:00:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: interesting side effect of autocommit = off" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am sure the problem is that psql doing a query on startup:\n\nYeah, and libpq does one too in some cases :-(. Both of these need to\nbe fixed before 7.3 if possible.\n\nWhether we fix these or not, it'd be a good idea to document that\nturning autocommit off in postgresql.conf is not yet well-supported.\nI doubt that all client-side code will be happy with that for awhile\nyet ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Oct 2002 19:59:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: interesting side effect of autocommit = off " }, { "msg_contents": "Tom Lane wrote:\n> Yeah, and libpq does one too in some cases :-(. Both of these need to\n> be fixed before 7.3 if possible.\n> \n> Whether we fix these or not, it'd be a good idea to document that\n> turning autocommit off in postgresql.conf is not yet well-supported.\n> I doubt that all client-side code will be happy with that for awhile\n> yet ...\n\nYup -- here's another example. I was playing around with autocommit off in \npostgresql.conf to see the effect on dblink. Just now I tried to use \npg_dumpall in preparation for an initdb, and got this:\n\n$ pg_dumpall > cur.2002.10.14.dmp\npg_dump: WARNING: BEGIN: already a transaction in progress\npg_dump: could not set transaction isolation level to serializable: ERROR: \nSET TRANSACTION ISOLATION LEVEL must be called before any query\npg_dumpall: pg_dump failed on dblink_test_master, exiting\n\nJoe\n\n", "msg_date": "Mon, 14 Oct 2002 19:00:46 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: interesting side effect of autocommit = off" } ]
[ { "msg_contents": "Yep, that's them. This is a big win from a PostgreSQL advocacy position,\nespecially since oracle pr made an official statement against the use of\nPostgreSQL. Has this info hit any of the linux oriented news sites\n(linux-today, slashdot, etc...) If not someone from the PostgreSQL\nmarketing dept. (wink wink) should come up with a press release.\n\nRobert Treat\n\n----- Original Message ----- \nFrom: \"David De Graff\" <postgresql@awarehouse.com>\nTo: <pgsql-hackers@postgresql.org>; <pgsql-admin@postgresql.org>;\n<pgsql-general@postgresql.org>\nSent: Monday, October 14, 2002 3:42 PM\nSubject: [GENERAL] Postgres-based system to run .org registry?\n\n\n> The Internet Society (ISOC) won the bid to replace Verisign as the\n.org\n> registry. They're going to subcontract to Afilias, who has been\nrunning the\n> .info registry.\n> \n> See:\n> http://www.icann.org/announcements/announcement-14oct02.htm\n> http://www.afilias.info/about_afilias/dotorg/index_html\n> http://www.washingtonpost.com/wp-dyn/articles/A23669-2002Oct14.html\n> \n> Is this the same group that recently asked for input on their\nproposal,\n> which specified Postgres as the registry database?\n> \n> (For some reason I can't find the recent thread on their proposal on\nthe\n> hackers, general or admin lists.)\n> \n> Dave De Graff\n\n\n\n\n", "msg_date": "14 Oct 2002 16:08:09 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "It's on Slashdot, but there's only one post there that mentions the use of \nPostgresql.\n\nOn 14 Oct 2002, Robert Treat wrote:\n\n> Yep, that's them. This is a big win from a PostgreSQL advocacy position,\n> especially since oracle pr made an official statement against the use of\n> PostgreSQL. Has this info hit any of the linux oriented news sites\n> (linux-today, slashdot, etc...) If not someone from the PostgreSQL\n> marketing dept. (wink wink) should come up with a press release.\n> \n> Robert Treat\n> \n> ----- Original Message ----- \n> From: \"David De Graff\" <postgresql@awarehouse.com>\n> To: <pgsql-hackers@postgresql.org>; <pgsql-admin@postgresql.org>;\n> <pgsql-general@postgresql.org>\n> Sent: Monday, October 14, 2002 3:42 PM\n> Subject: [GENERAL] Postgres-based system to run .org registry?\n> \n> \n> > The Internet Society (ISOC) won the bid to replace Verisign as the\n> .org\n> > registry. They're going to subcontract to Afilias, who has been\n> running the\n> > .info registry.\n> > \n> > See:\n> > http://www.icann.org/announcements/announcement-14oct02.htm\n> > http://www.afilias.info/about_afilias/dotorg/index_html\n> > http://www.washingtonpost.com/wp-dyn/articles/A23669-2002Oct14.html\n> > \n> > Is this the same group that recently asked for input on their\n> proposal,\n> > which specified Postgres as the registry database?\n> > \n> > (For some reason I can't find the recent thread on their proposal on\n> the\n> > hackers, general or admin lists.)\n> > \n> > Dave De Graff\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n", "msg_date": "Mon, 14 Oct 2002 14:14:58 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "On Monday 14 October 2002 04:08 pm, Robert Treat wrote:\n> Yep, that's them. This is a big win from a PostgreSQL advocacy position,\n> especially since oracle pr made an official statement against the use of\n> PostgreSQL. Has this info hit any of the linux oriented news sites\n> (linux-today, slashdot, etc...) If not someone from the PostgreSQL\n> marketing dept. (wink wink) should come up with a press release.\n\nI have submitted a story to LinuxToday. We'll see how that goes. Anyone want \nto take on Slashdot? :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 14 Oct 2002 16:31:25 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "Robert Treat wrote:\n> Yep, that's them. This is a big win from a PostgreSQL advocacy position,\n> especially since oracle pr made an official statement against the use of\n> PostgreSQL. Has this info hit any of the linux oriented news sites\n> (linux-today, slashdot, etc...) If not someone from the PostgreSQL\n> marketing dept. (wink wink) should come up with a press release.\n\nYes, this is a huge win, and something for our advocacy site, and for\nthe main PostgreSQL page too. Vince?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 14 Oct 2002 19:01:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "> Yep, that's them. This is a big win from a PostgreSQL advocacy position,\n> especially since oracle pr made an official statement against the use of\n> PostgreSQL. Has this info hit any of the linux oriented news sites\n> (linux-today, slashdot, etc...) If not someone from the PostgreSQL\n> marketing dept. (wink wink) should come up with a press release.\n>\n> Robert Treat\n\nDon't worry about Oracle, they don't need real customers. Their database \nmarket consists of a healthy balance of real and fictitious people. See \nhttp://slashdot.org/articles/02/04/17/179252.shtml?tid=98\n\nRegards,\n\tJeff Davis\n", "msg_date": "Mon, 14 Oct 2002 20:47:05 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "On Mon, 2002-10-14 at 16:14, scott.marlowe wrote:\n> It's on Slashdot, but there's only one post there that mentions the use of \n> Postgresql.\n> \n> On 14 Oct 2002, Robert Treat wrote:\n> \n> > Yep, that's them. This is a big win from a PostgreSQL advocacy position,\n> > especially since oracle pr made an official statement against the use of\n> > PostgreSQL. Has this info hit any of the linux oriented news sites\n> > (linux-today, slashdot, etc...) If not someone from the PostgreSQL\n> > marketing dept. (wink wink) should come up with a press release.\n\nAnybody have a link where I can find the /. or the Oracle statement?\n\n--\nKarl\n\n", "msg_date": "16 Oct 2002 15:06:13 -0400", "msg_from": "Karl DeBisschop <kdebisschop@alert.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "Karl DeBisschop wrote:\n> On Mon, 2002-10-14 at 16:14, scott.marlowe wrote:\n> \n>>It's on Slashdot, but there's only one post there that mentions the use of \n>>Postgresql.\n>>\n>>On 14 Oct 2002, Robert Treat wrote:\n>>\n>>\n>>>Yep, that's them. This is a big win from a PostgreSQL advocacy position,\n>>>especially since oracle pr made an official statement against the use of\n>>>PostgreSQL. Has this info hit any of the linux oriented news sites\n>>>(linux-today, slashdot, etc...) If not someone from the PostgreSQL\n>>>marketing dept. (wink wink) should come up with a press release.\n>>\n> \n> Anybody have a link where I can find the /. or the Oracle statement?\n\nHere's the Oracle statement:\n\nhttp://forum.icann.org/org-eval/gartner-report/msg00000.html\n\nHope that helps,\n\nMike Mascari\nmascarm@mascari.com\n\n", "msg_date": "Wed, 16 Oct 2002 15:14:25 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" }, { "msg_contents": "The slashdot article: \nhttp://slashdot.org/article.pl?sid=02/10/14/1720241&mode=nested&tid=95\n\nThe isoc page: http://www.isoc.org/dotorg/\n\nthe isoc proposal:\n\nhttp://www.isoc.org/dotorg/10-1.shtml\n\nthe Oracle post:\n\nhttp://forum.icann.org/org-eval/gartner-report/msg00000.html\n\nGood day.\n\n\nOn 16 Oct 2002, Karl DeBisschop wrote:\n\n> On Mon, 2002-10-14 at 16:14, scott.marlowe wrote:\n> > It's on Slashdot, but there's only one post there that mentions the use of \n> > Postgresql.\n> > \n> > On 14 Oct 2002, Robert Treat wrote:\n> > \n> > > Yep, that's them. This is a big win from a PostgreSQL advocacy position,\n> > > especially since oracle pr made an official statement against the use of\n> > > PostgreSQL. Has this info hit any of the linux oriented news sites\n> > > (linux-today, slashdot, etc...) If not someone from the PostgreSQL\n> > > marketing dept. (wink wink) should come up with a press release.\n> \n> Anybody have a link where I can find the /. or the Oracle statement?\n> \n> --\n> Karl\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "Wed, 16 Oct 2002 13:33:46 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgres-based system to run .org registry?" } ]
[ { "msg_contents": "Looking at a 7.2.3 dump of a database I've been using for development, I\nnoticed that a type that was used in a ALTER TABLE ADD COLUMN (for an\nexisting table) was CREATE TYPE **AFTER** the CREATE TABLE for said\ntable. \n\nI suspect this will give me grief on reload. \n\n(I can get around it, but I thought I'd mention the issue). \n\nBasically, I defined a comments table, then decided I wanted to use the\ncontrib/tsearch module, so I added the contrib/tsearch stuff, and then\ndid a ALTER TABLE ADD COLUMN comments_idx txtidx. \n\nIn the pg_dump (and pg_dump -s), I note that the CREATE TYPE for txtidx\nhappens AFTER the CREATE TABLE comments. \n\nI haven't tried to reload this dump script yet.\n\nLER\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "14 Oct 2002 23:06:00 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "PG_DUMP and Adding columns/Types" } ]
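The reload failure anticipated in the thread above can be sketched as follows. The statement order mirrors the dump described (CREATE TABLE before CREATE TYPE); the column name and the txtidx type come from the message, while the other columns and the type definition details are illustrative assumptions, not the actual contrib/tsearch script:

```sql
-- Order as it appears in the dump: the table references a type
-- that has not been created yet, so a plain reload fails here.
CREATE TABLE comments (
    id           integer,
    body         text,
    comments_idx txtidx    -- ERROR: type "txtidx" does not exist
);

-- The contrib/tsearch type only appears later in the dump
-- (definition sketched from memory, not the real tsearch.sql).
CREATE TYPE txtidx (
    internallength = variable,
    input  = txtidx_in,
    output = txtidx_out
);
```

One workaround is to install the contrib/tsearch objects into the target database before restoring the dump, so the type already exists when the CREATE TABLE runs.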
[ { "msg_contents": "I have encountered unexpected behavior of DROP USER in 7.2.1.\n\nOne would normally expect, that when DROP USER someuser is issued, all \nassociated data structures will be readjusted, especially ownership and access \nrights.\n\nThis however does not happen.\n\nAfter dropping a user that had ownership of tables, pg_dump complains:\n\n[...]\npg_dump: WARNING: owner of data type table_one appears to be invalid\npg_dump: WARNING: owner of table \"some_seq\" appears to be invalid\n[...]\n\nThe access rights to those tables remain\n\ndatabase=# \\z table_one\n Access privileges for database \"customer\"\n Table | Access privileges \n-------------+------------------------------\n table_one | {=,98=arwdRxt,maria=arwdRxt}\n(1 row)\n\nThere is no way to remove rights of this 'user' 98 using REVOKE etc.\n\nPerhaps full dump/reload will remove the rights, because that user will not be \nfound, but restore may fail due to the error conditions.\n\nAny resolution for this?\n\nRegards,\nDaniel\n\n", "msg_date": "Tue, 15 Oct 2002 11:04:07 +0300", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "DROP USER weirdness in 7.2.1" }, { "msg_contents": "Daniel Kalchev writes:\n\n> One would normally expect, that when DROP USER someuser is issued, all\n> associated data structures will be readjusted, especially ownership and access\n> rights.\n\nPerhaps, but the documentation states otherwise.\n\n> There is no way to remove rights of this 'user' 98 using REVOKE etc.\n>\n> Perhaps full dump/reload will remove the rights, because that user will not be\n> found, but restore may fail due to the error conditions.\n>\n> Any resolution for this?\n\nRecreate the user with the given ID and drop the objects manually.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 15 Oct 2002 20:54:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: DROP USER weirdness in 7.2.1" },
{ "msg_contents": ">>>Peter Eisentraut said:\n > Daniel Kalchev writes:\n > \n > > One would normally expect, that when DROP USER someuser is issued, all\n > > associated data structures will be readjusted, especially ownership and ac\n cess\n > > rights.\n > \n > Perhaps, but the documentation states otherwise.\n > \n[...]\n > > Any resolution for this?\n > \n > Recreate the user with the given ID and drop the objects manually.\n\nI was able to modify ownership of all tables using ALTER TABLE. However, the \nassociated pg_toastXXXX tables still remain with the wrong ownership.\n\nIn my case, I had to recreate the user, because it had to have rights in a \ndifferent database within the same postgres installation... Nevertheless, it \nwould be much more convenient, if ALL rights associated with the particular \nusers are dropped when the user is dropped and eventually all orphaned objects \nhave their owner set to the DBA (postgres).\n\nIt is not too difficult to imagine, that in real-world database installation \nusers would need to be created and dropped.\n\nDaniel\n\n", "msg_date": "Wed, 16 Oct 2002 10:48:02 +0300", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: DROP USER weirdness in 7.2.1 " } ]
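A minimal sketch of the workaround Peter describes, in 7.2-era syntax. The sysid 98 and the table name table_one come from the \z output above; the placeholder user name and the choice of postgres as the new owner are illustrative assumptions:

```sql
-- Recreate a user with the orphaned sysid so the leftover ACL entries
-- and ownerships resolve to a name again (placeholder name is made up).
CREATE USER cleanup_user SYSID 98;

-- Now the stale privileges can be revoked and ownership reassigned ...
REVOKE ALL ON table_one FROM cleanup_user;
ALTER TABLE table_one OWNER TO postgres;

-- ... and the placeholder user dropped again.
DROP USER cleanup_user;
```

This matches Daniel's follow-up: ALTER TABLE fixes the visible tables, though the associated pg_toast tables keep the old ownership in 7.2.1.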
[ { "msg_contents": "Hi everyone,\n\nThanks to the French members of the PostgreSQL Community (mainly\nFrançois Suter <dba@paragraf.ch>), the French translation of the\nPostgreSQL \"Advocacy and Marketing\" site is now complete and ready for\npublic use:\n\nhttp://advocacy.postgresql.org/?lang=fr\n\nThat's 4 completed languages at this point, with more coming along.\n\nLet's see how many more can be added... :)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 15 Oct 2002 19:33:10 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "French version of the PostgreSQL \"Advocacy and Marketing\" site is\n\tready" }, { "msg_contents": "On Tue, 2002-10-15 at 11:33, Justin Clift wrote:\n\n> That's 4 completed languages at this point, with more coming along.\n\nYo!\n\nGreat work!
Shouldn't these announcements also make it to the 'Latest\nNews' section of said website?\n\ncheers\n-- vbi\n\n-- \nthis email is protected by a digital signature http://fortytwo.ch/gpg\n\nNOTE: get my key here: http://www.google.com/search?q=mQGiBDx2a6ERBAC8l", "msg_date": "19 Oct 2002 13:41:31 +0200", "msg_from": "Adrian 'Dagurashibanipal' von Bidder <avbidder@fortytwo.ch>", "msg_from_op": false, "msg_subject": "Re: French version of the PostgreSQL \"Advocacy and" }, { "msg_contents": "Hi Adrian,\n\nYep, they probably should, but the backend web-interface for enabling\ntranslations isn't yet up to scratch, so co-ordinating the translations\nof new entries is kind of difficult for the next month or so.\n\n:-/\n\nRegards and best wishes,\n\nJustin Clift\n\n\nAdrian 'Dagurashibanipal' von Bidder wrote:\n> \n> On Tue, 2002-10-15 at 11:33, Justin Clift wrote:\n> \n> > That's 4 completed languages at this point, with more coming along.\n> \n> Yo!\n> \n> Great work! Shouldn't these announcements also make it to the 'Latest\n> News' section of said website?\n> \n> cheers\n> -- vbi\n> \n> --\n> this email is protected by a digital signature http://fortytwo.ch/gpg\n> \n> NOTE: get my key here: http://www.google.com/search?q=mQGiBDx2a6ERBAC8l\n> \n> ------------------------------------------------------------------------\n> Name: signature.asc\n> signature.asc Type: application/pgp-signature\n> Description: This is a digitally signed message part\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 31 Oct 2002 11:57:52 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: French version of the PostgreSQL \"Advocacy and" } ]
[ { "msg_contents": "In any contrib module 'make installcheck' runs infinite time...\n\nFor example, contrib/ltree\n% gmake installcheck\ngmake -C ../../src/test/regress pg_regress\ngmake[1]: Entering directory `/spool/home/teodor/pgsql/src/test/regress'\ngmake[1]: `pg_regress' is up to date.\ngmake[1]: Leaving directory `/spool/home/teodor/pgsql/src/test/regress'\n../../src/test/regress/pg_regress ltree\n(using postmaster on Unix socket, default port)\n============== dropping database \"regression\" ==============\nDROP DATABASE\n============== creating database \"regression\" ==============\nCREATE DATABASE\nALTER DATABASE\n============== dropping regression test user accounts ==============\n============== installing PL/pgSQL ==============\n============== running regression test queries ==============\ntest ltree ...\n\nDuring this time, in top:\nCPU states: 100% user, 0.0% nice, 0.0% system, 0.0% interrupt, 0.0% idle\nMem: 116M Active, 33M Inact, 24M Wired, 11M Cache, 29M Buf, 1528K Free\nSwap: 510M Total, 32M Used, 477M Free, 6% Inuse\nkill\n PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND\n18180 teodor 64 0 1852K 1248K RUN 1:15 95.07% 93.65% psql\n\npostmaster doesn't take a CPU time...\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Tue, 15 Oct 2002 20:06:38 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": true, "msg_subject": "Current CVS - something broken in contrib" }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> In any contrib module 'make installcheck' runs infinite time...\n\nLooks like my fault :-( ...
will have it fixed in a few minutes\n(I seem to have broken psql for COPY FROM STDIN :-()\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 12:34:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Current CVS - something broken in contrib " }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> In any contrib module 'make installcheck' runs infinite time...\n\nActually, I had managed to break \\copy, not COPY --- it seems the\nmain regression tests exercise COPY but not \\copy. It might be\na good idea to change copy2.sql to exercise both ...\n\nAnyway, fix committed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 13:05:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Current CVS - something broken in contrib " } ]
[ { "msg_contents": "I'm currently using 7.3b2 for test and development. I ran into a problem\nusing a dumped schema from pg_dump. After importing the dumped schema,\nany delete or update involving a foreign key results in a relation 0\ndoes not exist error. I noticed that all my foreign key declarations\nwere moved from the table create to separate statements at the bottom of\nthe dump file. Thanks in advance for any insight into this problem you\ncan lend.\n\n-John\n\n\n", "msg_date": "15 Oct 2002 14:53:46 -0400", "msg_from": "John Halderman <john@paymentalliance.net>", "msg_from_op": true, "msg_subject": "foreign key problem with pg_dump under 7.3b2" }, { "msg_contents": "On 15 Oct 2002, John Halderman wrote:\n\n> I'm currently using 7.3b2 for test and development. I ran into a problem\n> using a dumped schema from pg_dump. After importing the dumped schema,\n> any delete or update involving a foreign key results in a relation 0\n> does not exist error. I noticed that all my foreign key declarations\n> were moved from the table create to separate statements at the bottom of\n> the dump file. Thanks in advance for any insight into this problem you\n> can lend.\n\nIf the data has moved from earlier versions (I think 7.0.x) there was a\nbug in an older pg_dump that dropped a piece of information (the\nrelated table) and the loss would be carried along. That\ninformation is now used rather than the name passed in the args\nso said dumps break on b2. Current sources should fill in the missing\ninformation whenever possible.\n\n", "msg_date": "Tue, 15 Oct 2002 12:12:48 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: foreign key problem with pg_dump under 7.3b2" } ]
[ { "msg_contents": "According to the syntax diagram in the documentation, I can write\n\nCOPY table TO STDOUT WITH BINARY OIDS;\n\nShouldn't the \"binary\", being an adjective, be attached to something?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 15 Oct 2002 20:54:16 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "COPY syntax" }, { "msg_contents": "Peter Eisentraut wrote:\n> According to the syntax diagram in the documentation, I can write\n> \n> COPY table TO STDOUT WITH BINARY OIDS;\n> \n> Shouldn't the \"binary\", being an adjective, be attached to something?\n\nUh, it is attached to WITH?\n\nSeriously, yea, it doesn't read well, but it follows the WITH format of\nparameters to a command. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 15 Oct 2002 21:09:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY syntax" }, { "msg_contents": "Bruce Momjian writes:\n\n> > According to the syntax diagram in the documentation, I can write\n> >\n> > COPY table TO STDOUT WITH BINARY OIDS;\n> >\n> > Shouldn't the \"binary\", being an adjective, be attached to something?\n>\n> Uh, it is attached to WITH?\n\nAttached to a noun phrase, like \"mode\" or \"output\".
Note that all the\nother things that typically follow WITH in any command are nouns.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 17 Oct 2002 00:12:08 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: COPY syntax" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > According to the syntax diagram in the documentation, I can write\n> > >\n> > > COPY table TO STDOUT WITH BINARY OIDS;\n> > >\n> > > Shouldn't the \"binary\", being an adjective, be attached to something?\n> >\n> > Uh, it is attached to WITH?\n> \n> Attached to a noun phrase, like \"mode\" or \"output\". Note that all the\n> other things that typically follow WITH in any command are nouns.\n\nShould we add an optional MODE after BINARY?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 18:25:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY syntax" }, { "msg_contents": "Bruce Momjian writes:\n > Peter Eisentraut wrote:\n > > Bruce Momjian writes:\n > > > > COPY table TO STDOUT WITH BINARY OIDS;\n > > > > Shouldn't the \"binary\", being an adjective, be attached to something?\n > > > Uh, it is attached to WITH?\n > > Attached to a noun phrase, like \"mode\" or \"output\". Note that all the\n > > other things that typically follow WITH in any command are nouns.\n > Should we add an optional MODE after BINARY?\n\nAre you serious?
You'd like to mess up the COPY syntax even further\nfor a purely grammatical reason!\n\nA good few months ago I put forward an idea to change (well migrate\nreally) to \"COPY TABLE\" rather than \"COPY\" - this would allow a well\ndesigned and thought-out syntax for the new version while retaining old\ncompatibility.\n\nRegards, Lee Kindness.\n", "msg_date": "Thu, 17 Oct 2002 09:33:54 +0100", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: COPY syntax" }, { "msg_contents": "Lee Kindness wrote:\n> Bruce Momjian writes:\n> > Peter Eisentraut wrote:\n> > > Bruce Momjian writes:\n> > > > > COPY table TO STDOUT WITH BINARY OIDS;\n> > > > > Shouldn't the \"binary\", being an adjective, be attached to something?\n> > > > Uh, it is attached to WITH?\n> > > Attached to a noun phrase, like \"mode\" or \"output\". Note that all the\n> > > other things that typically follow WITH in any command are nouns.\n> > Should we add an optional MODE after BINARY?\n> \n> Are you serious? You'd like to mess up the COPY syntax even further\n> for a purely grammatical reason!\n> \n> A good few months ago I put forward an idea to change (well migrate\n> really) to \"COPY TABLE\" rather than \"COPY\" - this would allow a well\n> designed and thought-out syntax for the new version while retaining old\n> compatibility.\n\nI don't like the added MODE either, but Peter doesn't seem to like\nBINARY alone, though it seems fine to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 17 Oct 2002 14:57:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY syntax" }, { "msg_contents": "Lee Kindness writes:\n\n> Are you serious?
You'd like to mess up the COPY syntax even further\n> for a purely grammatical reason!\n\nWe already \"messed up\" the COPY syntax in this release to achieve better\nuser friendliness. I do not think it's unreasonable to review this goal\nfrom a variety of angles.\n\n> A good few months ago I put formward an idea to change (well migrate\n> really) to \"COPY TABLE\" rather than \"COPY\" - this would allow a well\n> designed and thoughtout syntax for the new version while retaining old\n> compatibility.\n\nWell, I am the first to agree that the current syntax is not well\ndesigned, but I must admit that I don't quite see what benefit simply\nadding \"TABLE\" would have.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 17 Oct 2002 21:17:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: COPY syntax" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Well, I am the first to agree that the current syntax is not well\n> designed, but I must admit that I don't quite see what benefit simply\n> adding \"TABLE\" would have.\n\nI think the idea was that \"COPY TABLE ...\" could have a new clean syntax\nwithout the warts of the current syntax. TABLE wouldn't be a noise word,\nbut a trigger for a different syntax for what follows.\n\nHowever, COPY's feature set is inherently pretty wart-y. Even if we had\na green field to design syntax in, where exactly is the improvement\ngoing to come, assuming that functionality has to stay the same?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 23:16:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COPY syntax " } ]
[ { "msg_contents": "On Tue, 2002-10-15 at 15:12, Stephan Szabo wrote:\n> On 15 Oct 2002, John Halderman wrote:\n> \n> > I'm currently using 7.3b2 for test and development. I ran into a\nproblem\n> > using a dumped schema from pg_dump. After importing the dumped\nschema,\n> > any delete or update involving a foreign key results in a relation 0\n> > does not exist error. I noticed that all my foreign key declarations\n> > were moved from the table create to separate statements at the\nbottom of\n> > the dump file. Thanks in advance for any insight into this problem\nyou\n> > can lend.\n> \n> If the data has moved from earlier versions (I think 7.0.x) there was\na\n> bug in an older pg_dump that dropped a piece of information (the\n> related table) and the loss would be carried along. That\n> information is now used rather than the name passed in the args\n> so said dumps break on b2. Current sources should fill in the missing\n> information whenever possible.\n> \n> \nActually we are dumping from b2 to b2. Also the problem doesn't seem to\nbe \nrelated to the data or missing data. I can infer this because I am doing\na schema only dump. After I import this dump i create some test data and\nstill run into the relation 0 does not exist error. I think it has\nsomething to do with the way the dump defines the foreign key\nconstraints and triggers. Thanks again for the help.\n\n-john\n\n", "msg_date": "15 Oct 2002 15:33:35 -0400", "msg_from": "John Halderman <john@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Re: foreign key problem with pg_dump under 7.3b2" } ]
[ { "msg_contents": "On Tue, 2002-10-15 at 15:38, Stephan Szabo wrote:\n> \n> On 15 Oct 2002, John Halderman wrote:\n> \n> > On Tue, 2002-10-15 at 15:12, Stephan Szabo wrote:\n> > > On 15 Oct 2002, John Halderman wrote:\n> > >\n> > > > I'm currently using 7.3b2 for test and development. I ran into a problem\n> > > > using a dumped schema from pg_dump. After importing the dumped schema,\n> > > > any delete or update involving a foreign key results in a relation 0\n> > > > does not exist error. I noticed that all my foreign key declarations\n> > > > were moved from the table create to separate statements at the bottom of\n> > > > the dump file. Thanks in advance for any insight into this problem you\n> > > > can lend.\n> > >\n> > > If the data has moved from earlier versions (I think 7.0.x) there was a\n> > > bug in an older pg_dump that dropped a piece of information (the\n> > > related table) and the loss would be carried along. That\n> > > information is now used rather than the name passed in the args\n> > > so said dumps break on b2. Current sources should fill in the missing\n> > > information whenever possible.\n> > >\n> > >\n> > Actually we are dumping from b2 to b2. Also the problem doesn't seem to be\n> > related to the data or missing data. I can infer this because I am doing\n> > a schema only dump. After I import this dump i create some test data and\n> > still run into the relation 0 does not exist error. I think it has\n> > something to do with the way the dump defines the foreign key\n> > constraints and triggers. Thanks again for the help.\n> \n> Was your old b2 system loaded from a dump? If so, you'd be in the upgrade\n> portion of the problem. Old dumps were incorrect, and as soon as you\n> loaded from one of those dumps all future dumps became incorrect in the\n> same way. Current sources notice that the item is missing and attempts\n> to figure out what it should be.\n> \n> \nInteresting, that may be it.
I'll do some testing to verify your theory.\nThank you for your help.\n\n-john.\n\n", "msg_date": "15 Oct 2002 15:36:05 -0400", "msg_from": "John Halderman <john@paymentalliance.net>", "msg_from_op": true, "msg_subject": "Re: foreign key problem with pg_dump under 7.3b2" } ]
[ { "msg_contents": "Hi all,\n\nI'm thinking that there is an improvement to vacuum which could be made\nfor 7.4. VACUUM FULLing large, heavily updated tables is a pain. There's\nvery little an application can do to minimise dead-tuples, particularly if\nthe table is randomly updated. Wouldn't it be beneficial if VACUUM could\nhave a parameter which specified how much of the table is vacuumed. That\nis, you could specify:\n\nVACUUM FULL test 20 percent;\n\nYes, terrible syntax but regardless: this would mean that we could\nspread the vacuum out and not, possibly, be backing up queues. ANALYZE\ncould be modified, if necessary.\n\nThoughts?\n\nGavin\n\n", "msg_date": "Wed, 16 Oct 2002 10:22:06 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Vacuum improvement" }, { "msg_contents": "That's a good idea. That way, if your database slows during specific\nwindows in time, you can vacuum larger sizes, etc. Seemingly would help\nyou better manage your vacuuming against system loading.\n\n\nGreg\n\nOn Tue, 2002-10-15 at 19:22, Gavin Sherry wrote:\n> Hi all,\n> \n> I'm thinking that there is an improvement to vacuum which could be made\n> for 7.4. VACUUM FULLing large, heavily updated tables is a pain. There's\n> very little an application can do to minimise dead-tuples, particularly if\n> the table is randomly updated. Wouldn't it be beneficial if VACUUM could\n> have a parameter which specified how much of the table is vacuumed. That\n> is, you could specify:\n> \n> VACUUM FULL test 20 percent;\n> \n> Yes, terrible syntax but regardless: this would mean that we could\n> spread the vacuum out and not, possibly, be backing up queues.
ANALYZE\n> could be modified, if necessary.\n> \n> Thoughts?\n> \n> Gavin\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster", "msg_date": "15 Oct 2002 20:34:01 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> have a parameter which specified how much of the table is vacuumed. That\n> is, you could specify:\n> VACUUM FULL test 20 percent;\n\nErm ... but which 20 percent? In other words, how could you arrange for\nrepeated applications of such a command to cover the whole table, and\nnot just retrace an already-cleaned-out portion?\n\nI don't object to the idea in principle, but I am not sure how to\nimplement it in practice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Oct 2002 23:52:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement " }, { "msg_contents": "On Tue, Oct 15, 2002 at 11:52:35PM -0400, Tom Lane wrote:\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > have a parameter which specified how much of the table is vacuumed. That\n> > is, you could specify:\n> > VACUUM FULL test 20 percent;\n> \n> Erm ... but which 20 percent? In other words, how could you arrange for\n> repeated applications of such a command to cover the whole table, and\n> not just retrace an already-cleaned-out portion?\n\nMaybe each relation block can have a last-vacuumed timestamp? Somewhere\nin the table there would have to be a linked list of least-recently\nvacuumed blocks so the vacuum cleaner does not have to read every\nblock to know which one to clean.\n\nOr maybe some system table can provide information about activity in\neach block since last vacuum.
This forces the use of the stat\ncollector...\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"El sabio habla porque tiene algo que decir;\nel tonto, porque tiene que decir algo\" (Platon).\n", "msg_date": "Wed, 16 Oct 2002 01:04:23 -0300", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement" }, { "msg_contents": "On Wed, 2002-10-16 at 05:22, Gavin Sherry wrote:\n> Hi all,\n> \n> I'm thinking that there is an improvement to vacuum which could be made\n> for 7.4. VACUUM FULLing large, heavily updated tables is a pain. There's\n> very little an application can do to minimise dead-tuples, particularly if\n> the table is randomly updated. Wouldn't it be beneficial if VACUUM could\n> have a parameter which specified how much of the table is vacuumed. That\n> is, you could specify:\n> \n> VACUUM FULL test 20 percent;\n\nWhat about\n\nVACUUM FULL test WORK 5 SLEEP 50;\n\nmeaning to VACUUM FULL the whole table, but to work in small chunks and\nrelease all locks and let others access the tables between these ?\n\nYou could even fire up a separate thread to \n\nVACUUM [FULL] test WORK 5 SLEEP 50 CONTINUOUS;\n\nTo keep vacuuming a heavily updated table.\n\n------------------\nHannu\n\n\n", "msg_date": "16 Oct 2002 09:16:44 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement" }, { "msg_contents": "On 16 Oct 2002, Hannu Krosing wrote:\n\n> On Wed, 2002-10-16 at 05:22, Gavin Sherry wrote:\n> > Hi all,\n> > \n> > I'm thinking that there is an improvement to vacuum which could be made\n> > for 7.4. VACUUM FULLing large, heavily updated tables is a pain. There's\n> > very little an application can do to minimise dead-tuples, particularly if\n> > the table is randomly updated. Wouldn't it be beneficial if VACUUM could\n> > have a parameter which specified how much of the table is vacuumed.
That\n> > is, you could specify:\n> > \n> > VACUUM FULL test 20 percent;\n> \n> What about\n> \n> VACUUM FULL test WORK 5 SLEEP 50;\n> \n> meaning to VACUUM FULL the whole table, but to work in small chunks and\n> release all locks and let others access the tables between these ?\n\nGreat idea. I think this could work as a complement to the idea I had. To\nanswer Tom's question, how would we know what we've vacuumed, we could\nstore the range of tids we've vacuumed in pg_class. Or, we could store the\nblock offset of where we left off vacuuming before and using stats, run\nfor another X% of the heap. Is this possible?\n\nGavin\n\n", "msg_date": "Wed, 16 Oct 2002 17:29:37 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Re: Vacuum improvement" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> meaning to VACUUM FULL the whole table, but to work in small chunks and\n> release all locks and let others access the tables between these ?\n\nAFAICS this is impossible for VACUUM FULL. You can't let other accesses\nin and then resume processing, because that invalidates all the state\nyou have about where to put moved tuples.\n\nBut the whole point of developing non-FULL vacuuming was to make\nsomething that could be run concurrently with other stuff. I fail\nto see the point of insisting that frequent vacuums be FULL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Oct 2002 10:17:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement " }, { "msg_contents": "On Wed, 2002-10-16 at 02:29, Gavin Sherry wrote:\n> On 16 Oct 2002, Hannu Krosing wrote:\n> \n> > On Wed, 2002-10-16 at 05:22, Gavin Sherry wrote:\n> > > Hi all,\n> > > \n> > > I'm thinking that there is an improvement to vacuum which could be made\n> > > for 7.4. VACUUM FULLing large, heavily updated tables is a pain.
There's\n> > > very little an application can do to minimise dead-tuples, particularly if\n> > > the table is randomly updated. Wouldn't it be beneficial if VACUUM could\n> > > have a parameter which specified how much of the table is vacuumed. That\n> > > is, you could specify:\n> > > \n> > > VACUUM FULL test 20 percent;\n> > \n> > What about\n> > \n> > VACUUM FULL test WORK 5 SLEEP 50;\n> > \n> > meaning to VACUUM FULL the whole table, but to work in small chunks and\n> > release all locks and let others access the tables between these ?\n> \n> Great idea. I think this could work as a complement to the idea I had. To\n> answer Tom's question, how would we know what we've vacuumed, we could\n> store the range of tids we've vacuumed in pg_class. Or, we could store\n> the block offset of where we left off vacuuming before and using stats,\n> run for another X% of the heap. Is this possible?\n\nWhy couldn't you start your % from the first rotten/dead tuple? Just\nreading through trying to find the first tuple to start counting from\nwouldn't hold locks would it? That keeps you from having to track stats\nand ensures that X% of the tuples will be vacuumed.\n\nGreg", "msg_date": "16 Oct 2002 09:30:01 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement" }, { "msg_contents": "Vacuum full locks the whole table currently. I was thinking if you used something \nsimilar to a hard drive defragmenter, only 2 rows would need to be locked \nat a time. When you're done vacuum/defragmenting you shorten the file to \ndiscard the dead tuples that are located after your useful data. There might \nbe a need to lock the table for a little while at the end but it seems like \nyou could reduce that time greatly.\n\nI had one table that is heavily updated and it grew to 760 MB even with \nregular vacuuming. A vacuum full reduced it to 1.1 MB.
I am running 7.2.0 \n(all my vacuuming is done by superuser).\n\nOn Wednesday 16 October 2002 09:30 am, Greg Copeland wrote:\n> On Wed, 2002-10-16 at 02:29, Gavin Sherry wrote:\n> > On 16 Oct 2002, Hannu Krosing wrote:\n> > > On Wed, 2002-10-16 at 05:22, Gavin Sherry wrote:\n> > > > Hi all,\n> > > >\n> > > > I'm thinking that there is an improvement to vacuum which could be\n> > > > made for 7.4. VACUUM FULLing large, heavily updated tables is a pain.\n> > > > There's very little an application can do to minimise dead-tuples,\n> > > > particularly if the table is randomly updated. Wouldn't it be\n> > > > beneficial if VACUUM could have a parameter which specified how much\n> > > > of the table is vacuumed. That is, you could specify:\n> > > >\n> > > > VACUUM FULL test 20 percent;\n> > >\n> > > What about\n> > >\n> > > VACUUM FULL test WORK 5 SLEEP 50;\n> > >\n> > > meaning to VACUUM FULL the whole table, but to work in small chunks and\n> > > release all locks and let others access the tables between these ?\n> >\n> > Great idea. I think this could work as a complement to the idea I had. To\n> > answer Tom's question, how would we know what we've vacuumed, we could\n> > store the range of tids we've vacuumed in pg_class. Or, we could store\n> > the block offset of where we left off vacuuming before and using stats,\n> > run for another X% of the heap. Is this possible?\n>\n> Why couldn't you start your % from the first rotten/dead tuple? Just\n> reading through trying to find the first tuple to start counting from\n> wouldn't hold locks would it? That keeps you from having to track stats\n> and ensures that X% of the tuples will be vacuumed.\n>\n> Greg\n\n", "msg_date": "Wed, 16 Oct 2002 11:33:35 -0500", "msg_from": "David Walker <pgsql@grax.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement" }, { "msg_contents": "But doesn't the solution I offer present a possible work around?
The\ntable wouldn't need to be locked (I think) until the first dead tuple\nwas located. After that, you would only keep the locks until you've\nscanned X% of the table and shrunk as needed. The result, I think,\nis incremental vacuuming with shorter duration locks being\nheld. It's not ideal (locks) but may shorten the duration of blocking\ncaused by locks.\n\nI'm trying to figure out if the two approaches can't be combined\nsomehow. That is, a percent with maybe even a max lock duration?\n\nGreg\n\n\n\nOn Wed, 2002-10-16 at 11:33, David Walker wrote:\n> Vacuum full locks the whole table currently. I was thinking if you used a \n> similar to a hard drive defragment that only 2 rows would need to be locked \n> at a time. When you're done vacuum/defragmenting you shorten the file to \n> discard the dead tuples that are located after your useful data. There might \n> be a need to lock the table for a little while at the end but it seems like \n> you could reduce that time greatly.\n> \n> I had one table that is heavily updated and it grew to 760 MB even with \n> regular vacuuming. A vacuum full reduced it to 1.1 MB. I am running 7.2.0 \n> (all my vacuuming is done by superuser).\n> \n> On Wednesday 16 October 2002 09:30 am, (Via wrote:\n> > On Wed, 2002-10-16 at 02:29, Gavin Sherry wrote:\n> > > On 16 Oct 2002, Hannu Krosing wrote:\n> > > > On Wed, 2002-10-16 at 05:22, Gavin Sherry wrote:\n> > > > > Hi all,\n> > > > >\n> > > > > I'm thinking that there is an improvement to vacuum which could be\n> > > > > made for 7.4. VACUUM FULLing large, heavily updated tables is a pain.\n> > > > > There's very little an application can do to minimise dead-tuples,\n> > > > > particularly if the table is randomly updated. Wouldn't it be\n> > > > > beneficial if VACUUM could have a parameter which specified how much\n> > > > > of the table is vacuumed.
That is, you could specify:\n> > > > >\n> > > > > VACUUM FULL test 20 precent;\n> > > >\n> > > > What about\n> > > >\n> > > > VACUUM FULL test WORK 5 SLEEP 50;\n> > > >\n> > > > meaning to VACUUM FULL the whole table, but to work in small chunks and\n> > > > relaese all locks and let others access the tables between these ?\n> > >\n> > > Great idea. I think this could work as a complement to the idea I had. To\n> > > answer Tom's question, how would we know what we've vacuumed, we could\n> > > store the range of tids we've vacuumed in pg_class. Or, we could store\n> > > the block offset of where we left off vacuuming before and using stats,\n> > > run for another X% of the heap. Is this possible?\n> >\n> > Why couldn't you start your % from the first rotten/dead tuple? Just\n> > reading through trying to find the first tuple to start counting from\n> > wouldn't hold locks would it? That keeps you from having to track stats\n> > and ensures that X% of the tuples will be vacuumed.\n> >\n> > Greg\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly", "msg_date": "16 Oct 2002 11:52:43 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement" }, { "msg_contents": "On Wed, 2002-10-16 at 12:33, David Walker wrote:\n> Vacuum full locks the whole table currently. I was thinking if you used a \n> similar to a hard drive defragment that only 2 rows would need to be locked \n> at a time. When you're done vacuum/defragmenting you shorten the file to \n> discard the dead tuples that are located after your useful data. 
There might \n> be a need to lock the table for a little while at the end but it seems like \n> you could reduce that time greatly.\n> \n> I had one table that is heavily updated and it grew to 760 MB even with \n> regular vacuuming. A vacuum full reduced it to 1.1 MB. I am running 7.2.0 \n> (all my vacuuming is done by superuser).\n> \n\nNot that I'm against the idea, but isn't this just a sign that you're just\nnot vacuuming frequently enough? \n\nRobert Treat\n\n\n", "msg_date": "16 Oct 2002 13:18:25 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Vacuum improvement" } ]
[ { "msg_contents": "\nIs there any plans to make postgresql multithreading?\n\nThanks in advance (and also for all who commented to my question\nregarding replication.)\n\n\tAnuradha\n\nNB: please don't open fire to declare war on whether multithreading is\nneeded for PGSql or not. I am just expecting a black and white answer\nfrom the `authorities' ;)\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nIf you can survive death, you can probably survive anything.\n\n", "msg_date": "Wed, 16 Oct 2002 10:56:53 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Postgresql and multithreading" }, { "msg_contents": "Anuradha Ratnaweera wrote:\n> \n> Is there any plans to make postgresql multithreading?\n> \n> Thanks in advance (and also for all who commented to my question\n> regarding replication.)\n> \n> \tAnuradha\n> \n> NB: please don't open fire to declare war on whether multithreading is\n> needed for PGSql or not. I am just expecting a black and white answer\n> from the `authorities' ;)\n\nWe don't think it is needed, except perhaps for Win32 and Solaris, which\nhave slow process creation times.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 00:59:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, Oct 16, 2002 at 12:59:57AM -0400, Bruce Momjian wrote:\n> Anuradha Ratnaweera wrote:\n> > \n> > Is there any plans to make postgresql multithreading?\n> \n> We don't think it is needed, except perhaps for Win32 and Solaris, which\n> have slow process creation times.\n\nThanks, Bruce. 
But what I want to know is whether multithreading is\nlikely to get into in postgresql, say somewhere in 8.x, or even in 9.x?\n(as they did with Apache). Are there any plans to do so, or is postgres\ngoing to remain rather a multi-process application?\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nOne nice thing about egotists: they don't talk about other people.\n\n", "msg_date": "Wed, 16 Oct 2002 11:21:22 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, 16 Oct 2002, Anuradha Ratnaweera wrote:\n\n> \n> Is there any plans to make postgresql multithreading?\n> \n> Thanks in advance (and also for all who commented to my question\n> regarding replication.)\n> \n> \tAnuradha\n> \n> NB: please don't open fire to declare war on whether multithreading is\n> needed for PGSql or not. I am just expecting a black and white answer\n> from the `authorities' ;)\n\nThis has been discussed at length in the past. The answer has always been\nno. Consult the archives for extensive heated discussion :-).\n\nGavin\n\n", "msg_date": "Wed, 16 Oct 2002 15:24:03 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Anuradha Ratnaweera wrote:\n> On Wed, Oct 16, 2002 at 12:59:57AM -0400, Bruce Momjian wrote:\n> > Anuradha Ratnaweera wrote:\n> > > \n> > > Is there any plans to make postgresql multithreading?\n> > \n> > We don't think it is needed, except perhaps for Win32 and Solaris, which\n> > have slow process creation times.\n> \n> Thanks, Bruce. But what I want to know is whether multithreading is\n> likely to get into in postgresql, say somewhere in 8.x, or even in 9.x?\n> (as they did with Apache). 
Are there any plans to do so, or is postgres\n> going to remain rather a multi-process application?\n\nIt may be optional some day, most likely for Win32 at first, but we see\nlittle value to it on most other platforms; of course, we may be wrong.\nI am also not sure if it is a big win on Apache either; I think the\njury is still out on that one, hence the slow adoption of 2.X, and we\ndon't want to add threads and make a mess of the code or slow it down,\nwhich does often happen.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 01:25:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, Oct 16, 2002 at 01:25:23AM -0400, Bruce Momjian wrote:\n> Anuradha Ratnaweera wrote:\n>\n> > ... what I want to know is whether multithreading is likely to get\n> > into in postgresql, say somewhere in 8.x, or even in 9.x?\n> \n> It may be optional some day, most likely for Win32 at first, but we see\n> little value to it on most other platforms; of course, we may be wrong.\n\nIn that case, I wonder if it is worth folking a new project to add\nthreading support to the backend? 
Of course, keeping in sync with the\noriginal would be a lot of work.\n\nIn that way, one should be able to test the hypothesis (whether threads\nimprove things, or the other way round - if one likes it that way :))\nwithout messing around with stable postgres code, as they did and do\nwith postgresql-R.\n\nAnd a minor question is whether it is legal to keep the _changes_ in such\na project GPL?\n\n> I am also not sure if it is a big win on Apache either; I think the\n> jury is still out on that one, hence the slow adoption of 2.X,\n\nAs far as we are concerned, it is the stability, rather than speed which\nstill keeps us in 1.3.\n\n> and we don't want to add threads and make a mess of the code or slow\n> it down, which does often happen.\n\nFully agreed.\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\n\tEquality is not when a female Einstein gets promoted to assistant\nprofessor; equality is when a female schlemiel moves ahead as fast as a\nmale schlemiel.\n\t\t-- Ewald Nyquist\n\n", "msg_date": "Wed, 16 Oct 2002 11:34:43 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Anuradha Ratnaweera wrote:\n> On Wed, Oct 16, 2002 at 01:25:23AM -0400, Bruce Momjian wrote:\n> > Anuradha Ratnaweera wrote:\n> >\n> > > ... what I want to know is whether multithreading is likely to get\n> > > into in postgresql, say somewhere in 8.x, or even in 9.x?\n> > \n> > It may be optional some day, most likely for Win32 at first, but we see\n> > little value to it on most other platforms; of course, we may be wrong.\n> \n> In that case, I wonder if it is worth folking a new project to add\n> threading support to the backend?
Of course, keeping in sync with the\n> original would be lot of work.\n\nProbably not, but you can try.\n\n> In that way, one should be able to test the hypothesis (whether threads\n> improve things, or the other way round - if one likes it it that way :))\n> without messing around with stable postgres code, as they did and do\n> with postgresql-R.\n\nI guess.\n\n> And a minor question is wheter it is legal to keep the _changes_ in such\n> a project GPL?\n\nWe don't think we change the license, and we are happy with BSD. It\ncertainly will never be merged in with a GPL, I can say that for sure.\n\n> > I am also not sure if it is a big win on Apache either; I think the\n> > jury is still out on that one, hence the slow adoption of 2.X,\n> \n> As far as we are concened, it is the stability, rather than speed which\n> still keeps us in 1.3.\n\nYou could easily lose stability with threads -- don't think they are a\nfree ride --- they aren't, and no, I don't feel like regurgitating what\nis already a 'thread' link on the TODO list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 01:38:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, 16 Oct 2002, Anuradha Ratnaweera wrote:\n\n> On Wed, Oct 16, 2002 at 01:25:23AM -0400, Bruce Momjian wrote:\n> > Anuradha Ratnaweera wrote:\n> >\n> > > ... 
what I want to know is whether multithreading is likely to get\n> > > into in postgresql, say somewhere in 8.x, or even in 9.x?\n> > \n> > It may be optional some day, most likely for Win32 at first, but we see\n> > little value to it on most other platforms; of course, we may be wrong.\n> \n> In that case, I wonder if it is worth folking a new project to add\n> threading support to the backend? Of course, keeping in sync with the\n> original would be lot of work.\n\nhttp://sourceforge.net/projects/mtpgsql\n\n> And a minor question is wheter it is legal to keep the _changes_ in such\n> a project GPL?\n\nDo you mean 'relicence the forked copy'?\n\nGavin\n\n", "msg_date": "Wed, 16 Oct 2002 15:40:47 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, Oct 16, 2002 at 03:40:47PM +1000, Gavin Sherry wrote:\n> On Wed, 16 Oct 2002, Anuradha Ratnaweera wrote:\n> \n> > And a minor question is wheter it is legal to keep the _changes_ in such\n> > a project GPL?\n> \n> Do you mean 'relicence the forked copy'?\n\nNope. To keep the `original' code licence as it is and to release the\nchanges GPL? Is the question sane at first place?\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nYou got to be very careful if you don't know where you're going,\nbecause you might not get there.\n\t\t-- Yogi Berra\n\n", "msg_date": "Wed, 16 Oct 2002 11:45:11 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Anuradha Ratnaweera wrote:\n> On Wed, Oct 16, 2002 at 03:40:47PM +1000, Gavin Sherry wrote:\n> > On Wed, 16 Oct 2002, Anuradha Ratnaweera wrote:\n> > \n> > > And a minor question is wheter it is legal to keep the _changes_ in such\n> > > a project GPL?\n> > \n> > Do you mean 'relicence the forked copy'?\n> \n> Nope. 
To keep the `original' code licence as it is and to release the\nchanges GPL? Is the question sane in the first place?\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nYou got to be very careful if you don't know where you're going,\nbecause you might not get there.\n\t\t-- Yogi Berra\n\n", "msg_date": "Wed, 16 Oct 2002 11:45:11 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Anuradha Ratnaweera wrote:\n> On Wed, Oct 16, 2002 at 03:40:47PM +1000, Gavin Sherry wrote:\n> > On Wed, 16 Oct 2002, Anuradha Ratnaweera wrote:\n> > \n> > > And a minor question is wheter it is legal to keep the _changes_ in such\n> > > a project GPL?\n> > \n> > Do you mean 'relicence the forked copy'?\n> \n> Nope.
Is the question sane at first place?\n> \n> \tAnuradha\n> \n> -- \n> \n> Debian GNU/Linux (kernel 2.4.18-xfs-1.1)\n> \n> You got to be very careful if you don't know where you're going,\n> because you might not get there.\n> \t\t-- Yogi Berra\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 01:51:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, Oct 16, 2002 at 01:51:28AM -0400, Bruce Momjian wrote:\n> \n> Let me add one more thing on this \"thread\". This is one email in a\n> long list of \"Oh, gee, you aren't using that wizz-bang new\n> sync/thread/aio/raid/raw feature\" discussion where someone shows up\n> and wants to know why. Does anyone know how to address these,\n> efficiently?\n\nIf somebody pops up asks such dumb questions without even looking at the\nFAQ, it is bad, if not idiotic, because it takes useful time away from\nthe developers.\n\nBut my question was not `why don't you implement this feature?`, but `do\nyou have plans to implement this feature in the future?', and in the\nopen source spirit of `if something is not there, go implement it\nyourself - without troubling developers' ;)\n\nAlso, I have read the section 1.9 of the developers FAQ (Why don't we\nuse threads in the backend?) long, long ago.\n\n> If we discuss it, it ends up causing a lot of effort on our part for\n> the requestor to finally say, \"Oh, gee, I didn't realize that.\"\n\nPlease don't. 
See the \"NB\" at end of my first mail of this thread.\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nQOTD:\n\t\"I'll listen to reason when it comes out on CD.\"\n\n", "msg_date": "Wed, 16 Oct 2002 12:13:15 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On 16 Oct 2002 at 1:25, Bruce Momjian wrote:\n\n> Anuradha Ratnaweera wrote:\n> > Thanks, Bruce. But what I want to know is whether multithreading is\n> > likely to get into in postgresql, say somewhere in 8.x, or even in 9.x?\n> > (as they did with Apache). Are there any plans to do so, or is postgres\n> > going to remain rather a multi-process application?\n> It may be optional some day, most likely for Win32 at first, but we see\n> little value to it on most other platforms; of course, we may be wrong.\n> I am also not sure if it is a big win on Apache either; I think the\n\nWell, I have done some stress testing on 1.3.26 and 2.0.39. Under same hardware \nand network setup and same test case, 1.3.26 maxed at 475-500 requests/sec and \n2.0.39 gave flat 800 requests/sec.\n\nYes, under light load, there is hardly any difference. But Apache2 series is \ndefinitely an improvement.\n\n> jury is still out on that one, hence the slow adoption of 2.X, and we\n> don't want to add threads and make a mess of the code or slow it down,\n> which does often happen.\n\nWell, slow adoption rate is attributed to 'apache 1.3.x is good enough for us' \nsyndrome, as far as I can see from news. 
Once linux distros start shipping with \napache 2.x series *only*, the upgrade cycle will start rolling, I guess.\n\nBye\n Shridhar\n\n--\nProgramming Department:\tMistakes made while you wait.\n\n", "msg_date": "Wed, 16 Oct 2002 11:57:13 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On 16 Oct 2002 at 15:40, Gavin Sherry wrote:\n\n> > In that case, I wonder if it is worth folking a new project to add\n> > threading support to the backend? Of course, keeping in sync with the\n> > original would be lot of work.\n> \n> http://sourceforge.net/projects/mtpgsql\n\nLast discussion that happened for multithreading was not to add threads per \nconnection like mysql, but to split tasks between threads so that IO blocking \nand efficiently using SMP abilities would be taken care of IIRC..\n\nOne thread per connection isn't going to change much, at least for mainstream \npostgresql. For embedded, it might be a necessity..\n\nBye\n Shridhar\n\n--\nQOTD:\t\"You're so dumb you don't even have wisdom teeth.\"\n\n", "msg_date": "Wed, 16 Oct 2002 12:00:18 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Bruce Momjian wrote:\n> Anuradha Ratnaweera wrote:\n<snip>\n> > Nope. To keep the `original' code licence as it is and to release the\n> > changes GPL? Is the question sane at first place?\n> \n> That would be a pretty big mess, I think. People would add your patch\n> to our BSD code and it would be GPL. It could be done, of course.\n\nDon't think so.
The patches would be \"derived code\" that only exist\nbecause of the BSD licensed PostgreSQL base.\n\nBeing \"derived code\" they'd have to be released as BSD and GPL wouldn't\nenter the picture, regardless if they're released separately as add-on\npatches or not.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 16 Oct 2002 18:34:52 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, 2002-10-16 at 01:27, Shridhar Daithankar wrote:\n> Well, slow adoption rate is attributed to 'apache 1.3.x is good enough for us' \n> syndrome, as far as I can see from news. Once linux distros start shipping with \n> apache 2.x series *only*, the upgrade cycle will start rolling, I guess.\n\n\nI think that's part of it. I think the other part is that by the time\nyou're getting to huge r/s numbers, typical web site bandwidth is\nalready used up. So, what's the point in adding more breathing room\nwhen you don't have the bandwidth to use it anyways.\n\nGreg", "msg_date": "16 Oct 2002 09:21:54 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, 2002-10-16 at 04:34, Justin Clift wrote:\n> Bruce Momjian wrote:\n> > Anuradha Ratnaweera wrote:\n> <snip>\n> > > Nope. To keep the `original' code licence as it is and to release the\n> > > changes GPL? 
Is the question sane at first place?\n> > \n> > That would be a pretty big mess, I think. People would add your patch\n> > to our BSD code and it would be GPL. It could be done, of course.\n> \n> Don't think so. The patches would be \"derived code\" that only exist\n> because of the BSD licensed PostgreSQL base.\n> \n> Being \"derived code\" they'd have to be released as BSD and GPL wouldn't\n> enter the picture, regardless if they're released separately as add-on\n> patches or not.\n> \n\nI'm pretty sure BSD allows you to relicense derived code as you see fit.\nHowever, any derived project that was released GPL would have a hell of\na time ever getting put back into the main source (short of\nrelicensing).\n\nRobert Treat\n\n\n", "msg_date": "16 Oct 2002 10:37:02 -0400", "msg_from": "Robert Treat <rtreat@webmd.net>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, 2002-10-16 at 16:37, Robert Treat wrote:\n\n> I'm pretty sure BSD allows you to relicense derived code as you see fit.\n> However, any derived project that was released GPL would have a hell of\n> a time ever getting put back into the main source (short of\n> relicensing).\n\nExactly. This is one of the reasons why BSD license is very much liked\nby proprietary software vendors (think MSFT), unlike the GPL which\ndoesn't allow someone to change and redistribute their work with\nrestrictive licenses.\n\nCheers,\nTycho\n\n(BTW: I'm not asking to change the license of Postgresql, I know the -\ndogmatic - answer to that one. So please don't misunderstand my mail)\n\n-- \nTycho Fruru\t\t\t tycho@fruru.com\n\"Prediction is extremely difficult. Especially about the future.\"\n - Niels Bohr\n\n\n\n", "msg_date": "16 Oct 2002 16:50:35 +0200", "msg_from": "Tycho Fruru <tycho@fruru.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Bruce Momjian wrote:\n> Let me add one more thing on this \"thread\". 
This is one email in a long\n> list of \"Oh, gee, you aren't using that wizz-bang new\n> sync/thread/aio/raid/raw feature\" discussion where someone shows up and\n> wants to know why. Does anyone know how to address these, efficiently?\n\nYou've brought up good points here. I'm sure that you consider me guilty of\nthis with regard to my aio discussions. So I might offer a few specific\nsuggestions.\n\n1) Someone's taking the time to generate a summary of the current thinking\nwith regard to a particular \"whiz-bang\" feature. - I can do this as a first\npass for aio, if you think it's a good idea.\n\n2) Including the pros and cons of the feature/implementation and how close\nthe group is to deciding whether something would be worth doing. - I can\nalso do this.\n\n3) A set of criteria that would need to be met or demonstrated before a\nfeature would be considered good enough for inclusion into the main code.\n\nIf there was a separate section of the document \"Developer's FAQ\" that\nhandled this, this would help.\n\n4) A development/feature philosophy section. Maybe three or four paragraphs\nwith one specific example: perhaps multi-threading.\n\n5) I'd also suggest changing some specfics in the FAQ Section 1.2 where\nthere is currently:\n\n* The usual process for source additions is:\n*\n* Review the TODO list.\n* Discuss hackers the desirability of the fix/feature.\n* How should it behave in complex circumstances?\n\nAdd here that a check should be made to the new section in the FAQ\ndescribed above. Also, section 1.1 has:\n\n* Discussion on the patch typically happens here. 
If the patch adds a\n* major feature, it would be a good idea to talk about it first on\n* the HACKERS list, in order to increase the chances of it being\n* accepted, as well as toavoid duplication of effort.\n\nWe should perhaps add here a section describing the phenomenon you describe\nregarding \"whiz-bang\" features.\n\nI tried to prepare as best I could before bringing anything forward to\nHACKERS. In particular, I read the last two years of archives with anything\nto do with the WAL log and looked at the current code, read the TODOs, read\na fair amount of discussions about aio. etc. So I was attempting to comply\nwith my interpretation of the process. Yet, despite these efforts, you no\ndoubt consider me guilty of creating unnecessary work, an outcome neither\nof us desired.\n\nI'm undeterred in my desire to come up with something meaningful and am\nworking on some interesting tests. I do, however, now know that the level\nof scepticism and the standards of architectural purity are high. I\nconsider this good all around.\n\n- Curtis\n\n", "msg_date": "Wed, 16 Oct 2002 14:08:21 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, 2002-10-16 at 23:08, Curtis Faith wrote:\n> Bruce Momjian wrote:\n> I tried to prepare as best I could before bringing anything forward to\n> HACKERS. In particular, I read the last two years of archives with anything\n> to do with the WAL log and looked at the current code, read the TODOs, read\n> a fair amount of discussions about aio. etc. So I was attempting to comply\n> with my interpretation of the process. 
Yet, despite these efforts, you no\n> doubt consider me guilty of creating unnecessary work, an outcome neither\n> of us desired.\n\nBut that \"unneccessary work\" resulted in Tom finding and fixing an\nunintended behaviour (aka a performance bug) that prevented postgres\nfrom ever doing more than 1 commit per disk revolution on non-lieing\nSCSI disks ;)\n\n> I'm undeterred in my desire to come up with something meaningful and am\n> working on some interesting tests. I do, however, now know that the level\n> of scepticism and the standards of architectural purity are high. I\n> consider this good all around.\n\nI still have big expectations for use of aio, especially considering\nthat at least for free OSes one is not forced to stop at the DB/OS\nboundary, but are free to go and improve the os side implementation as\nwell if it is needed.\n\nBut still some empirical tests are probably needed - if we can keep IO\noccupied for 99% in a meaningful way withou aio, then the time is\nprobably better spent on something else ;)\n\n--------------------\nHannu\n\n\n", "msg_date": "17 Oct 2002 00:25:37 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, Oct 16, 2002 at 02:08:21PM -0400, Curtis Faith wrote:\n> \n> 2) Including the pros and cons of the feature/implementation and how close\n> the group is to deciding whether something would be worth doing. - I can\n> also do this.\n\nThe pros and cons of many such features have been discussed over the\nlists as well as on the FAQs. But the second matter, the group's\nlikehood their implementation cannot always be deduced from those\ncommunications or from docs.\n\nTherefore suggested material into the FAQs are going to be extremely\nuseful to like-to-be developers. 
They also would hopefully reduce\nunnecessary traffic on the list.\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\nI have found little that is good about human beings. In my experience\nmost of them are trash.\n\t\t-- Sigmund Freud\n\n", "msg_date": "Thu, 17 Oct 2002 12:39:37 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Wed, 16 Oct 2002, Bruce Momjian wrote:\n\n> It may be optional some day, most likely for Win32 at first, but we see\n> little value to it on most other platforms; of course, we may be wrong.\n> I am also not sure if it is a big win on Apache either; I think the\n> jury is still out on that one, hence the slow adoption of 2.X, and we\n> don't want to add threads and make a mess of the code or slow it down,\n> which does often happen.\n\nActually, Apache2 is both multi-process and multi-thread, even in the\n'multi-thread' model ... I've played a bit with it, and it works at a\nsort of 'half way point' taking advantage of advantages of both ... in\nfact, I really wish someone would look at it seriously, since it mimics\na lot of what we do to start with ...\n\nold apache - one parent process (ie. postmaster) with child process (ie.\n postgres) actually handling the work\n\nnew apache - one parent process (ie. postmaster) with child processes (ie.\n postgres) actually handling the work, with a twist .. each\n child process can handle x threaded processes\n\nso, in a heavily loaded web server, you might see 10 httpd processes\nrunning, each of which handling 15 threaded connections ...\n\neven getting away from multiple db connections per child process, I could\nsee some other areas where multi-threading could be useful, assuming that\nmy limited knowledge of threading is remotely correct ...
a big/cool one\ncould be:\n\ndistributed/clustered databases\n\n\ta database could be setup on multiple servers, where the tables\nare created as 'CREATE TABLE newtable ON SERVER serverb', so that when you\nconnect to that table, the child process knows to auto-matically establish\nconnections to the remote servers to pull data in\n\n\tthis would also work for inter-database queries, that several ppl\nin the past have asked for\n\n\n\n\n", "msg_date": "Fri, 18 Oct 2002 00:18:19 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Let me add one more thing on this \"thread\". This is one email in a long\n> list of \"Oh, gee, you aren't using that wizz-bang new\n> sync/thread/aio/raid/raw feature\" discussion where someone shows up and\n> wants to know why. Does anyone know how to address these, efficiently?\n\nSimple: respond to 'em all with a one-line answer: \"convince us why we\nshould use it\". The burden of proof always seems to fall on the wrong\nend in these discussions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 23:20:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading " }, { "msg_contents": "On Wed, 16 Oct 2002, Anuradha Ratnaweera wrote:\n\n> On Wed, Oct 16, 2002 at 01:25:23AM -0400, Bruce Momjian wrote:\n> > Anuradha Ratnaweera wrote:\n> >\n> > > ... what I want to know is whether multithreading is likely to get\n> > > into in postgresql, say somewhere in 8.x, or even in 9.x?\n> >\n> > It may be optional some day, most likely for Win32 at first, but we see\n> > little value to it on most other platforms; of course, we may be wrong.\n>\n> In that case, I wonder if it is worth folking a new project to add\n> threading support to the backend? 
Of course, keeping in sync with the\n> original would be lot of work.\n\nActually, if you go through the archives, there has been talk about what\nwould have to be done to the main source tree towards getting threading\nincluded ... as well as a lengthy discussion about the steps involved.\n\nthe first and foremost issue that needs to be addressed is cleaning up the\nglobal variables that are used throughout ...\n\n", "msg_date": "Fri, 18 Oct 2002 00:21:59 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Thu, 2002-10-17 at 22:20, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Let me add one more thing on this \"thread\". This is one email in a long\n> > list of \"Oh, gee, you aren't using that wizz-bang new\n> > sync/thread/aio/raid/raw feature\" discussion where someone shows up and\n> > wants to know why. Does anyone know how to address these, efficiently?\n> \n> Simple: respond to 'em all with a one-line answer: \"convince us why we\n> should use it\". The burden of proof always seems to fall on the wrong\n> end in these discussions.\n> \n> \t\t\tregards, tom lane\n\n\nThat may be easier said that done. If you don't know what the\nobjections are, it's hard to argue your case. If you do know and\nunderstand the objections, chances are you already know the code very\nwell and/or have the mailing lists for a very long time. This basically\nmeans, you don't want to hear from anyone unless they are \"one\" with the\ncode. That seems and sounds very anti-open source.\n\nAfter it's all said and done, I think you guys are barking up the wrong\ntree. Open Source is all about sharing ideas. Many times I've seen\nideas expressed here that were not exact hits yet help facilitate\ndiscussion, understanding on the topics in general and in some cases may\neven spur other ideas or associated code fixes/improvements. 
When I\nfirst started on this list, I was scolded rather harshly for not asking\nall of my questions on the list. Originally, I was told to ask\nreasonable questions so that everyone can learn. Now, it seems, that\npeople don't want to answer questions at all as it's bothering the\ndevelopers.\n\nCommonly asked items, such as threading, seems like they are being\naddressed rather well without core developer participation. Right now,\nI'm not seeing any down sides to what's currently in place. If the core\ndevelopers still feel like they are spending more time then they like,\nthen perhaps those that following the mailing list can step forward a\nlittle more to address general questions and defer when needed. The\ntopic, such as threading, was previously addressed yet people still\nfollowed up on the topic. Perhaps those that don't want to be bothered\nshould allow more time for others to address the topic and leave it\nalone once it has been addressed. That alone seems like it would be a\nhuge time saver for the developers and a better use of resources.\n\n\nGreg", "msg_date": "18 Oct 2002 08:47:56 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Greg Copeland <greg@copelandconsulting.net> writes:\n> On Thu, 2002-10-17 at 22:20, Tom Lane wrote:\n>> Simple: respond to 'em all with a one-line answer: \"convince us why we\n>> should use it\". The burden of proof always seems to fall on the wrong\n>> end in these discussions.\n\n> ... Now, it seems, that\n> people don't want to answer questions at all as it's bothering the\n> developers.\n\nNot at all. But rehashing issues that have been talked out repeatedly\nis starting to bug some of us ;-). 
Perhaps the correct \"standard\nanswer\" is more like \"this has been discussed before, please read the\nlist archives\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Oct 2002 10:28:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading " }, { "msg_contents": "Tom Lane wrote:\n> Greg Copeland <greg@copelandconsulting.net> writes:\n> > On Thu, 2002-10-17 at 22:20, Tom Lane wrote:\n> >> Simple: respond to 'em all with a one-line answer: \"convince us why we\n> >> should use it\". The burden of proof always seems to fall on the wrong\n> >> end in these discussions.\n> \n> > ... Now, it seems, that\n> > people don't want to answer questions at all as it's bothering the\n> > developers.\n> \n> Not at all. But rehashing issues that have been talked out repeatedly\n> is starting to bug some of us ;-). Perhaps the correct \"standard\n> answer\" is more like \"this has been discussed before, please read the\n> list archives\".\n\nI need to add something to the developers FAQ on this. I will do it\nsoon.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 18 Oct 2002 11:39:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Fri, 2002-10-18 at 09:28, Tom Lane wrote:\n> Greg Copeland <greg@copelandconsulting.net> writes:\n> > On Thu, 2002-10-17 at 22:20, Tom Lane wrote:\n> >> Simple: respond to 'em all with a one-line answer: \"convince us why we\n> >> should use it\". The burden of proof always seems to fall on the wrong\n> >> end in these discussions.\n> \n> > ... Now, it seems, that\n> > people don't want to answer questions at all as it's bothering the\n> > developers.\n> \n> Not at all. 
But rehashing issues that have been talked out repeatedly\n> is starting to bug some of us ;-). Perhaps the correct \"standard\n> answer\" is more like \"this has been discussed before, please read the\n> list archives\".\n\nI agree. That sounds like a much more reasonable response. In fact, if\nyou were to simply let the fledglings respond, it would completely take\nyou guys out of the loop.\n\nPerhaps something like a Wiki or FAQ-O-Matic can be added whereby, the\nuser base can help maintain it? That would seemingly help take some\nload off of Bruce too.\n\nGreg\n\n\n", "msg_date": "18 Oct 2002 10:51:08 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "\n On the recurring debate of threading vs. forking, I was giving it a fwe\nthoughts a few days ago, particularly with concern to Linux's memory model.\n\n On IA32 platforms with over 4 gigs of memory, any one process can only\n\"see\" up to 3 or 4 gigs of that. Having each postmaster fork off as a new\nprocess obviously would allow a person to utilize very copious quantities of\nmemory, assuming that (a) they were dealing with concurrent PG sessions, and\n(b) PG had reason to use the memory.\n\n I'm not entirely clear on threading in Linux - would it provide the same\nbenefits, or would it suddenly lock you into a 3-gig memory space?\n\nsteve\n\n\n", "msg_date": "Fri, 18 Oct 2002 13:11:16 -0600", "msg_from": "\"Steve Wolfe\" <nw@codon.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "On Fri, Oct 18, 2002 at 10:28:38AM -0400, Tom Lane wrote:\n> Greg Copeland <greg@copelandconsulting.net> writes:\n> > On Thu, 2002-10-17 at 22:20, Tom Lane wrote:\n> >> Simple: respond to 'em all with a one-line answer: \"convince us why we\n> >> should use it\". The burden of proof always seems to fall on the wrong\n> >> end in these discussions.\n> \n> > ... 
Now, it seems, that\n> > people don't want to answer questions at all as it's bothering the\n> > developers.\n> \n> Not at all. But rehashing issues that have been talked out repeatedly\n> is starting to bug some of us ;-). Perhaps the correct \"standard\n> answer\" is more like \"this has been discussed before, please read the\n> list archives\".\n\nLet me explain my posting which started this `thread':\n\n- The developer's FAQ section 1.9 explains why PostgreSQL doesn't use\n threads (and many times it has been discussed on the list).\n\n- The TODO list has an item `Experiment with multi-threaded backend' and\n points to a mailing list discussion about the implementation by Myron\n Scott. His final comment is that he didn't `gain much performance'\n and `ended up with some pretty unmanagable code'. He also says that\n he wouldn't `personally try this again ... but there probably was a\n better way'.\n\n- I was going through the TODO list, and was wondering if I should try\n on this. But before doing that, naturally, I wanted to figure out if\n any of the core developers themselves have any plans of doing it.\n\nNow, I am trying hard to figure out why this `are you going to do this?\notherwise I can try it', type posting was not differentiated from\nnumerous `why don't YOU implement this feature' type postings ;)\n\n\tAnuradha\n\n-- \n\nDebian GNU/Linux (kernel 2.4.18-xfs-1.1)\n\n\"Life is too important to take seriously.\"\n\t\t-- Corky Siegel\n\n", "msg_date": "Sat, 19 Oct 2002 12:58:44 +0600", "msg_from": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk>", "msg_from_op": true, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "Anuradha Ratnaweera <anuradha@lklug.pdn.ac.lk> writes:\n> Let me explain my posting which started this `thread':\n\n> - The developer's FAQ section 1.9 explains why PostgreSQL doesn't use\n> threads (and many times it has been discussed on the list).\n\n> - The TODO list has an item `Experiment with multi-threaded 
backend' and\n> points to a mailing list discussion about the implementation by Myron\n> Scott. His final comment is that he didn't `gain much performance'\n> and `ended up with some pretty unmanagable code'. He also says that\n> he wouldn't `personally try this again ... but there probably was a\n> better way'.\n\n> - I was going through the TODO list, and was wondering if I should try\n> on this. But before doing that, naturally, I wanted to figure out if\n> any of the core developers themselves have any plans of doing it.\n\n> Now, I am trying hard to figure out why this `are you going to do this?\n> otherwise I can try it', type posting was not differentiated from\n> numerous `why don't YOU implement this feature' type postings ;)\n\nWell, if you'd actually said the above, we'd probably have replied to\nthe effect of \"we still think it's an unpromising project, but try it\nif you like\". By my reading, your earliest postings in this thread\nshowed no sign of any familiarity at all with the history:\n\nhttp://archives.postgresql.org/pgsql-hackers/2002-10/msg00704.php\nhttp://archives.postgresql.org/pgsql-hackers/2002-10/msg00707.php\nhttp://archives.postgresql.org/pgsql-hackers/2002-10/msg00711.php\n\nand so you got the sort of response that's usually given to clueless\nnewbies...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 19 Oct 2002 12:30:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading " }, { "msg_contents": "On 18 Oct 2002 at 13:11, Steve Wolfe wrote:\n\n> \n> On the recurring debate of threading vs. forking, I was giving it a fwe\n> thoughts a few days ago, particularly with concern to Linux's memory model.\n> \n> On IA32 platforms with over 4 gigs of memory, any one process can only\n> \"see\" up to 3 or 4 gigs of that. 
Having each postmaster fork off as a new\n> process obviously would allow a person to utilize very copious quantities of\n> memory, assuming that (a) they were dealing with concurrent PG sessions, and\n> (b) PG had reason to use the memory.\n\nWell IIRC PG can not use more than 2Gigs of memory or 250K shared buffers \n(Unless you alter the buffer size itself). This does not become an issue in \nitself.\n\n \n> I'm not entirely clear on threading in Linux - would it provide the same\n> benefits, or would it suddenly lock you into a 3-gig memory space?\n\nWell, if you need to allocate 3Gig of memory to single process like postgresql, \nit's time to get a 64bit CPU. IIRC linux run on quite a few of them.\n\nHTH\nBye\n Shridhar\n\n--\nQOTD:\t\"Oh, no, no... I'm not beautiful. Just very, very pretty.\"\n\n", "msg_date": "Mon, 21 Oct 2002 19:00:08 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "\"Steve Wolfe\" <nw@codon.com> writes:\n\n> On the recurring debate of threading vs. forking, I was giving it a fwe\n> thoughts a few days ago, particularly with concern to Linux's memory model.\n> \n> On IA32 platforms with over 4 gigs of memory, any one process can only\n> \"see\" up to 3 or 4 gigs of that. 
Having each postmaster fork off as a new\n> process obviously would allow a person to utilize very copious quantities of\n> memory, assuming that (a) they were dealing with concurrent PG sessions, and\n> (b) PG had reason to use the memory.\n> \n> I'm not entirely clear on threading in Linux - would it provide the same\n> benefits, or would it suddenly lock you into a 3-gig memory space?\n\nLinux threads are basically processes that share the same VM space, so\nyou'd be limited to 3GB or whatever, since that's what a VM space can\n\"see\".\n\n-Doug\n", "msg_date": "21 Oct 2002 11:27:19 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "\nThis in many ways is a bogus argument in that 1) postgresql runs on more \nthen just Linux and 2) amount of memmory that can be addressed by a \nprocess is tunable up to the point that it reaches a hardware limitation.\n\nIt also should be noted that when a process reaches such a size that it \nbetter have a good reason. Now let us do a gedanken experiment and say \nyou do have a good reason - fork a couple of these and your machine will \nthrash like nothing else ... also that whole hardware limitation will come \ninto play more sooner then later ... \n\n\nOn 21 Oct 2002, Doug McNaught wrote:\n\n> \"Steve Wolfe\" <nw@codon.com> writes:\n> \n> > On the recurring debate of threading vs. forking, I was giving it a fwe\n> > thoughts a few days ago, particularly with concern to Linux's memory model.\n> > \n> > On IA32 platforms with over 4 gigs of memory, any one process can only\n> > \"see\" up to 3 or 4 gigs of that. 
Having each postmaster fork off as a new\n> > process obviously would allow a person to utilize very copious quantities of\n> > memory, assuming that (a) they were dealing with concurrent PG sessions, and\n> > (b) PG had reason to use the memory.\n> > \n> > I'm not entirely clear on threading in Linux - would it provide the same\n> > benefits, or would it suddenly lock you into a 3-gig memory space?\n> \n> Linux threads are basically processes that share the same VM space, so\n> you'd be limited to 3GB or whatever, since that's what a VM space can\n> \"see\".\n> \n> -Doug\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n//========================================================\\\\\n|| D. Hageman <dhageman@dracken.com> ||\n\\\\========================================================//\n\n", "msg_date": "Mon, 21 Oct 2002 11:02:16 -0500 (CDT)", "msg_from": "\"D. Hageman\" <dhageman@dracken.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "\"D. Hageman\" <dhageman@dracken.com> writes:\n\n> This in many ways is a bogus argument in that 1) postgresql runs on more \n> then just Linux and 2) amount of memmory that can be addressed by a \n> process is tunable up to the point that it reaches a hardware limitation.\n\n1) The OP specifically asked about Linux threads.\n2) True up to a point--Linux (and most other Unices) reserve some\n part of the VM address space for the kernel. On 64-bit this is a\n non-issue, on 32-bit it's quite important now that you can put 4+GB\n in a machine. \n\n> It also should be noted that when a process reaches such a size that it \n> better have a good reason. 
Now let us do a gedanken experiment and say \n> you do have a good reason - fork a couple of these and your machine will \n> thrash like nothing else ... also that whole hardware limitation will come \n> into play more sooner then later ... \n\nTrue enough. The only real use I can see for gobs of memory on a\n32-bit PAE machine with PG is to give each process its own big hunk of\n'sortmem' for doing large sorts. If you have 64 GB in the machine\nsetting 'sortmem' to 1GB or so starts to look reasonable...\n\n-Doug\n", "msg_date": "21 Oct 2002 12:25:16 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" }, { "msg_contents": "\nI have updated the developers FAQ item 1.9 to address this:\n\n\thttp://developer.postgresql.org/readtext.php?src/FAQ/FAQ_DEV.html+Developers-FAQ\n\n---------------------------------------------------------------------------\n\nAnuradha Ratnaweera wrote:\n> On Wed, Oct 16, 2002 at 01:51:28AM -0400, Bruce Momjian wrote:\n> > \n> > Let me add one more thing on this \"thread\". This is one email in a\n> > long list of \"Oh, gee, you aren't using that wizz-bang new\n> > sync/thread/aio/raid/raw feature\" discussion where someone shows up\n> > and wants to know why. Does anyone know how to address these,\n> > efficiently?\n> \n> If somebody pops up asks such dumb questions without even looking at the\n> FAQ, it is bad, if not idiotic, because it takes useful time away from\n> the developers.\n> \n> But my question was not `why don't you implement this feature?`, but `do\n> you have plans to implement this feature in the future?', and in the\n> open source spirit of `if something is not there, go implement it\n> yourself - without troubling developers' ;)\n> \n> Also, I have read the section 1.9 of the developers FAQ (Why don't we\n> use threads in the backend?) 
long, long ago.\n> \n> > If we discuss it, it ends up causing a lot of effort on our part for\n> > the requestor to finally say, \"Oh, gee, I didn't realize that.\"\n> \n> Please don't. See the \"NB\" at end of my first mail of this thread.\n> \n> \tAnuradha\n> \n> -- \n> \n> Debian GNU/Linux (kernel 2.4.18-xfs-1.1)\n> \n> QOTD:\n> \t\"I'll listen to reason when it comes out on CD.\"\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 5 Nov 2002 17:11:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql and multithreading" } ]
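The fork-versus-threads tradeoff that runs through this thread — each forked backend gets its own (copy-on-write) address space, while threads all share one VM space — can be demonstrated directly. Below is a minimal sketch of that semantic difference, written in Python for brevity rather than in the backend's C (POSIX-only, since it uses `os.fork`); it is an illustration of process/thread memory semantics, not PostgreSQL code:

```python
import os
import threading

counter = 0

def thread_increment():
    # A thread shares the creating process's address space,
    # so this write is visible to the main thread after join().
    global counter
    counter += 1

def forked_child_increment():
    # fork() gives the child a copy-on-write duplicate of the parent's
    # memory: writes made in the child never reach the parent.
    global counter
    pid = os.fork()
    if pid == 0:
        counter += 100
        os._exit(0)      # child exits; its private copy of counter dies with it
    os.waitpid(pid, 0)   # parent waits for the child to finish

t = threading.Thread(target=thread_increment)
t.start()
t.join()
assert counter == 1      # the thread's write was shared

forked_child_increment()
assert counter == 1      # the forked child's write stayed in the child
```

This is also why the 32-bit memory discussion above comes out the way it does: one process (and hence all its threads) is limited to a single 3-4 GB VM space, while many forked backends can collectively use more physical memory than any one of them can address.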
[ { "msg_contents": "\n> > > > To work\n> > > > around this you can use explicit cursors (see the DECLARE CURSOR,\n> > > > FETCH, and MOVE sql commands for postgres).\n\nI'm unable to get this to work using the default distribution JDBC driver. \n(7.2). Here's a code snippet\n\n conn.setAutoCommit(false) ;\n stmt.execute(\"BEGIN\") ;\n stmt.execute(\"DECLARE mysursor CURSOR FOR SELECT icol FROM mtable\") ;\n ResultSet rs = null ;\n if (stmt.execute(\"FETCH 10000 IN mycursor\"))\n\trs = stmt.getResultSet() ;\n\nThe FETCH statement returns an update count of 1, but no ResultSet.\nIf I try executeQuery, a \"no rows found\" exception is thrown.\n\nEquivalent code in the C library interface works just fine.\n\nI need a workaround, because default ResultSet processing in the JDBC\ndriver (and also the jxDBCon driver) pretty much blow out the memory \nof the JVM. \n", "msg_date": "Wed, 16 Oct 2002 07:55:07 -0400", "msg_from": "Dave Tenny <tenny@attbi.com>", "msg_from_op": true, "msg_subject": "Re: [JDBC] Out of memory error on huge resultset" }, { "msg_contents": "This code work for me :\n\tConnection db = DriverManager.getConnection(url,user,passwd);\n\tPreparedStatement st = db.prepareStatement(\"begin;declare c1 cursor for \nselect * from a\");\n\tst.execute();\n\tst = db.prepareStatement(\"fetch 100 in c1\");\n\tResultSet rs = st.executeQuery();\n\t//rs.setFetchSize(100);\n\twhile (rs.next() ) {\n\t\ts = rs.getString(1);\n\t\tSystem.out.println(s);\n\t}\n\tst = db.prepareStatement(\"commit\");\n\tst.execute();\n\tst.close();\n\tdb.close();\n\nregards\nHaris Peco\nOn Wednesday 16 October 2002 01:55 pm, Dave Tenny wrote:\n> > > > > To work\n> > > > > around this you can use explicit cursors (see the DECLARE CURSOR,\n> > > > > FETCH, and MOVE sql commands for postgres).\n>\n> I'm unable to get this to work using the default distribution JDBC driver.\n> (7.2). 
Here's a code snippet\n>\n> conn.setAutoCommit(false) ;\n> stmt.execute(\"BEGIN\") ;\n> stmt.execute(\"DECLARE mysursor CURSOR FOR SELECT icol FROM mtable\") ;\n> ResultSet rs = null ;\n> if (stmt.execute(\"FETCH 10000 IN mycursor\"))\n> \trs = stmt.getResultSet() ;\n>\n> The FETCH statement returns an update count of 1, but no ResultSet.\n> If I try executeQuery, a \"no rows found\" exception is thrown.\n>\n> Equivalent code in the C library interface works just fine.\n>\n> I need a workaround, because default ResultSet processing in the JDBC\n> driver (and also the jxDBCon driver) pretty much blow out the memory\n> of the JVM.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Wed, 16 Oct 2002 14:40:07 +0200", "msg_from": "snpe <snpe@snpe.co.yu>", "msg_from_op": false, "msg_subject": "Re: [JDBC] Out of memory error on huge resultset" } ]
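The working pattern in snpe's example — `DECLARE ... CURSOR` inside a transaction, then repeated `FETCH n IN c1` calls until a short or empty batch comes back — keeps only one batch of rows in client memory at a time instead of the whole result set. A sketch of that control flow (in Python rather than Java, and with a hypothetical `fetch_batch()` stub standing in for the real `FETCH 100 IN c1` server round trip, since no live server is assumed here):

```python
def fetch_batch(cursor_rows, batch_size):
    """Stub for executing 'FETCH <batch_size> IN c1': returns up to
    batch_size rows and removes them from the server-side cursor."""
    batch = cursor_rows[:batch_size]
    del cursor_rows[:batch_size]
    return batch

def drain_cursor(total_rows, batch_size=100):
    # Pretend server-side result set held open by the cursor.
    remaining = list(range(total_rows))
    processed = 0
    while True:
        batch = fetch_batch(remaining, batch_size)
        if not batch:            # empty FETCH: cursor is exhausted
            break
        processed += len(batch)  # the client ever holds only this batch
    return processed

assert drain_cursor(250) == 250  # consumed as batches of 100, 100, 50
```

The key point is the loop's termination condition: the client never asks for "everything", so memory use is bounded by `batch_size` regardless of how large the underlying result is.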
[ { "msg_contents": "\n Hi,\n\n I have SQL query:\n\n SELECT * FROM ii WHERE i1='a' AND i2='b';\n\n There're indexes on i1 and i2. I know best solution is use one\n index on both (i1, i2).\n\n The EXPLAIN command show that optimalizer wants to use one index:\n\ntest=# explain SELECT * FROM ii WHERE i1='a' AND i1='b';\n QUERY PLAN \n---------------------------------------------------------------------------------\n Index Scan using i1 on ii (cost=0.00..4.83 rows=1 width=24)\n Index Cond: ((i1 = 'a'::character varying) AND (i1 = 'b'::character varying))\n\n\n It's right and I undererstand why not use both indexes. But I talked\nabout it with one Oracle user and he said me Oracle knows use both indexes \nand results from both index scans are mergeted to final result -- this is maybe \nused if full access to table (too big rows?) is more expensive than 2x index \nscan and final merge. Is in PG possible something like this? And within \nquery/table? I know about it in JOIN (and subselect maybe) only, but in \nthe \"standard\" WHERE?\n\ntest=# explain SELECT * FROM ii a JOIN ii b ON a.i1=b.i2;\n QUERY PLAN \n--------------------------------------------------------------------------\n Merge Join (cost=0.00..171.50 rows=5000 width=48)\n Merge Cond: (\"outer\".i1 = \"inner\".i2)\n -> Index Scan using i1 on ii a (cost=0.00..52.00 rows=1000 width=24)\n -> Index Scan using i2 on ii b (cost=0.00..52.00 rows=1000 width=24)\n\n\n Thanks,\n\n Karel\n \n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 16 Oct 2002 15:19:28 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "index theory" }, { "msg_contents": "On Wed, 2002-10-16 at 09:19, Karel Zak wrote:\n> \n> Hi,\n> \n> I have SQL query:\n> \n> SELECT * FROM ii WHERE i1='a' AND i2='b';\n> \n> There're indexes on i1 and i2. 
I know best solution is use one\n> index on both (i1, i2).\n> \n> The EXPLAIN command show that optimalizer wants to use one index:\n> \n> test=# explain SELECT * FROM ii WHERE i1='a' AND i1='b';\n> QUERY PLAN \n> ---------------------------------------------------------------------------------\n> Index Scan using i1 on ii (cost=0.00..4.83 rows=1 width=24)\n> Index Cond: ((i1 = 'a'::character varying) AND (i1 = 'b'::character varying))\n\nI think you typo'd. i1='a' AND i1='b' turns into 'a' = 'b' which\ncertainly isn't true in any alphabets I know of.\n\n-- \n Rod Taylor\n\n", "msg_date": "16 Oct 2002 09:25:37 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: index theory" }, { "msg_contents": "On Wed, Oct 16, 2002 at 09:25:37AM -0400, Rod Taylor wrote:\n> On Wed, 2002-10-16 at 09:19, Karel Zak wrote:\n> > \n> > Hi,\n> > \n> > I have SQL query:\n> > \n> > SELECT * FROM ii WHERE i1='a' AND i2='b';\n> > \n> > There're indexes on i1 and i2. I know best solution is use one\n> > index on both (i1, i2).\n> > \n> > The EXPLAIN command show that optimalizer wants to use one index:\n> > \n> > test=# explain SELECT * FROM ii WHERE i1='a' AND i1='b';\n> > QUERY PLAN \n> > ---------------------------------------------------------------------------------\n> > Index Scan using i1 on ii (cost=0.00..4.83 rows=1 width=24)\n> > Index Cond: ((i1 = 'a'::character varying) AND (i1 = 'b'::character varying))\n> \n> I think you typo'd. i1='a' AND i1='b' turns into 'a' = 'b' which\n> certainly isn't true in any alphabets I know of.\n\n Oh... sorry, right is:\n\ntest=# explain SELECT * FROM ii WHERE i1='a' AND i2='b';\n QUERY PLAN \n---------------------------------------------------------------\n Index Scan using i2 on ii (cost=0.00..17.08 rows=1 width=24)\n Index Cond: (i2 = 'b'::character varying)\n Filter: (i1 = 'a'::character varying)\n\n The query is not important ... it's dummy example only. 
I think about two \nindexes on one table for access to table.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 16 Oct 2002 15:31:14 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: index theory" }, { "msg_contents": "Karel Zak kirjutas K, 16.10.2002 kell 15:19:\n> \n> Hi,\n> \n> I have SQL query:\n> \n> SELECT * FROM ii WHERE i1='a' AND i2='b';\n> \n> There're indexes on i1 and i2. I know best solution is use one\n> index on both (i1, i2).\n> \n> The EXPLAIN command show that optimalizer wants to use one index:\n> \n> test=# explain SELECT * FROM ii WHERE i1='a' AND i1='b';\n> QUERY PLAN \n> ---------------------------------------------------------------------------------\n> Index Scan using i1 on ii (cost=0.00..4.83 rows=1 width=24)\n> Index Cond: ((i1 = 'a'::character varying) AND (i1 = 'b'::character varying))\n> \n> \n> It's right and I undererstand why not use both indexes. But I talked\n> about it with one Oracle user and he said me Oracle knows use both indexes \n> and results from both index scans are mergeted to final result -- this is maybe \n> used if full access to table (too big rows?) is more expensive than 2x index \n> scan and final merge. Is in PG possible something like this?\n\nThere has been some talk about using bitmaps generated from indexes as\nan intermediate step.\n\n----------\nHannu\n", "msg_date": "16 Oct 2002 17:47:00 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: index theory" } ]
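The technique Karel's Oracle contact describes — and the "bitmaps generated from indexes" idea Hannu mentions — amounts to an index-AND: each single-column index scan yields the set of row locations matching its own condition, the sets are intersected, and only the surviving rows are fetched from the table. A toy sketch of the combining step (Python, with hypothetical `index_scan`/`bitmap_and` helpers; this illustrates the idea, not PostgreSQL's executor):

```python
def index_scan(index, key):
    """Stub index scan: return the sorted row ids (TIDs) whose
    indexed value equals key."""
    return sorted(index.get(key, []))

def bitmap_and(tids_a, tids_b):
    """Intersect two sorted TID lists with a merge walk, the way two
    per-index bitmaps would be ANDed before visiting the heap."""
    out, i, j = [], 0, 0
    while i < len(tids_a) and j < len(tids_b):
        if tids_a[i] == tids_b[j]:
            out.append(tids_a[i])
            i += 1
            j += 1
        elif tids_a[i] < tids_b[j]:
            i += 1
        else:
            j += 1
    return out

# Toy indexes on columns i1 and i2 of the same table.
idx_i1 = {'a': [2, 5, 9, 14]}
idx_i2 = {'b': [5, 7, 14, 20]}

matches = bitmap_and(index_scan(idx_i1, 'a'), index_scan(idx_i2, 'b'))
assert matches == [5, 14]   # only these heap rows need to be visited
```

Whether this beats a single index scan plus a filter (the plan shown above) depends on how selective each condition is on its own versus how expensive the heap visits are.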
[ { "msg_contents": "\nRight now we assume \\XXX is octal. We could support \\x as hex because\n\\x isn't any special backslash character. However, no one has ever\nasked for this. Does anyone else think this would be benficial?\n\n---------------------------------------------------------------------------\n\nIgor Georgiev wrote:\n> 1. Why i do this:\n> I try to migrate a database with a 200 tables from Sybase SQL Anywhere to PostgreSQL,\n> but SQL Anywhere escapes special characters like a HEX values ( like \\x0D \\x2C ..... ).\n> PostgreSQL COPY FROM recognize only OCT values ( lie \\001 ... )\n> 2. How-to it' easy :)))\n> 2.1 - Open $UrSourceDir/src/backend/commands/copy.c \n> 2.2 - Add #include <ctype.h> in te begining\n> 2.3 find function \n> static char *\n> CopyReadAttribute(FILE *fp, bool *isnull, char *delim, int *newline, char *null_print)\n> /*----------------*/\n> /*-------- Add this code before it --------*/\n> static int \n> HEXVALUE( int c )\n> {\n> if (isdigit(c))\n> {\n> c -= '0';\n> }\n> else\n> {\n> if (islower(c))\n> c= c-'a'+10;\n> else\n> c= c-'A'+10;\n> }\n> return(c);\n> }\n> 2.4 in body of CopyReadAttribute \n> find this code and modify it like this\n> if (c == '\\\\')\n> {\n> c = CopyGetChar(fp);\n> if (c == EOF)\n> goto endOfFile;\n> switch (c)\n> {\n> /*------ Here is my additional code ------*/\n> case 'x':\n> case 'X':\n> {\n> int val;\n> CopyDonePeek(fp, c, true /*pick up*/); /* Get x always */\n> c = CopyPeekChar(fp); /* Get next */\n> if (isxdigit(c))\n> {\n> val = HEXVALUE(c);\n> c = CopyPeekChar(fp);\n> if (isxdigit(c))\n> {\n> val = (val << 4) + HEXVALUE(c);\n> CopyDonePeek(fp, c, true /*pick up*/);\n> }\n> else\n> {\n> if (c == EOF)\n> goto endOfFile;\n> CopyDonePeek(fp, c, false /*put back*/);\n> }\n> }\n> else\n> {\n> if (c == EOF)\n> goto endOfFile;\n> CopyDonePeek(fp, c, false /*put back*/);\n> }\n> c = val;\n> }\n> break;\n> /*------ End of my additional code ------*/\n> case '0':\n> case '1':\n> case '2':\n> case '3':\n> 
case '4':\n> case '5':\n> case '6':\n> case '7':\n> {\n> int val;\n> val = OCTVALUE(c);\n> 2.4 he he now make , make install ....\n> 3. An idea to developers : maybe u include this addition to COPY in future releases\n> 10x\n> \n> P.S. Excuse me for my English ( i'm better in C :)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 12:55:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: \"COPY FROM\" recognize \\xDD sequence - addition to copy.c" }, { "msg_contents": "1. Why i do this:\n I try to migrate a database with a 200 tables from Sybase SQL Anywhere to PostgreSQL,\n but SQL Anywhere escapes special characters like a HEX values ( like \\x0D \\x2C ..... ).\n PostgreSQL COPY FROM recognize only OCT values ( lie \\001 ... )\n2. How-to it' easy :)))\n 2.1 - Open $UrSourceDir/src/backend/commands/copy.c \n 2.2 - Add #include <ctype.h> in te begining\n 2.3 find function \n static char *\n CopyReadAttribute(FILE *fp, bool *isnull, char *delim, int *newline, char *null_print)\n /*----------------*/\n /*-------- Add this code before it --------*/\n static int \n HEXVALUE( int c )\n {\n if (isdigit(c))\n {\n c -= '0';\n }\n else\n {\n if (islower(c))\n c= c-'a'+10;\n else\n c= c-'A'+10;\n }\n return(c);\n }\n 2.4 in body of CopyReadAttribute \n find this code and modify it like this\n if (c == '\\\\')\n {\n c = CopyGetChar(fp);\n if (c == EOF)\n goto endOfFile;\n switch (c)\n {\n /*------ Here is my additional code ------*/\n case 'x':\n case 'X':\n {\n int val;\n CopyDonePeek(fp, c, true /*pick up*/); /* Get x always */\n c = CopyPeekChar(fp); /* Get next */\n if (isxdigit(c))\n {\n val = HEXVALUE(c);\n c = CopyPeekChar(fp);\n if (isxdigit(c))\n {\n val = (val << 4) + HEXVALUE(c);\n CopyDonePeek(fp, c, true /*pick 
up*/);\n }\n else\n {\n if (c == EOF)\n goto endOfFile;\n CopyDonePeek(fp, c, false /*put back*/);\n }\n }\n else\n {\n if (c == EOF)\n goto endOfFile;\n CopyDonePeek(fp, c, false /*put back*/);\n }\n c = val;\n }\n break;\n /*------ End of my additional code ------*/\n case '0':\n case '1':\n case '2':\n case '3':\n case '4':\n case '5':\n case '6':\n case '7':\n {\n int val;\n val = OCTVALUE(c);\n 2.4 he he now make , make install ....\n3. An idea to developers : maybe u include this addition to COPY in future releases\n 10x\n\nP.S. Excuse me for my English ( i'm better in C :)\n", "msg_date": "Wed, 16 Oct 2002 19:48:05 +0200", "msg_from": "\"Igor Georgiev\" <gory@alphasoft-bg.com>", "msg_from_op": false, "msg_subject": "\"COPY FROM\" recognize \\xDD sequence - addition to copy.c & idea 4\n\tdevelopers" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Right now we assume \\XXX is octal. 
We could support \\x as hex because\n> \\x isn't any special backslash character. However, no one has ever\n> asked for this. Does anyone else think this would be benficial?\n\nWell, it seems pretty localized and harmless. If it lets us import\nSybase dumps, doesn't that boost our plans for world domination? ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Oct 2002 23:25:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"COPY FROM\" recognize \\xDD sequence - addition to copy.c " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Right now we assume \\XXX is octal. We could support \\x as hex because\n> > \\x isn't any special backslash character. However, no one has ever\n> > asked for this. Does anyone else think this would be benficial?\n> \n> Well, it seems pretty localized and harmless. If it lets us import\n> Sybase dumps, doesn't that boost our plans for world domination? ;-)\n\nYes, it does. I will add it to TODO:\n\n\t o Allow copy to understand \\x as hex\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 16 Oct 2002 23:58:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: \"COPY FROM\" recognize \\xDD sequence - addition to copy.c" }, { "msg_contents": "On Wed, 16 Oct 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Right now we assume \\XXX is octal. We could support \\x as hex because\n> > \\x isn't any special backslash character. However, no one has ever\n> > asked for this. Does anyone else think this would be benficial?\n> \n> Well, it seems pretty localized and harmless. If it lets us import\n> Sybase dumps, doesn't that boost our plans for world domination? 
;-)\n\nCan we add:\n\n\t\to\tWorld Domination\n\nTo the TODO list?\n\nGavin\n\n", "msg_date": "Thu, 17 Oct 2002 14:46:19 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: \"COPY FROM\" recognize \\xDD sequence - addition to" }, { "msg_contents": "Gavin Sherry wrote:\n> On Wed, 16 Oct 2002, Tom Lane wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Right now we assume \\XXX is octal. We could support \\x as hex because\n> > > \\x isn't any special backslash character. However, no one has ever\n> > > asked for this. Does anyone else think this would be benficial?\n> > \n> > Well, it seems pretty localized and harmless. If it lets us import\n> > Sybase dumps, doesn't that boost our plans for world domination? ;-)\n> \n> Can we add:\n> \n> \t\to\tWorld Domination\n> \n> To the TODO list?\n\nThat's already in there, but in subliminal type.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 17 Oct 2002 14:55:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: \"COPY FROM\" recognize \\xDD sequence - addition to" } ]
[ { "msg_contents": "1. Why i do this:\n I try to migrate a database with a 200 tables from Sybase SQL Anywhere to PostgreSQL,\n but SQL Anywhere escapes special characters like a HEX values ( like \\x0D \\x2C ..... ).\n PostgreSQL COPY FROM recognize only OCT values ( lie \\001 ... )\n2. How-to it' easy :)))\n 2.1 - Open $UrSourceDir/src/backend/commands/copy.c \n 2.2 - Add #include <ctype.h> in te begining\n 2.3 find function \n static char *\n CopyReadAttribute(FILE *fp, bool *isnull, char *delim, int *newline, char *null_print)\n /*----------------*/\n /*-------- Add this code before it --------*/\n static int \n HEXVALUE( int c )\n {\n if (isdigit(c))\n {\n c -= '0';\n }\n else\n {\n if (islower(c))\n c= c-'a'+10;\n else\n c= c-'A'+10;\n }\n return(c);\n }\n 2.4 in body of CopyReadAttribute \n find this code and modify it like this\n if (c == '\\\\')\n {\n c = CopyGetChar(fp);\n if (c == EOF)\n goto endOfFile;\n switch (c)\n {\n /*------ Here is my additional code ------*/\n case 'x':\n case 'X':\n {\n int val;\n CopyDonePeek(fp, c, true /*pick up*/); /* Get x always */\n c = CopyPeekChar(fp); /* Get next */\n if (isxdigit(c))\n {\n val = HEXVALUE(c);\n c = CopyPeekChar(fp);\n if (isxdigit(c))\n {\n val = (val << 4) + HEXVALUE(c);\n CopyDonePeek(fp, c, true /*pick up*/);\n }\n else\n {\n if (c == EOF)\n goto endOfFile;\n CopyDonePeek(fp, c, false /*put back*/);\n }\n }\n else\n {\n if (c == EOF)\n goto endOfFile;\n CopyDonePeek(fp, c, false /*put back*/);\n }\n c = val;\n }\n break;\n /*------ End of my additional code ------*/\n case '0':\n case '1':\n case '2':\n case '3':\n case '4':\n case '5':\n case '6':\n case '7':\n {\n int val;\n val = OCTVALUE(c);\n 2.4 he he now make , make install ....\n3. An idea to developers : maybe u include this addition to COPY in future releases\n 10x\n\nP.S. Excuse me for my English ( i'm better in C :)\n", "msg_date": "Wed, 16 Oct 2002 19:43:27 +0200", "msg_from": "\"Igor Georgiev\" <gory@alphasoft-bg.com>", "msg_from_op": true, "msg_subject": "\"COPY FROM\" recognize \\xDD sequence - addition to copy.c & idea 4\n\tdevelopers" } ]
[ { "msg_contents": "As of current CVS, PL/Perl doesn't seem to compile against Perl 5.8. I\nget the following compile error:\n\ngcc -O2 -g -fpic -I. -I/usr/lib/perl/5.8.0/CORE -I../../../src/include -c -o plperl.o plperl.c -MMD\nIn file included from /usr/lib/perl/5.8.0/CORE/op.h:480,\n from /usr/lib/perl/5.8.0/CORE/perl.h:2209,\n from plperl.c:61:\n/usr/lib/perl/5.8.0/CORE/reentr.h:602: field `_crypt_struct' has incomplete type\n/usr/lib/perl/5.8.0/CORE/reentr.h:747: confused by earlier errors, bailing out\nmake[3]: *** [plperl.o] Error 1\n\nThis is running GCC 3.2 and Perl 5.8.0 on Debian unstable.\n\nThere's a thread about a similar topic on p5p:\n\n http://archive.develooper.com/perl5-porters@perl.org/msg75480.html\n\nThe thread suggests a trivial fix: adding -D_GNU_SOURCE to the CFLAGS\nfor the affected files. I checked, and this gets PL/Perl to compile\ncorrectly. That doesn't seem like the right fix, though. Does anyone\nhave any comments on how to fix this properly?\n\nRegardless of the solution we choose, I think this needs to be fixed\nbefore 7.3 is released.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "16 Oct 2002 22:40:49 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "PL/Perl and Perl 5.8" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> As of current CVS, PL/Perl doesn't seem to compile against Perl 5.8.\n\nBuilds fine on HPUX 10.20 with Perl 5.8.0 and gcc 2.95.3.\n\n> There's a thread about a similar topic on p5p:\n> http://archive.develooper.com/perl5-porters@perl.org/msg75480.html\n\nThis thread makes it sound like it's Perl's problem not ours ...\n\n> The thread suggests a trivial fix: adding -D_GNU_SOURCE to the CFLAGS\n> for the affected files. I checked, and this gets PL/Perl to compile\n> correctly. 
That doesn't seem like the right fix, though.\n\nIn view of opposing comments like\nhttp://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2002-03/msg01452.html\nI think we should stay out of this. It is not our business to get\nrandom Perl code to compile on random OS installations, and certainly\nnot our business to interject random symbol definitions that might well\nbreak whatever solution the Perl guys themselves decide on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 00:05:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8 " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Neil Conway <neilc@samurai.com> writes:\n> > As of current CVS, PL/Perl doesn't seem to compile against Perl 5.8.\n> \n> Builds fine on HPUX 10.20 with Perl 5.8.0 and gcc 2.95.3.\n\nIt may also depend on the way Perl is configured. I've attached the\noutput of 'perl -V' on my system (using Debian's default Perl\npackages).\n\n> > The thread suggests a trivial fix: adding -D_GNU_SOURCE to the CFLAGS\n> > for the affected files. I checked, and this gets PL/Perl to compile\n> > correctly. That doesn't seem like the right fix, though.\n> \n> In view of opposing comments like\n> http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2002-03/msg01452.html\n> I think we should stay out of this.\n\nWell, I'd read that thread as saying that \"Apache breaks when compiled\nwith -D_GNU_SOURCE\", not claiming that there is something inherently\nwrong with defining _GNU_SOURCE as a fix for the Perl problem.\n\n> It is not our business to get random Perl code to compile on random\n> OS installations, and certainly not our business to interject random\n> symbol definitions that might well break whatever solution the Perl\n> guys themselves decide on.\n\nWell, I'm not happy with defining _GNU_SOURCE, but I don't agree that\njust saying \"it's a Perl problem\" is a good answer. 
That may well be\nthe case, but it doesn't change the fact that a lot of people are\nrunning 5.8.0, and will probably continue to do so during the 7.3\nlifecycle[1]. We work around braindamage on other systems -- strictely\nspeaking, we could say \"the snprintf() bug with 64-bit Solaris is a\nSun libc problem\", for example.\n\nPerhaps we can include a test for this in configure? (i.e. if\n--with-perl is specified, try compiling a simple XS file that exhibits\nthe problem; if it fails, try it with -D_GNU_SOURCE).\n\nCheers,\n\nNeil\n\n[1] Note that I'm assuming that PL/Perl is broken with 5.8.0 on\nsystems other than mine, and another person's on IRC who reported the\nproblem to begin with. Can other people confirm the problem?\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\nSummary of my perl5 (revision 5.0 version 8 subversion 0) configuration:\n Platform:\n osname=linux, osvers=2.4.19, archname=i386-linux-thread-multi\n uname='linux cyberhq 2.4.19 #1 smp sun aug 4 11:30:45 pdt 2002 i686 unknown unknown gnulinux '\n config_args='-Dusethreads -Duselargefiles -Dccflags=-DDEBIAN -Dcccdlflags=-f PIC -Darchname=i386-linux -Dprefix=/usr -Dprivlib=/usr/share/perl/5.8.0 -Darchli b=/usr/lib/perl/5.8.0 -Dvendorprefix=/usr -Dvendorlib=/usr/share/perl5 -Dvendora rch=/usr/lib/perl5 -Dsiteprefix=/usr/local -Dsitelib=/usr/local/share/perl/5.8.0 -Dsitearch=/usr/local/lib/perl/5.8.0 -Dman1dir=/usr/share/man/man1 -Dman3dir=/u \n sr/share/man/man3 -Dman1ext=1 -Dman3ext=3perl -Dpager=/usr/bin/sensible-pager -U afs -Ud_csh -Uusesfio -Uusenm -Duseshrplib -Dlibperl=libperl.so.5.8.0 -Dd_dosuid -des'\n hint=recommended, useposix=true, d_sigaction=define\n usethreads=define use5005threads=undef useithreads=define usemultiplicity=de fine\n useperlio=define d_sfio=undef uselargefiles=define usesocks=undef\n use64bitint=undef use64bitall=undef uselongdouble=undef\n usemymalloc=n, bincompat5005=undef\n Compiler:\n cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -DDEBIAN 
-fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',\n optimize='-O3',\n cppflags='-D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fno-strict-aliasing -I/usr/lo cal/include'\n ccversion='', gccversion='2.95.4 20011002 (Debian prerelease)', gccosandvers =''\n intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234\n d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12\n ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize =8\n alignbytes=4, prototype=define\n Linker and Libraries:\n ld='cc', ldflags =' -L/usr/local/lib'\n libpth=/usr/local/lib /lib /usr/lib\n libs=-lgdbm -ldb -ldl -lm -lpthread -lc -lcrypt\n perllibs=-ldl -lm -lpthread -lc -lcrypt\n libc=/lib/libc-2.2.5.so, so=so, useshrplib=true, libperl=libperl.so.5.8.0\n gnulibc_version='2.2.5'\n Dynamic Linking:\n dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'\n cccdlflags='-fPIC', lddlflags='-shared -L/usr/local/lib'\n\n\nCharacteristics of this binary (from libperl): \n Compile-time options: MULTIPLICITY USE_ITHREADS USE_LARGE_FILES PERL_IMPLICIT_ CONTEXT\n Built under linux\n Compiled at Sep 14 2002 17:36:21\n @INC:\n /etc/perl\n /usr/local/lib/perl/5.8.0\n /usr/local/share/perl/5.8.0\n /usr/lib/perl5\n /usr/share/perl5\n /usr/lib/perl/5.8.0\n /usr/share/perl/5.8.0\n /usr/local/lib/site_perl\n\n", "msg_date": "17 Oct 2002 01:10:11 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "On Thu, 2002-10-17 at 00:10, Neil Conway wrote:\n> \n> \n> Well, I'm not happy with defining _GNU_SOURCE, but I don't agree that\n> just saying \"it's a Perl problem\" is a good answer. That may well be\n> the case, but it doesn't change the fact that a lot of people are\n> running 5.8.0, and will probably continue to do so during the 7.3\n> lifecycle[1]. 
We work around braindamage on other systems -- strictely\n> speaking, we could say \"the snprintf() bug with 64-bit Solaris is a\n> Sun libc problem\", for example.\n> \nIf you want to try it on my UnixWare 7.1.3 box, I can create an account\nfor you. It has PERL 5.8.0 and a NON-gcc compiler. PL/Perl from 7.2.2\nworks fine with it. \n\nI don't have the time, but can give anyone that wants it an account. \n\n(Peter Eisentraut already has such, and I'll create one for any that\nwant one).\n\nThe box is a 1.7Ghz P-4, and is on a 768K/768K DSL line.\n\nLER\n> Perhaps we can include a test for this in configure? (i.e. if\n> --with-perl is specified, try compiling a simple XS file that exhibits\n> the problem; if it fails, try it with -D_GNU_SOURCE).\n> \n> Cheers,\n> \n> Neil\n> \n> [1] Note that I'm assuming that PL/Perl is broken with 5.8.0 on\n> systems other than mine, and another person's on IRC who reported the\n> problem to begin with. Can other people confirm the problem?\n> \n> -- \n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n> \n> Summary of my perl5 (revision 5.0 version 8 subversion 0) configuration:\n> Platform:\n> osname=linux, osvers=2.4.19, archname=i386-linux-thread-multi\n> uname='linux cyberhq 2.4.19 #1 smp sun aug 4 11:30:45 pdt 2002 i686 unknown unknown gnulinux '\n> config_args='-Dusethreads -Duselargefiles -Dccflags=-DDEBIAN -Dcccdlflags=-f PIC -Darchname=i386-linux -Dprefix=/usr -Dprivlib=/usr/share/perl/5.8.0 -Darchli b=/usr/lib/perl/5.8.0 -Dvendorprefix=/usr -Dvendorlib=/usr/share/perl5 -Dvendora rch=/usr/lib/perl5 -Dsiteprefix=/usr/local -Dsitelib=/usr/local/share/perl/5.8.0 -Dsitearch=/usr/local/lib/perl/5.8.0 -Dman1dir=/usr/share/man/man1 -Dman3dir=/u !\n \n> sr/share/man/man3 -Dman1ext=1 -Dman3ext=3perl -Dpager=/usr/bin/sensible-pager -U afs -Ud_csh -Uusesfio -Uusenm -Duseshrplib -Dlibperl=libperl.so.5.8.0 -Dd_dosuid -des'\n> hint=recommended, useposix=true, d_sigaction=define\n> usethreads=define use5005threads=undef 
useithreads=define usemultiplicity=de fine\n> useperlio=define d_sfio=undef uselargefiles=define usesocks=undef\n> use64bitint=undef use64bitall=undef uselongdouble=undef\n> usemymalloc=n, bincompat5005=undef\n> Compiler:\n> cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',\n> optimize='-O3',\n> cppflags='-D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fno-strict-aliasing -I/usr/lo cal/include'\n> ccversion='', gccversion='2.95.4 20011002 (Debian prerelease)', gccosandvers =''\n> intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234\n> d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12\n> ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize =8\n> alignbytes=4, prototype=define\n> Linker and Libraries:\n> ld='cc', ldflags =' -L/usr/local/lib'\n> libpth=/usr/local/lib /lib /usr/lib\n> libs=-lgdbm -ldb -ldl -lm -lpthread -lc -lcrypt\n> perllibs=-ldl -lm -lpthread -lc -lcrypt\n> libc=/lib/libc-2.2.5.so, so=so, useshrplib=true, libperl=libperl.so.5.8.0\n> gnulibc_version='2.2.5'\n> Dynamic Linking:\n> dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'\n> cccdlflags='-fPIC', lddlflags='-shared -L/usr/local/lib'\n> \n> \n> Characteristics of this binary (from libperl): \n> Compile-time options: MULTIPLICITY USE_ITHREADS USE_LARGE_FILES PERL_IMPLICIT_ CONTEXT\n> Built under linux\n> Compiled at Sep 14 2002 17:36:21\n> @INC:\n> /etc/perl\n> /usr/local/lib/perl/5.8.0\n> /usr/local/share/perl/5.8.0\n> /usr/lib/perl5\n> /usr/share/perl5\n> /usr/lib/perl/5.8.0\n> /usr/share/perl/5.8.0\n> /usr/local/lib/site_perl\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 
75044-6749\n\n", "msg_date": "17 Oct 2002 00:17:04 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Well, I'm not happy with defining _GNU_SOURCE, but I don't agree that\n> just saying \"it's a Perl problem\" is a good answer. That may well be\n> the case, but it doesn't change the fact that a lot of people are\n> running 5.8.0, and will probably continue to do so during the 7.3\n> lifecycle[1]. We work around braindamage on other systems -- strictely\n> speaking, we could say \"the snprintf() bug with 64-bit Solaris is a\n> Sun libc problem\", for example.\n\nWell, I'm not opposed to a workaround in principle; I'm just unconvinced\nthat this is the right solution. Do we understand what is broken and\nwhy -D_GNU_SOURCE fixes it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 08:45:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8 " }, { "msg_contents": "Neil Conway writes:\n\n> gcc -O2 -g -fpic -I. -I/usr/lib/perl/5.8.0/CORE -I../../../src/include -c -o plperl.o plperl.c -MMD\n> In file included from /usr/lib/perl/5.8.0/CORE/op.h:480,\n> from /usr/lib/perl/5.8.0/CORE/perl.h:2209,\n> from plperl.c:61:\n> /usr/lib/perl/5.8.0/CORE/reentr.h:602: field `_crypt_struct' has incomplete type\n> /usr/lib/perl/5.8.0/CORE/reentr.h:747: confused by earlier errors, bailing out\n> make[3]: *** [plperl.o] Error 1\n\nCan you post some snippets from the relevant code sections? Following one\nof the links that were posted I gathered that this is related to\ncrypt_r(), whose prototype is not exposed on my system unless you use\n_GNU_SOURCE. 
But I don't see any _crypt_struct here.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 17 Oct 2002 21:18:05 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Can you post some snippets from the relevant code sections? Following one\n> of the links that were posted I gathered that this is related to\n> crypt_r(), whose prototype is not exposed on my system unless you use\n> _GNU_SOURCE. But I don't see any _crypt_struct here.\n\nYeah, the seems to be the culprit. Line 480 of reentr.h is part of the\ndefinition of a monster struct; the relevent field is:\n\n#ifdef HAS_CRYPT_R\n#if CRYPT_R_PROTO == REENTRANT_PROTO_B_CCD\n\tCRYPTD* _crypt_data;\n#else\n\tstruct crypt_data _crypt_struct;\n#endif\n#endif /* HAS_CRYPT_R */\n\nThe \"crypt_data\" struct is defined in crypt.h, but only if _GNU_SOURCE\nis defined -- just like crypt_r().\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "17 Oct 2002 16:46:36 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "Neil Conway writes:\n\n> #ifdef HAS_CRYPT_R\n> #if CRYPT_R_PROTO == REENTRANT_PROTO_B_CCD\n> \tCRYPTD* _crypt_data;\n> #else\n> \tstruct crypt_data _crypt_struct;\n> #endif\n> #endif /* HAS_CRYPT_R */\n>\n> The \"crypt_data\" struct is defined in crypt.h, but only if _GNU_SOURCE\n> is defined -- just like crypt_r().\n\nThe HAS_CRYPT_R is true because the function is available even without the\nprototype, but the struct is not. 
A plain bug in Perl's configury\nmechanism.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 19 Oct 2002 00:06:32 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The HAS_CRYPT_R is true because the function is available even without the\n> prototype, but the struct is not. A plain bug in Perl's configury\n> mechanism.\n\nYeah, that's true. The question is whether it's worth working around\nthe bug. IMHO, yes -- but what do other people think?\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "18 Oct 2002 18:36:21 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "Neil Conway wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > The HAS_CRYPT_R is true because the function is available even without the\n> > prototype, but the struct is not. A plain bug in Perl's configury\n> > mechanism.\n> \n> Yeah, that's true. The question is whether it's worth working around\n> the bug. IMHO, yes -- but what do other people think?\n\nWith no motion on this, I assume we are going to call this a perl bug\nand not work around it for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 5 Nov 2002 17:24:46 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "On Dienstag, November 5, 2002, at 11:24 Uhr, Bruce Momjian wrote:\n\n> Neil Conway wrote:\n>> Peter Eisentraut <peter_e@gmx.net> writes:\n>>> The HAS_CRYPT_R is true because the function is available even \n>>> without the\n>>> prototype, but the struct is not. 
A plain bug in Perl's configury\n>>> mechanism.\n>>\n>> Yeah, that's true. The question is whether it's worth working around\n>> the bug. IMHO, yes -- but what do other people think?\n>\n> With no motion on this, I assume we are going to call this a perl bug\n> and not work around it for 7.3.\n\nI've only just subscribed to this list, so I don't know all of the \ndiscussion\n(given time, I'll look it up in the archives). But if you have found a \nperl\nbug, particularly one of configuration, I'm sure the perl developers \nwould\nbe grateful if you could report it to the perl5-porters list\n(http://lists.perl.org/showlist.cgi?name=perl5-porters).\n\nOr I could report it on your behalf, if you don't want to subscribe and\nunsubscribe and all that.\n\nThank you,\n\nMarcel\n\n", "msg_date": "Wed, 6 Nov 2002 00:29:35 +0100", "msg_from": "=?ISO-8859-1?Q?Marcel_Gr=FCnauer?= <marcel@uptime.at>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> With no motion on this, I assume we are going to call this a perl bug\n> and not work around it for 7.3.\n\nErm, no -- Reinhard Max already sent a fix for this to -patches, Tom\nhad an objection to it, and then Reinhard posted another version\n(which presumably satisfies Tom's objections). It should probably be\nin RC1...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "05 Nov 2002 19:13:30 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "Marcel Gr�nauer <marcel@uptime.at> writes:\n> I've only just subscribed to this list, so I don't know all of the\n> discussion (given time, I'll look it up in the archives). 
But if you\n> have found a perl bug, particularly one of configuration, I'm sure\n> the perl developers would be grateful if you could report it to the\n> perl5-porters list\n> (http://lists.perl.org/showlist.cgi?name=perl5-porters).\n\nYes, it has already been reported to p5p. The first p5p thread on the\ntopic didn't contain any mention of a fix for the problem being\ncommitted to the stable branch, but the Perl maintainers are aware of\nit, at any rate, and may have fixed it in the interim.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "05 Nov 2002 19:18:20 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: PL/Perl and Perl 5.8" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Erm, no -- Reinhard Max already sent a fix for this to -patches, Tom\n> had an objection to it, and then Reinhard posted another version\n> (which presumably satisfies Tom's objections).\n\nPeter didn't like it ... which is about what I'd expected, but I was\nkeeping quiet till he weighed in ...\n\nI'm guessing that what we need to do is -D_GNU_SOURCE somewhere in the\nMakefiles; the $64 question is exactly where (can we restrict it to\nsrc/pl/plperl?) and what conditions should cause the Makefiles to add\nit? Do we want a configure test?\n\nFWIW, I see no such failure on HPUX with Perl 5.8.0, but that seems to\nbe because Perl's HAS_CRYPT_R symbol doesn't get set here. Which is odd\nin itself, because crypt_r() is definitely available on this platform.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Nov 2002 19:27:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8 " }, { "msg_contents": "Tom Lane writes:\n\n> I'm guessing that what we need to do is -D_GNU_SOURCE somewhere in the\n> Makefiles; the $64 question is exactly where (can we restrict it to\n> src/pl/plperl?) 
and what conditions should cause the Makefiles to add\n> it? Do we want a configure test?\n\nThe simplest choice would be to just define it unconditionally in linux.h.\nSince it is not supposed to change any interfaces, just add new ones, this\nshould be safe. If you don't believe that, then we really need to test\nand define _GNU_SOURCE early in configure so the following tests can take\nit into account. In either case, the command line is not the place for\nit.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 6 Nov 2002 23:07:13 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8 " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I'm guessing that what we need to do is -D_GNU_SOURCE somewhere in the\n>> Makefiles; the $64 question is exactly where\n\n> The simplest choice would be to just define it unconditionally in linux.h.\n> Since it is not supposed to change any interfaces, just add new ones, this\n> should be safe.\n\nThat works for me. The main issue in my mind is not to define it on\nplatforms that aren't glibc-based, but linux.h should be safe.\n\nAny objections out there?\n\nI see another potential problem BTW: pg_config.h has\n\n#ifndef HAVE_INET_ATON\n# include <sys/types.h>\n# include <netinet/in.h>\n# include <arpa/inet.h>\nextern int inet_aton(const char *cp, struct in_addr * addr);\n#endif\n\nwhich it does *before* pulling in the port-specific config file.\nWhile this won't break Linux since it has inet_aton(), I could see\nproblems arising on platforms without. 
I am inclined to move all\nthe substitute \"extern\" declarations in pg_config.h to the bottom\nof the file.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 07 Nov 2002 13:21:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8 " }, { "msg_contents": "On 10/18/02, Neil Conway <neilc@samurai.com> wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > The HAS_CRYPT_R is true because the function is available even without the\n> > prototype, but the struct is not. A plain bug in Perl's configury\n> > mechanism.\n> \n> Yeah, that's true. The question is whether it's worth working around\n> the bug. IMHO, yes -- but what do other people think?\n\nthis message spells out the problem & solution nicely:\n \nhttp://mailman.cs.uchicago.edu/pipermail/swig-dev/2002-September/\n008056.html\n\njr\n\n\n> \n> -- \n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n------------------------------------------------------------\nJoel W. Reed 412-257-3881\n--------All the simple programs have been written.----------", "msg_date": "Thu, 7 Nov 2002 14:42:29 -0500", "msg_from": "\"joel w. reed\" <jreed@ddiworld.com>", "msg_from_op": false, "msg_subject": "Re: PL/Perl and Perl 5.8" } ]
[ { "msg_contents": "Hello\n\nI wonder if there is a possibility to not\nonly loop through records with a FOR loop,\nbut also loop through the record itself.\nMeaning something like this:\n\nFOR record IN SELECT * FROM table LOOP\n\n FOR each field IN record LOOP\n\n return all field values\n\n END LOOP\n\nEND LOOP\n\nI fear I have to read the system tables to\nget all field names and then read one field\nafter another from the record with the names\nfrom the system tables?\n\nRegards\nelf\n\n", "msg_date": "Thu, 17 Oct 2002 10:56:12 +0200", "msg_from": "Enrique Filberto <hopeu@gmx.net>", "msg_from_op": true, "msg_subject": "Looping through fields" } ]
[ { "msg_contents": "On Thu, 2002-10-17 at 23:34, Teodor Sigaev wrote:\n> wow=# select 5.3::float;\n> ERROR: Bad float8 input format '5.3'\n\nCould it be something with locales ?\n\nTry:\n\nselect 5,3::float;\n\n-------------\nHannu\n\n\n\n", "msg_date": "17 Oct 2002 22:47:01 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Current CVS has strange parser for float type" }, { "msg_contents": "wow=# select 5.3::float;\nERROR: Bad float8 input format '5.3'\nwow=# select 5.3::float8;\nERROR: Bad float8 input format '5.3'\nwow=# select 5.3::float4;\nERROR: Bad float4 input format '5.3'\nwow=# select 5.3e-1::float4;\nERROR: Bad float4 input format '0.53'\nwow=# select -5.3e-1::float4;\nERROR: Bad float4 input format '0.53'\nwow=# select -5.3::float4;\nERROR: Bad float4 input format '5.3'\nwow=# select 5.32222e2::float4;\nERROR: Bad float4 input format '532.222'\nwow=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.3b2 on i386-unknown-freebsd4.6, compiled by GCC 2.95.3\n(1 row)\n\nVery strange or I missed something?\nThis 'feature' appears only on FreeBSD, Linux works fine.\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Thu, 17 Oct 2002 22:34:50 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Current CVS has strange parser for float type " }, { "msg_contents": "\nWorks here:\n\t\n\ttest=> select 5.3::float;\n\t float8 \n\t--------\n\t 5.3\n\t(1 row)\n\n---------------------------------------------------------------------------\n\nTeodor Sigaev wrote:\n> wow=# select 5.3::float;\n> ERROR: Bad float8 input format '5.3'\n> wow=# select 5.3::float8;\n> ERROR: Bad float8 input format '5.3'\n> wow=# select 5.3::float4;\n> ERROR: Bad float4 input format '5.3'\n> wow=# select 5.3e-1::float4;\n> ERROR: Bad float4 input format '0.53'\n> wow=# select -5.3e-1::float4;\n> ERROR: Bad float4 input format '0.53'\n> 
wow=# select -5.3::float4;\n> ERROR: Bad float4 input format '5.3'\n> wow=# select 5.32222e2::float4;\n> ERROR: Bad float4 input format '532.222'\n> wow=# select version();\n> version\n> ---------------------------------------------------------------------\n> PostgreSQL 7.3b2 on i386-unknown-freebsd4.6, compiled by GCC 2.95.3\n> (1 row)\n> \n> Very strange or I missed something?\n> This 'feature' appears only on FreeBSD, Linux works fine.\n> \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 17 Oct 2002 15:22:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Current CVS has strange parser for float type" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Thu, 2002-10-17 at 23:34, Teodor Sigaev wrote:\n>> wow=# select 5.3::float;\n>> ERROR: Bad float8 input format '5.3'\n> Could it be something with locales ?\n\nOooh, bingo! On HPUX:\n\nregression=# select 5.3::float;\n float8\n--------\n 5.3\n(1 row)\n\nregression=# set lc_numeric = 'de_DE.iso88591';\nSET\nregression=# select 5.3::float;\nERROR: Bad float8 input format '5.3'\n\nI think this is a consequence of the changes made a little while back\n(by Peter IIRC?) in locale handling. It used to be that we deliberately\ndid *not* allow any LC_ setting except LC_MESSAGES to actually take\neffect globally in the backend, and this sort of problem is exactly\nwhy. 
I think we need to revert some aspects of that change.\n\nBruce, this is a \"must fix\" open item ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 20:59:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Current CVS has strange parser for float type " }, { "msg_contents": "Tom Lane wrote:\n> I think this is a consequence of the changes made a little while back\n> (by Peter IIRC?) in locale handling. It used to be that we deliberately\n> did *not* allow any LC_ setting except LC_MESSAGES to actually take\n> effect globally in the backend, and this sort of problem is exactly\n> why. I think we need to revert some aspects of that change.\n> \n> Bruce, this is a \"must fix\" open item ...\n\nAdded.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://momjian.postgresql.org/pub/postgresql/open_items.\n\nRequired Changes\n-------------------\nSchema handling - ready? interfaces? 
client apps?\nDrop column handling - ready for all clients, apps?\nGet bison upgrade on postgresql.org for ecpg only (Marc)\nFix vacuum btree bug (Tom)\nFix client apps for autocommit = off\nFix pg_dump to handle 64-bit off_t offsets for custom format (Philip)\nFix locale handling of floats\n\nOptional Changes\n----------------\nAdd schema dump option to pg_dump\nAdd/remove GRANT EXECUTE to all /contrib functions?\nMissing casts for bit operations\n\\copy doesn't handle column names\nCOPY doesn't handle schemas\nCOPY quotes all table names\n\n\nDocumentation Changes\n---------------------\nMove documentation to gborg for moved projects", "msg_date": "Thu, 17 Oct 2002 21:08:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Current CVS has strange parser for float type" }, { "msg_contents": "wow=# select 5,3::float;\n ?column? | float8\n----------+--------\n 5 | 3\n(1 row)\n\n:)\n\nHannu Krosing wrote:\n> On Thu, 2002-10-17 at 23:34, Teodor Sigaev wrote:\n> \n>>wow=# select 5.3::float;\n>>ERROR: Bad float8 input format '5.3'\n> \n> \n> Could it be something with locales ?\n> \n> Try:\n> \n> select 5,3::float;\n> \n> -------------\n> Hannu\n> \n> \n> \n> \n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Fri, 18 Oct 2002 11:31:41 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: Current CVS has strange parser for float type" }, { "msg_contents": "wow=# show lc_numeric;\n lc_numeric\n--------------\n ru_RU.KOI8-R\n(1 row)\n\nwow=# select 5.3::float;\nERROR: Bad float8 input format '5.3'\nwow=# set lc_numeric = 'C';\nSET\nwow=# select 5.3::float;\n float8\n--------\n 5.3\n(1 row)\n\nIt's locale.\n\nTom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> \n>>On Thu, 2002-10-17 at 23:34, Teodor Sigaev wrote:\n>>\n>>>wow=# select 5.3::float;\n>>>ERROR: Bad float8 input format '5.3'\n>>\n>>Could it be something with locales ?\n> \n> \n> Oooh, bingo! 
On HPUX:\n> \n> regression=# select 5.3::float;\n> float8\n> --------\n> 5.3\n> (1 row)\n> \n> regression=# set lc_numeric = 'de_DE.iso88591';\n> SET\n> regression=# select 5.3::float;\n> ERROR: Bad float8 input format '5.3'\n> \n> I think this is a consequence of the changes made a little while back\n> (by Peter IIRC?) in locale handling. It used to be that we deliberately\n> did *not* allow any LC_ setting except LC_MESSAGES to actually take\n> effect globally in the backend, and this sort of problem is exactly\n> why. I think we need to revert some aspects of that change.\n> \n> Bruce, this is a \"must fix\" open item ...\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": "Fri, 18 Oct 2002 11:34:36 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: Current CVS has strange parser for float type" }, { "msg_contents": "Teodor Sigaev writes:\n\n> wow=# select 5.3::float;\n> ERROR: Bad float8 input format '5.3'\n\nDoes it accept '5,4'::float? Try running initdb with --locale=C.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 18 Oct 2002 18:18:14 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Current CVS has strange parser for float type " }, { "msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> It's locale.\n\nYup. I've applied a fix in pg_locale.c. Turns out the code was trying\nto do the right thing, but failed because setlocale() returns pointers\nto modifiable static variables :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Oct 2002 16:55:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Current CVS has strange parser for float type " }, { "msg_contents": "\n\nPeter Eisentraut wrote:\n> Teodor Sigaev writes:\n> \n> \n>>wow=# select 5.3::float;\n>>ERROR: Bad float8 input format '5.3'\n>>\n> \n> Does it accept '5,4'::float? 
\nYes, it accepted '5,4'::float format.\n\n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n", "msg_date": "Sat, 19 Oct 2002 11:51:33 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: Current CVS has strange parser for float type" } ]
[ { "msg_contents": "I am cleaning up /contrib by adding \"autocommit = 'on'\" and making it\nmore consistent. Should I be adding this too:\n\n\t-- Adjust this setting to control where the objects get created.\n\tSET search_path = public;\n\nand doing all object creation in one transaction, like /contrib/cube\ndoes?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 17 Oct 2002 15:52:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Cleanup of /contrib" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am cleaning up /contrib by adding \"autocommit = 'on'\" and making it\n> more consistent. Should I be adding this too:\n\n> \t-- Adjust this setting to control where the objects get created.\n> \tSET search_path = public;\n\nYes, that would be a good idea. Without that, the objects might well\nget created in the owning user's private schema; which most of the time\nwould be unhelpful. I'm not thrilled with having to edit the script\nif you do happen to want them in a non-public schema, but I have not\nthought of a better approach yet. (Anyone?)\n\n> and doing all object creation in one transaction, like /contrib/cube\n> does?\n\nThe one-transaction thing seems unnecessary to me, but if you like it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 21:06:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cleanup of /contrib " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am cleaning up /contrib by adding \"autocommit = 'on'\" and making it\n> > more consistent. 
Should I be adding this too:\n> \n> > \t-- Adjust this setting to control where the objects get created.\n> > \tSET search_path = public;\n> \n> Yes, that would be a good idea. Without that, the objects might well\n> get created in the owning user's private schema; which most of the time\n> would be unhelpful. I'm not thrilled with having to edit the script\n> if you do happen to want them in a non-public schema, but I have not\n> thought of a better approach yet. (Anyone?)\n> \n> > and doing all object creation in one transaction, like /contrib/cube\n> > does?\n> \n> The one-transaction thing seems unnecessary to me, but if you like it...\n\nSome have it, some don't. I will make it consistent, at least.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 17 Oct 2002 21:07:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleanup of /contrib" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I am cleaning up /contrib by adding \"autocommit = 'on'\"\n> \n> Is everyone in the world now required to add \"autocommit = on\" in all\n> scripts, interfaces, programs, applications that are not strictly personal\n> use? Is there no better solution?\n\nIf there is, I would love to hear it. If you write a script that\ncreates/modifies objects, you either have to use \"autocommit = on\" or\nuse transactions.\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 18 Oct 2002 18:01:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleanup of /contrib" }, { "msg_contents": "Bruce Momjian writes:\n\n> I am cleaning up /contrib by adding \"autocommit = 'on'\"\n\nIs everyone in the world now required to add \"autocommit = on\" in all\nscripts, interfaces, programs, applications that are not strictly personal\nuse? Is there no better solution?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n", "msg_date": "Sat, 19 Oct 2002 00:05:56 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Cleanup of /contrib" } ]