[
{
"msg_contents": "\nTom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Uh - not much time to spend if the statistics should at least\n> > be half accurate. And it would become worse in SMP systems.\n> > So that was a nifty idea, but I think it'd cause much more\n> > statistic losses than I assumed at first.\n>\n> > Back to drawing board. Maybe a SYS-V message queue can serve?\n>\n> That would be the same as a pipe: backends would block if the collector\n> stopped accepting data. I do like the \"auto discard\" aspect of this\n> UDP-socket approach.\n\n Does a pipe guarantee that a buffer, written with one atomic\n write(2), never can get intermixed with other data on the\n reader's end? I know that you know what I mean, but for the\n broader audience: Let's define a message to the collector to\n be 4byte-len,len-bytes. Now hundreds of backends hammer\n messages into the (shared) writing end of the pipe, all with\n different sizes. Is it GUARANTEED that a\n read(4bytes),read(nbytes) sequence will always return one\n complete message and never intermixed parts of different\n write(2)s?\n\n With message queues, this is guaranteed. Also, message queues\n would make it easy to query the collected statistics (see\n below).\n\n> I think Philip had the right idea: each backend should send totals,\n> not deltas, in its messages. Then, it doesn't matter (much) if the\n> collector loses some messages --- that just means that sometimes it\n> has a slightly out-of-date idea about how much work some backends have\n> done. It should be easy to design the software so that that just makes\n> a small, transient error in the currently displayed statistics.\n\n If we use two message queues (IPC_PRIVATE is enough here),\n one into collector and one into backend direction, this'd be\n an easy way to collect and query statistics.\n\n The backends send delta stats messages to the collector on\n one queue. 
Message queues block, by default, but the backend\n could use IPC_NOWAIT and just go on and collect up, as long\n as it finally will use a blocking call before exiting. We'll\n lose statistics for backends that go down in flames\n (coredump), but who cares for statistics then?\n\n To query statistics, we have a set of new builtin functions.\n All functions share a global statistics snapshot in the\n backend. If on function call the snapshot doesn't exist or\n was generated by another XACT/commandcounter, the backend\n sends a statistics request for his database ID to the\n collector and waits for the messages to arrive on the second\n message queue. It can pick up the messages meant for him via\n message type, which is equal to his backend number +1, because\n the collector will send 'em as such. For table access stats\n for example, the snapshot will have slots identified by the\n table's OID, so a function pg_get_tables_seqscan_count(oid)\n should be easy to implement. And setting up views that\n present access stats in readable format is a no-brainer.\n\n Now we have communication only between the backends and the\n collector. And we're certain that only someone able to\n SELECT from a system view will ever see this information.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 16 Mar 2001 14:40:24 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Does a pipe guarantee that a buffer, written with one atomic\n> write(2), never can get intermixed with other data on the\n> readers end?\n\nYes. The HPUX man page for write(2) sez:\n\n o Write requests of {PIPE_BUF} bytes or less will not be\n interleaved with data from other processes doing writes on the\n same pipe. Writes of greater than {PIPE_BUF} bytes may have\n data interleaved, on arbitrary boundaries, with writes by\n other processes, whether or not the O_NONBLOCK flag of the\n file status flags is set.\n\nStevens' _UNIX Network Programming_ (1990) states this is true for all\npipes (nameless or named) on all flavors of Unix, and furthermore states\nthat PIPE_BUF is at least 4K on all systems. I don't have any relevant\nPosix standards to look at, but I'm not worried about assuming this to\nbe true.\n\n> With message queues, this is guaranteed. Also, message queues\n> would make it easy to query the collected statistics (see\n> below).\n\nI will STRONGLY object to any proposal that we use message queues.\nWe've already had enough problems with the ridiculously low kernel\nlimits that are commonly imposed on shmem and SysV semaphores.\nWe don't need to buy into that silliness yet again with message queues.\nI don't believe they gain us anything over pipes anyway.\n\nThe real problem with either pipes or message queues is that backends\nwill block if the collector stops collecting data. I don't think we\nwant that. I suppose we could have the backends write a pipe with\nO_NONBLOCK and ignore failure, however:\n\n o If the O_NONBLOCK flag is set, write() requests will be\n handled differently, in the following ways:\n\n - The write() function will not block the process.\n\n - A write request for {PIPE_BUF} or fewer bytes will have\n the following effect: If there is sufficient space\n available in the pipe, write() will transfer all the data\n and return the number of bytes requested. 
Otherwise,\n write() will transfer no data and return -1 with errno set\n to EAGAIN.\n\nSince we already ignore SIGPIPE, we don't need to worry about losing the\ncollector entirely.\n\nNow this would put a pretty tight time constraint on the collector:\nfall more than 4K behind, you start losing data. I am not sure if\na UDP socket would provide more buffering or not; anyone know?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 15:37:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler "
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Does a pipe guarantee that a buffer, written with one atomic\n> > write(2), never can get intermixed with other data on the\n> > readers end?\n>\n> Yes. The HPUX man page for write(2) sez:\n>\n> o Write requests of {PIPE_BUF} bytes or less will not be\n> interleaved with data from other processes doing writes on the\n> same pipe. Writes of greater than {PIPE_BUF} bytes may have\n> data interleaved, on arbitrary boundaries, with writes by\n> other processes, whether or not the O_NONBLOCK flag of the\n> file status flags is set.\n>\n> Stevens' _UNIX Network Programming_ (1990) states this is true for all\n> pipes (nameless or named) on all flavors of Unix, and furthermore states\n> that PIPE_BUF is at least 4K on all systems. I don't have any relevant\n> Posix standards to look at, but I'm not worried about assuming this to\n> be true.\n\n That's good news - and maybe a Good Assumption (TM).\n\n> > With message queues, this is guaranteed. Also, message queues\n> > would make it easy to query the collected statistics (see\n> > below).\n>\n> I will STRONGLY object to any proposal that we use message queues.\n> We've already had enough problems with the ridiculously low kernel\n> limits that are commonly imposed on shmem and SysV semaphores.\n> We don't need to buy into that silliness yet again with message queues.\n> I don't believe they gain us anything over pipes anyway.\n\n OK.\n\n> The real problem with either pipes or message queues is that backends\n> will block if the collector stops collecting data. I don't think we\n> want that. 
I suppose we could have the backends write a pipe with\n> O_NONBLOCK and ignore failure, however:\n>\n> o If the O_NONBLOCK flag is set, write() requests will be\n> handled differently, in the following ways:\n>\n> - The write() function will not block the process.\n>\n> - A write request for {PIPE_BUF} or fewer bytes will have\n> the following effect: If there is sufficient space\n> available in the pipe, write() will transfer all the data\n> and return the number of bytes requested. Otherwise,\n> write() will transfer no data and return -1 with errno set\n> to EAGAIN.\n>\n> Since we already ignore SIGPIPE, we don't need to worry about losing the\n> collector entirely.\n\n That's not what the manpage said. It said that in the case\n you're inside PIPE_BUF size and using O_NONBLOCK, you either\n send complete messages or nothing, getting an EAGAIN then.\n\n So we could do the same here and write to the pipe. In the\n case we cannot, just count up and try again next year (or\n so).\n\n>\n> Now this would put a pretty tight time constraint on the collector:\n> fall more than 4K behind, you start losing data. I am not sure if\n> a UDP socket would provide more buffering or not; anyone know?\n\n Again, this ain't what the manpage said. That there's\n sufficient space available in the pipe, in combination with\n PIPE_BUF being at least 4K, doesn't necessarily mean that\n the pipe's buffer space is only 4K.\n\n Well, what I'm missing is the ability to filter out\n statistics reports on the backend side via msgrcv(2)'s msgtype\n :-(\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 16 Mar 2001 16:03:21 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "Tom Lane wrote:\n> Now this would put a pretty tight time constraint on the collector:\n> fall more than 4K behind, you start losing data. I am not sure if\n> a UDP socket would provide more buffering or not; anyone know?\n\n Looks like Linux has something around 16-32K of buffer space\n for UDP sockets. Just from eyeballing the fprintf(3) output\n of my destructively hacked postleprechaun.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 16 Mar 2001 16:18:13 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "Jan Wieck wrote:\n> Tom Lane wrote:\n> > Now this would put a pretty tight time constraint on the collector:\n> > fall more than 4K behind, you start losing data. I am not sure if\n> > a UDP socket would provide more buffering or not; anyone know?\n>\n> Looks like Linux has something around 16-32K of buffer space\n> for UDP sockets. Just from eyeballing the fprintf(3) output\n> of my destructively hacked postleprechaun.\n\n Just to get some evidence at hand - could some owners of\n different platforms compile and run the attached little C\n source please?\n\n (The program tests how much data can be stuffed into a pipe\n or a Sys-V message queue before the writer would block or get\n an EAGAIN error).\n\n My output on RedHat6.1 Linux 2.2.17 is:\n\n Pipe buffer is 4096 bytes\n Sys-V message queue buffer is 16384 bytes\n\n Seems Tom is (unfortunately) right. The pipe blocks at 4K.\n\n So a Sys-V message queue, with the ability to distribute\n messages from the collector to individual backends with\n kernel support via \"mtype\", gives four times the buffer\n space here, complexity aside. What does your system say?\n\n I really never thought that Sys-V IPC is a good way to go at\n all. I hate its incompatibility with the select(2) system\n call and all these OS/installation-dependent restrictions.\n But I'm tempted to reevaluate it \"for this case\".\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #",
"msg_date": "Fri, 16 Mar 2001 17:25:24 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> Just to get some evidence at hand - could some owners of\n> different platforms compile and run the attached little C\n> source please?\n\nHPUX 10.20:\n\nPipe buffer is 8192 bytes\nSys-V message queue buffer is 16384 bytes\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 18:00:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler "
},
{
"msg_contents": "\n> Just to get some evidence at hand - could some owners of\n> different platforms compile and run the attached little C\n> source please?\n\n$ uname -srm\nFreeBSD 4.1.1-STABLE\n$ ./jan\nPipe buffer is 16384 bytes\nSys-V message queue buffer is 2048 bytes\n\n$ uname -srm\nNetBSD 1.5 alpha\n$ ./jan\nPipe buffer is 4096 bytes\nSys-V message queue buffer is 2048 bytes\n\n$ uname -srm\nNetBSD 1.5_BETA2 i386\n$ ./jan\nPipe buffer is 4096 bytes\nSys-V message queue buffer is 2048 bytes\n\n$ uname -srm\nNetBSD 1.4.2 i386\n$ ./jan\nPipe buffer is 4096 bytes\nSys-V message queue buffer is 2048 bytes\n\n$ uname -srm\nNetBSD 1.4.1 sparc\n$ ./jan\nPipe buffer is 4096 bytes\nBad system call (core dumped)\t# no SysV IPC in running kernel\n\n$ uname -srm\nHP-UX B.11.11 9000/800\n$ ./jan\nPipe buffer is 8192 bytes\nSys-V message queue buffer is 16384 bytes\n\n$ uname -srm\nHP-UX B.11.00 9000/813\n$ ./jan\nPipe buffer is 8192 bytes\nSys-V message queue buffer is 16384 bytes\n\n$ uname -srm\nHP-UX B.10.20 9000/871\n$ ./jan\nPipe buffer is 8192 bytes\nSys-V message queue buffer is 16384 bytes\n\nHP-UX can also use STREAMS based pipes if the kernel parameter\nstreampipes is set. Using STREAMS based pipes increases the pipe\nbuffer size by a lot:\n\n# uname -srm \nHP-UX B.11.11 9000/800\n# ./jan\nPipe buffer is 131072 bytes\nSys-V message queue buffer is 16384 bytes\n\n# uname -srm\nHP-UX B.11.00 9000/800\n# ./jan\nPipe buffer is 131072 bytes\nSys-V message queue buffer is 16384 bytes\n\nRegards,\n\nGiles\n",
"msg_date": "Sat, 17 Mar 2001 11:17:51 +1100",
"msg_from": "Giles Lean <giles@nemeton.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler "
},
{
"msg_contents": "* Jan Wieck <JanWieck@Yahoo.com> [010316 16:35]:\n> Jan Wieck wrote:\n> > Tom Lane wrote:\n> > > Now this would put a pretty tight time constraint on the collector:\n> > > fall more than 4K behind, you start losing data. I am not sure if\n> > > a UDP socket would provide more buffering or not; anyone know?\n> >\n> > Looks like Linux has something around 16-32K of buffer space\n> > for UDP sockets. Just from eyeballing the fprintf(3) output\n> > of my destructively hacked postleprechaun.\n> \n> Just to get some evidence at hand - could some owners of\n> different platforms compile and run the attached little C\n> source please?\n> \n> (The program tests how much data can be stuffed into a pipe\n> or a Sys-V message queue before the writer would block or get\n> an EAGAIN error).\n> \n> My output on RedHat6.1 Linux 2.2.17 is:\n> \n> Pipe buffer is 4096 bytes\n> Sys-V message queue buffer is 16384 bytes\n> \n> Seems Tom is (unfortunately) right. The pipe blocks at 4K.\n> \n> So a Sys-V message queue, with the ability to distribute\n> messages from the collector to individual backends with\n> kernel support via \"mtype\" is four times by unestimated\n> complexity better here. What does your system say?\n> \n> I really never thought that Sys-V IPC is a good way to go at\n> all. I hate it's incompatibility to the select(2) system\n> call and all these OS/installation dependant restrictions.\n> But I'm tempted to reevaluate it \"for this case\".\n> \n> \n> Jan\n$ ./queuetest\nPipe buffer is 32768 bytes\nSys-V message queue buffer is 4096 bytes\n$ uname -a\nUnixWare lerami 5 7.1.1 i386 x86at SCO UNIX_SVR5\n$ \n\nI think some of these are configurable...\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 16 Mar 2001 20:43:27 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [010316 20:47]:\n> * Jan Wieck <JanWieck@Yahoo.com> [010316 16:35]:\n> $ ./queuetest\n> Pipe buffer is 32768 bytes\n> Sys-V message queue buffer is 4096 bytes\n> $ uname -a\n> UnixWare lerami 5 7.1.1 i386 x86at SCO UNIX_SVR5\n> $ \n> \n> I think some of these are configurable...\nThey both are. FIFOBLKSIZE and MSGMNB or some such kernel tunable.\n\nI can get more info if you need it.\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Fri, 16 Mar 2001 21:07:53 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "On Fri, Mar 16, 2001 at 05:25:24PM -0500, Jan Wieck wrote:\n> Jan Wieck wrote:\n...\n> Just to get some evidence at hand - could some owners of\n> different platforms compile and run the attached little C\n> source please?\n... \n> Seems Tom is (unfortunately) right. The pipe blocks at 4K.\n\nOn NetBSD-1.5S/i386 with just the highly conservative shmem defaults:\n\nPipe buffer is 4096 bytes\nSys-V message queue buffer is 2048 bytes\n\nCheers,\n\nPatrick\n",
"msg_date": "Sat, 17 Mar 2001 20:29:56 +0000",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> Just to get some evidence at hand - could some owners of\n> different platforms compile and run the attached little C\n> source please?\n> (The program tests how much data can be stuffed into a pipe\n> or a Sys-V message queue before the writer would block or get\n> an EAGAIN error).\n\nOne final followup on this --- I wasted a fair amount of time just\nnow trying to figure out why Perl 5.6.0 was silently hanging up\nin its self-tests (at op/taint, which seems pretty unrelated...).\n\nThe upshot: Jan's test program had left a 16k SysV message queue\nhanging about, and that queue was filling all available SysV message\nspace on my machine. Seems Perl tries to test message-queue sending,\nand it was patiently waiting for some message space to come free.\n\nIn short, the SysV message queue limits are so tiny that not only\nare you quite likely to get bollixed up if you use messages, but\nyou're likely to bollix anything else that's using message queues too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 00:28:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler "
},
{
"msg_contents": "I have a new statistics collection proposal.\n\nI suggest three shared memory areas:\n\n\tOne per backend to hold the query string and other per-backend stats\n\tOne global area to hold accumulated stats for all backends\n\tOne global circular buffer to hold per-table/object stats\n\nThe circular buffer will look like:\n\n\t(Loops) Start---------------------------End\n |\n current pointer\n\nLoops is incremented every time the pointer reaches \"end\".\n\nEach statistics record will have a length of five bytes made up of\noid(4) and action(1). By having the same length for all statistics\nrecords, we don't need to perform any locking of the buffer. A backend\nwill grab the current pointer, add five to it, and write into the\nreserved 5-byte area. If two backends write at the same time, one\noverwrites the other, but this is just statistics information, so it is\nnot a great loss.\n\nOnly shared memory gives us near-zero cost for write/read. 99% of\nbackends will not be using stats, so it has to be cheap.\n\nThe collector program can read the shared memory stats and keep hashed\nvalues of accumulated stats. It uses the \"Loops\" variable to know if it\nhas read the current information in the buffer. When it receives a\nsignal, it can dump its stats to a file in standard COPY format of\n<oid><tab><action><tab><count>. It can also reset its counters with a\nsignal.\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Mar 2001 11:09:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Only shared memory gives us near-zero cost for write/read. 99% of\n> backends will not be using stats, so it has to be cheap.\n\nNot with a circular buffer it's not cheap, because you need interlocking\non writes. Your claim that you can get away without that is simply\nfalse. You won't just get lost messages, you'll get corrupted messages.\n\n> The collector program can read the shared memory stats and keep hashed\n> values of accumulated stats. It uses the \"Loops\" variable to know if it\n> has read the current information in the buffer.\n\nAnd how does it sleep until the counter has been advanced? Seems to me\nit has to busy-wait (bad) or sleep (worse; if the minimum sleep delay\nis 10 ms then it's guaranteed to miss a lot of data under load).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 11:28:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Only shared memory gives us near-zero cost for write/read. 99% of\n> > backends will not be using stats, so it has to be cheap.\n> \n> Not with a circular buffer it's not cheap, because you need interlocking\n> on writes. Your claim that you can get away without that is simply\n> false. You won't just get lost messages, you'll get corrupted messages.\n\nHow do I get corrupt messages if they are all five bytes? If I write\nfive bytes, and another does the same, I guess the assembler could\nintersperse the writes so the oid gets to be a corrupt value. Any cheap\nway around this, perhaps by skipping/clearing the write on a collision?\n\n> \n> > The collector program can read the shared memory stats and keep hashed\n> > values of accumulated stats. It uses the \"Loops\" variable to know if it\n> > has read the current information in the buffer.\n> \n> And how does it sleep until the counter has been advanced? Seems to me\n> it has to busy-wait (bad) or sleep (worse; if the minimum sleep delay\n> is 10 ms then it's guaranteed to miss a lot of data under load).\n\nI figured it could just wake up every few seconds and check. It will\nremember the loop counter and current pointer, and read any new\ninformation. I was thinking of a 20k buffer, which could cover about 4k\nevents.\n\nShould we think about doing these writes into an OS file, and only\nenabling the writes when we know there is a collector reading them,\nperhaps using a /tmp file to activate recording? We could allocate\n1MB and be sure not to miss anything, even with a circular setup.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Mar 2001 11:50:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "Bruce Momjian wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Only shared memory gives us near-zero cost for write/read. 99% of\n> > > backends will not be using stats, so it has to be cheap.\n> >\n> > Not with a circular buffer it's not cheap, because you need interlocking\n> > on writes. Your claim that you can get away without that is simply\n> > false. You won't just get lost messages, you'll get corrupted messages.\n>\n> How do I get corrupt messages if they are all five bytes? If I write\n> five bytes, and another does the same, I guess the assembler could\n> intersperse the writes so the oid gets to be a corrupt value. Any cheap\n> way around this, perhaps by skiping/clearing the write on a collision?\n>\n> >\n> > > The collector program can read the shared memory stats and keep hashed\n> > > values of accumulated stats. It uses the \"Loops\" variable to know if it\n> > > has read the current information in the buffer.\n> >\n> > And how does it sleep until the counter has been advanced? Seems to me\n> > it has to busy-wait (bad) or sleep (worse; if the minimum sleep delay\n> > is 10 ms then it's guaranteed to miss a lot of data under load).\n>\n> I figured it could just wake up every few seconds and check. It will\n> remember the loop counter and current pointer, and read any new\n> information. I was thinking of a 20k buffer, which could cover about 4k\n> events.\n\n Here I wonder what your EVENT is. 
With an Oid as identifier\n and a 1 byte (even if it'd be anoter 32-bit value), how many\n messages do you want to generate to get these statistics:\n\n - Number of sequential scans done per table.\n - Number of tuples returned via sequential scans per table.\n - Number of buffer cache lookups done through sequential\n scans per table.\n - Number of buffer cache hits for sequential scans per\n table.\n - Number of tuples inserted per table.\n - Number of tuples updated per table.\n - Number of tuples deleted per table.\n - Number of index scans done per index.\n - Number of index tuples returned per index.\n - Number of buffer cache lookups done due to scans per\n index.\n - Number of buffer cache hits per index.\n - Number of valid heap tuples returned via index scan per\n index.\n - Number of buffer cache lookups done for heap fetches via\n index scan per index.\n - Number of buffer cache hits for heap fetches via index\n scan per index.\n - Number of buffer cache lookups not accountable for any of\n the above.\n - Number of buffer cache hits not accountable for any of\n the above.\n\n What I see is that there's a difference in what we two want\n to see in the statistics. You're talking about looking at the\n actual querystring and such. That's information useful for\n someone actually looking at a server, to see what a\n particular backend is doing. On my notebook a parallel\n regression test (containing >4,000 queries) passes by under\n 1:30, that's more than 40 queries per second. So that doesn't\n tell me much.\n\n What I'm after is to collect the above data over a week or so\n and then generate a report to identify the hot spots of the\n schema. 
Which tables/indices cause the most disk I/O, what's\n the average percentage of tuples returned in scans (not from\n the query, I mean from the single scan inside of the joins).\n That's the information I need to know where to look for\n possibly better qualifications, useless indices that aren't\n worth to maintain and the like.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 19 Mar 2001 13:04:37 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "> > I figured it could just wake up every few seconds and check. It will\n> > remember the loop counter and current pointer, and read any new\n> > information. I was thinking of a 20k buffer, which could cover about 4k\n> > events.\n> \n> Here I wonder what your EVENT is. With an Oid as identifier\n> and a 1 byte (even if it'd be anoter 32-bit value), how many\n> messages do you want to generate to get these statistics:\n> \n> - Number of sequential scans done per table.\n> - Number of tuples returned via sequential scans per table.\n> - Number of buffer cache lookups done through sequential\n> scans per table.\n> - Number of buffer cache hits for sequential scans per\n> table.\n> - Number of tuples inserted per table.\n> - Number of tuples updated per table.\n> - Number of tuples deleted per table.\n> - Number of index scans done per index.\n> - Number of index tuples returned per index.\n> - Number of buffer cache lookups done due to scans per\n> index.\n> - Number of buffer cache hits per index.\n> - Number of valid heap tuples returned via index scan per\n> index.\n> - Number of buffer cache lookups done for heap fetches via\n> index scan per index.\n> - Number of buffer cache hits for heap fetches via index\n> scan per index.\n> - Number of buffer cache lookups not accountable for any of\n> the above.\n> - Number of buffer cache hits not accountable for any of\n> the above.\n> \n> What I see is that there's a difference in what we two want\n> to see in the statistics. You're talking about looking at the\n> actual querystring and such. That's information useful for\n> someone actually looking at a server, to see what a\n> particular backend is doing. On my notebook a parallel\n> regression test (containing >4,000 queries) passes by under\n> 1:30, that's more than 40 queries per second. 
So that doesn't\n> tell me much.\n> \n> What I'm after is to collect the above data over a week or so\n> and then generate a report to identify the hot spots of the\n> schema. Which tables/indices cause the most disk I/O, what's\n> the average percentage of tuples returned in scans (not from\n> the query, I mean from the single scan inside of the joins).\n> That's the information I need to know where to look for\n> possibly better qualifications, useless indices that aren't\n> worth to maintain and the like.\n> \n\nI was going to have the per-table stats insert a stat record every time\nit does a sequential scan, so it would be [oid][sequential_scan_value]\nand allow the collector to gather that and aggregate it.\n\nI didn't think we wanted each backend to do the aggregation per oid. \nSeems expensive. Maybe we would need a count for things like \"number of\nrows returned\" so it would be [oid][stat_type][value].\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Mar 2001 13:10:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "I have talked to Jan over the phone, and he has convinced me that UDP is\nthe proper way to communicate stats to the collector, rather than my\nshared memory idea.\n\nThe advantages of his UDP approach is that the collector can sleep on\nthe UDP socket rather than having the collector poll the shared memory\narea. It also has the auto-discard option. He will make logging\nconfigurable on a per-database level, so it can be turned off when not\nin use.\n\nHe has a trial UDP implementation that he will post soon. Also, I asked\nhim to try DGRAM Unix-domain sockets for performance reasons. My\nSteven's book says it they should be supported. He can put the socket\nfile in /data.\n\n\n\n> > > I figured it could just wake up every few seconds and check. It will\n> > > remember the loop counter and current pointer, and read any new\n> > > information. I was thinking of a 20k buffer, which could cover about 4k\n> > > events.\n> > \n> > Here I wonder what your EVENT is. With an Oid as identifier\n> > and a 1 byte (even if it'd be anoter 32-bit value), how many\n> > messages do you want to generate to get these statistics:\n> > \n> > - Number of sequential scans done per table.\n> > - Number of tuples returned via sequential scans per table.\n> > - Number of buffer cache lookups done through sequential\n> > scans per table.\n> > - Number of buffer cache hits for sequential scans per\n> > table.\n> > - Number of tuples inserted per table.\n> > - Number of tuples updated per table.\n> > - Number of tuples deleted per table.\n> > - Number of index scans done per index.\n> > - Number of index tuples returned per index.\n> > - Number of buffer cache lookups done due to scans per\n> > index.\n> > - Number of buffer cache hits per index.\n> > - Number of valid heap tuples returned via index scan per\n> > index.\n> > - Number of buffer cache lookups done for heap fetches via\n> > index scan per index.\n> > - Number of buffer cache hits for heap fetches via index\n> > 
scan per index.\n> > - Number of buffer cache lookups not accountable for any of\n> > the above.\n> > - Number of buffer cache hits not accountable for any of\n> > the above.\n> > \n> > What I see is that there's a difference in what we two want\n> > to see in the statistics. You're talking about looking at the\n> > actual querystring and such. That's information useful for\n> > someone actually looking at a server, to see what a\n> > particular backend is doing. On my notebook a parallel\n> > regression test (containing >4,000 queries) passes by under\n> > 1:30, that's more than 40 queries per second. So that doesn't\n> > tell me much.\n> > \n> > What I'm after is to collect the above data over a week or so\n> > and then generate a report to identify the hot spots of the\n> > schema. Which tables/indices cause the most disk I/O, what's\n> > the average percentage of tuples returned in scans (not from\n> > the query, I mean from the single scan inside of the joins).\n> > That's the information I need to know where to look for\n> > possibly better qualifications, useless indices that aren't\n> > worth to maintain and the like.\n> > \n> \n> I was going to have the per-table stats insert a stat record every time\n> it does a sequential scan, so it sould be [oid][sequential_scan_value]\n> and allow the collector to gather that and aggregate it.\n> \n> I didn't think we wanted each backend to do the aggregation per oid. \n> Seems expensive. Maybe we would need a count for things like \"number of\n> rows returned\" so it would be [oid][stat_type][value].\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 16:13:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "Bruce Momjian wrote:\n> I have talked to Jan over the phone, and he has convinced me that UDP is\n> the proper way to communicate stats to the collector, rather than my\n> shared memory idea.\n>\n> The advantages of his UDP approach is that the collector can sleep on\n> the UDP socket rather than having the collector poll the shared memory\n> area. It also has the auto-discard option. He will make logging\n> configurable on a per-database level, so it can be turned off when not\n> in use.\n>\n> He has a trial UDP implementation that he will post soon. Also, I asked\n> him to try DGRAM Unix-domain sockets for performance reasons. My\n> Steven's book says it they should be supported. He can put the socket\n> file in /data.\n\n\"Trial\" implementation attached :-)\n\n First attachment is a patch for various backend files plus\n generating two new source files. If your patch(1) doesn't put\n 'em automatically, they go to src/include/pgstat.h and\n src/backend/postmaster/pgstat.c.\n\n BTW: tgl on 2/99 was right, the hash_destroy() really\n crashes. Maybe we want to pull out the fix I've done\n (includes some new feature for hash table memory allocation)\n and apply that to 7.1?\n\n Second attachment is a tarfile that should unpack to\n contrib/pgstat_tmp. I've placed the SQL level functions into\n a shared module for now. The sql script also creates a couple\n of views.\n\n - pgstat_all_tables shows scan- and tuple based statistics\n for all tables. pgstat_sys_tables and pgstat_user_tables\n filter out (you guess what) system or user tables.\n\n - pgstatio_all_tables, pgstatio_sys_tables and\n pgstatio_user_tables show buffer IO statistics for\n tables.\n\n - pgstat_*_indexes and pgstatio_*_indexes are similar like\n the above, just that they give detailed info about each\n single index.\n\n - pgstatio_*_sequences shows buffer IO statistics about -\n right, sequences. 
Since sequences aren't scanned\n regularly, they have no scan- and tuple related view.\n\n - pgstat_activity shows information about all currently\n running backends of the entire instance. The underlying\n function for displaying the actual query returns NULL\n always for non-superusers.\n\n - pgstat_database shows transaction commit/abort counts and\n cumulated buffer IO statistics for all existing\n databases.\n\n The collector frequently writes a file data/pgstat.stat\n (approx. every 500 milliseconds as long as there is something\n to tell, so nothing is done if the entire installation\n sleeps). It also reads this file on startup, so collected\n statistics survive postmaster restarts.\n\n TODO:\n\n - Are PF_UNIX SOCK_DGRAM sockets supported on all the\n platforms we support? If not, what's wrong with the current\n implementation?\n\n - There is no way yet to tell the collector about objects\n (relations and databases) removed from the database.\n Basically that could be done with messages too, but who\n will send them and how can we guarantee that they'll be\n generated even if somebody never queries the statistics?\n Thus, the current collector will grow, and grow, and grow\n until you remove the pgstat.stat file while the\n postmaster is down.\n\n - Also there aren't functions or messages implemented to\n explicitly reset statistics.\n\n - Possible additions would be to remember when the backends\n started and collect resource usage (rstat(2)) information\n as well.\n\n - The entire thing needs an additional attribute in\n pg_database that tells the backends what to tell the\n collector at all. Just to make them quiet again.\n\n So far for an actual snapshot. Comments?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #",
"msg_date": "Thu, 22 Mar 2001 09:22:54 -0500 (EST)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance monitor signal handler"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Vince Vielhaber [mailto:vev@michvhf.com] \n> Sent: 30 January 2003 19:20\n> To: Lamar Owen\n> Cc: Tom Lane; Dave Page; Ron Mayer; pgsql-hackers@postgresql.org\n> Subject: Re: [mail] Re: [HACKERS] Windows Build System\n> \n> \n> I've \n> been on both sides know that the windows user/developer \n> doesn't hold things to the same standards as the unix user/developer.\n\nI ought to plonk you for a comment like that. Especially coming from the\nperson whose crap I've been trying to sort out for the last couple of\nmonths.\n\n> Since you're pretty much ignoring my reasoning, I'll give you \n> the same consideration. The history of windows as a platform \n> has shown itself to be rather fragile compared to unix.\n\nWhen properly configured, Windows can be reliable, maybe not as much as\nSolaris or HPUX but certainly some releases of Linux (which I use as\nwell). You don't see Oracle or IBM avoiding Windows 'cos it isn't stable\nenough.\n\n> Before you respond to this, read Tom Lane's response and \n> reply to that.\n\n*I* did. I volunteered to do some more of the testing we're all so\nresistant to.\n\nDave.\n",
"msg_date": "Thu, 30 Jan 2003 19:56:30 -0000",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [mail] Re: Windows Build System"
},
{
"msg_contents": "On Thu, 30 Jan 2003, Dave Page wrote:\n\n>\n>\n> > -----Original Message-----\n> > From: Vince Vielhaber [mailto:vev@michvhf.com]\n> > Sent: 30 January 2003 19:20\n> > To: Lamar Owen\n> > Cc: Tom Lane; Dave Page; Ron Mayer; pgsql-hackers@postgresql.org\n> > Subject: Re: [mail] Re: [HACKERS] Windows Build System\n> >\n> >\n> > I've\n> > been on both sides know that the windows user/developer\n> > doesn't hold things to the same standards as the unix user/developer.\n>\n> I ought to plonk you for a comment like that. Especially coming from the\n> person who's crap I've been trying to sort out for the last couple of\n> months.\n\nGrow up Dave. That shit doesn't belong on this or any other list. If\nyou didn't want to do something, you shouldn't have volunteered to do it.\n\nVince.\n-- \n Fast, inexpensive internet service 56k and beyond! http://www.pop4.net/\n http://www.meanstreamradio.com http://www.unknown-artists.com\n Internet radio: It's not file sharing, it's just radio.\n\n",
"msg_date": "Thu, 30 Jan 2003 15:05:37 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: [mail] Re: Windows Build System"
},
{
"msg_contents": "On Thu, 2003-01-30 at 13:56, Dave Page wrote:\n> When properly configured, Windows can be reliable, maybe not as much as\n> Solaris or HPUX but certainly some releases of Linux (which I use as\n> well). You don't see Oracle or IBM avoiding Windows 'cos it isn't stable\n> enough.\n\nI'm not jumping on one side or the other but I wanted to make clear on\nsomething. The fact that IBM or Oracle use windows has absolutely zero\nto do with reliability or stability. They are there because the market\nis willing to spend money on their product. Let's face it, the share\nholders of each respective company would come unglued if the largest\nsoftware audience in the world were completely ignored.\n\nSimple fact is, your example really is pretty far off from supporting\nany view. Bluntly stated, both are in that market because they want to\nmake money; they're even obligated to do so.\n\n\n-- \nGreg Copeland <greg@copelandconsulting.net>\nCopeland Computer Consulting\n\n",
"msg_date": "30 Jan 2003 16:37:22 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: [mail] Re: Windows Build System"
},
{
"msg_contents": "Greg Copeland wrote:\n> On Thu, 2003-01-30 at 13:56, Dave Page wrote:\n> > When properly configured, Windows can be reliable, maybe not as much as\n> > Solaris or HPUX but certainly some releases of Linux (which I use as\n> > well). You don't see Oracle or IBM avoiding Windows 'cos it isn't stable\n> > enough.\n> \n> I'm not jumping on one side or the other but I wanted to make clear on\n> something. The fact that IBM or Oracle use windows has absolutely zero\n> to do with reliability or stability. They are there because the market\n> is willing to spend money on their product. Let's face it, the share\n> holders of each respective company would come unglued if the largest\n> software audience in the world were completely ignored.\n> \n> Simple fact is, your example really is pretty far off from supporting\n> any view. Bluntly stated, both are in that market because they want to\n> make money; they're even obligated to do so.\n\nThat's true, but it ignores the question that makes it relevant: has\ntheir appearance in the Windows market tarnished their reputation?\nMore precisely, has it tarnished their reputation in the *Unix*\ncommunity? The answer, I think, is no.\n\nAnd that *is* relevant to us, because our concern is about the\nreputation of PostgreSQL, and what will happen to it if we release a\nnative Windows port to the world.\n\n\nOf course, you could argue that Oracle and IBM didn't have much of a\nreputation anyway, and I wouldn't be able to say much to that. :-)\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n",
"msg_date": "Thu, 30 Jan 2003 14:54:19 -0800",
"msg_from": "Kevin Brown <kevin@sysexperts.com>",
"msg_from_op": false,
"msg_subject": "Re: [mail] Re: Windows Build System"
},
{
"msg_contents": "Kevin Brown wrote:\n> \n> Greg Copeland wrote:\n> > On Thu, 2003-01-30 at 13:56, Dave Page wrote:\n> > > When properly configured, Windows can be reliable, maybe not as much as\n> > > Solaris or HPUX but certainly some releases of Linux (which I use as\n> > > well). You don't see Oracle or IBM avoiding Windows 'cos it isn't stable\n> > > enough.\n> >\n> > I'm not jumping on one side or the other but I wanted to make clear on\n> > something. The fact that IBM or Oracle use windows has absolutely zero\n> > to do with reliability or stability. They are there because the market\n> > is willing to spend money on their product. Let's face it, the share\n> > holders of each respective company would come unglued if the largest\n> > software audience in the world were completely ignored.\n> >\n> > Simple fact is, your example really is pretty far off from supporting\n> > any view. Bluntly stated, both are in that market because they want to\n> > make money; they're even obligated to do so.\n> \n> That's true, but it ignores the question that makes it relevant: has\n> their appearance in the Windows market tarnished their reputation?\n> More precisely, has it tarnished their reputation in the *Unix*\n> community? The answer, I think, is no.\n> \n> And that *is* relevant to us, because our concern is about the\n> reputation of PostgreSQL, and what will happen to it if we release a\n> native Windows port to the world.\n\nMore to the point, does the unreliable Cygwin port possibly do our\nreputation any good? It is known to crash with corruptions under less\nthan heavy load. \n\nLooking at the arguments so far, nearly everyone who questions the Win32\nport must be vehemently against the Cygwin stuff anyway. So that camp\nshould be happy to see it flushed down the toilet. 
And the pro-Win32\npeople want the native version because they are unhappy with the\nstepchild-Cygwin stuff too, so they won't care too much.\n\nAnyone here who likes the Cygwin port or can we yank it out right now?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Fri, 31 Jan 2003 00:29:39 -0500",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: [mail] Re: Windows Build System"
},
{
"msg_contents": "Jan Wieck wrote:\n> Looking at the arguments so far, nearly everyone who questions the Win32\n> port must be vehemently against the Cygwin stuff anyway. So that camp\n> should be happy to see it flushed down the toilet. And the pro-Win32\n> people want the native version because they are unhappy with the\n> stepchild-Cygwin stuff too, so they won't care too much.\n\nWhat is interesting is that the MySQL folk don't seem to be vehemently against \nit, as a look at their downloads pages indicate that they depend on Cygwin for \nthe Windows port of their product.\n--\noutput = (\"cbbrowne\" \"@ntlug.org\")\nhttp://www.ntlug.org/~cbbrowne/lisp.html\n\"What did we agree about a leader??\"\n\"We agreed we wouldn't have one.\"\n\"Good. Now shut up and do as I say...\"\n\n\n",
"msg_date": "Fri, 31 Jan 2003 08:14:27 -0500",
"msg_from": "cbbrowne@cbbrowne.com",
"msg_from_op": false,
"msg_subject": "Re: [mail] Re: Windows Build System "
},
{
"msg_contents": "The U.S. Census provides a database of street polygons and other data \nabout landmarks, elevation, etc. This was discussed in a separate thread.\n\nThe main URL is here:\nhttp://www.census.gov/geo/www/tiger/index.html\n\nMy loader was written for the 2000 version; the 2002 version has some \ndifferences, but it should be easy enough to add the fields.\n\nOn my site, in the downloads section, at the bottom is the tigerua \nloader. It is very raw, just hacked together to load the data. It may \ntake a little work to function with 2002 files, I have not looked at \nthat yet.\n\nMy site:\nhttp://www.mohawksoft.com\n\n",
"msg_date": "Tue, 08 Apr 2003 08:52:22 -0400",
"msg_from": "mlw <pgsql@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Complex database for testing, U.S. Census Tiger/UA"
},
{
"msg_contents": "mlw wrote:\n> \n> The U.S. Census provides a database of street polygons and other data\n> about landmarks, elevation, etc. This was discussed in a separate thread.\n> \n> The main URL is here:\n> http://www.census.gov/geo/www/tiger/index.html\n\nWhile yes, the tiger database (or better it's content) is interesting, I\ndon't think that it can be counted as a \"complex database\". Just that\nsomething is big doesn't mean that.\n\n> \n> My loader was written for the 2000 version, the 2002 version has some\n> difference, but it should be easy enough to ad the fields.\n\nOT:\n\nJust out of curiosity, do you plan more on this? I was playing around\nwith the 2000 version a while back, but the Garmin GPS units\nunfortunately use a proprietary map format, so one cannot generate his\nown detail maps for download. The waypoint and route data protocol is\nwell known though.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n",
"msg_date": "Tue, 08 Apr 2003 10:32:52 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Complex database for testing, U.S. Census Tiger/UA"
},
{
"msg_contents": "> mlw wrote:\n>> \n>> The U.S. Census provides a database of street polygons and other data\n>> about landmarks, elevation, etc. This was discussed in a separate\n>> thread.\n>> \n>> The main URL is here:\n>> http://www.census.gov/geo/www/tiger/index.html\n> \n> While yes, the tiger database (or better it's content) is interesting,\n> I don't think that it can be counted as a \"complex database\". Just that\n> something is big doesn't mean that.\n\nI guess you are right, but there are a lot of related tables. I wouldn't\ncall it simple, though. It can get huge, however.\n> \n>> \n>> My loader was written for the 2000 version, the 2002 version has some\n>> difference, but it should be easy enough to ad the fields.\n> \n> OT:\n> \n> Just out of curiosity, do you plan more on this? I was playing around\n> with the 2000 version a while back, but the Garmin GPS units\n> unfortunately use a proprietary map format, so one cannot generate his\n> own detail maps for download. The waypoint and route data protocol is\n> well known though.\n\nI'm not sure what a Garmin GPS unit is, but the TigerUA DB uses longitude\nand latitude. Any reasonable geographical system must somehow map to lat/long.\n\nActually, I am going to download the latest version and get it installed on\na system. There is a project I plan to work on in the near future, after all\nthe other crap I gotta do, that will make use of the data.\n\n",
"msg_date": "Tue, 8 Apr 2003 11:12:11 -0400 (EDT)",
"msg_from": "pgsql@mohawksoft.com",
"msg_from_op": false,
"msg_subject": "Re: Complex database for testing, U.S. Census Tiger/UA"
},
{
"msg_contents": "Jan Wieck wrote:\n> mlw wrote:\n> > \n> > The U.S. Census provides a database of street polygons and other data\n> > about landmarks, elevation, etc. This was discussed in a separate thread.\n> > \n> > The main URL is here:\n> > http://www.census.gov/geo/www/tiger/index.html\n> \n> While yes, the tiger database (or better it's content) is interesting, I\n> don't think that it can be counted as a \"complex database\". Just that\n> something is big doesn't mean that.\n\nJust so.\n\nThere are doubtless interesting cases that may be tested by virtue of\nhaving a data set that is large, and perhaps \"deeply interlinked.\"\n\nBut that only covers cases that have to do with \"largeness.\" It doesn't\nhelp ensure that PostgreSQL plays well when it gets hit by nested sets\nof updates where the challenges involve ensuring the system performs OK\nand does not deadlock when hit by complex sets of transactions.\n\nSo that an \"interesting\" database might involve not only a database, but\nalso a set of transactions that hit multiple tables that are to update\nthat database. In effect, something like the \"readers/writers\" that get\nused to test locking semantics.\n\nThis is something that would not be able to solely consist of a set of\ntables; it would have to include streams of updates. Something like one\nof the TPC benchmarks...\n--\noutput = reverse(\"moc.enworbbc@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/rdbms.html\n\"If I could find a way to get [Saddam Hussein] out of there, even\nputting a contract out on him, if the CIA still did that sort of a\nthing, assuming it ever did, I would be for it.\" -- Richard M. Nixon\n\n",
"msg_date": "Tue, 08 Apr 2003 11:24:06 -0400",
"msg_from": "cbbrowne@cbbrowne.com",
"msg_from_op": false,
"msg_subject": "Re: Complex database for testing, U.S. Census Tiger/UA "
},
{
"msg_contents": "Around 11:24 on Apr 8, 2003, cbbrowne@cbbrowne.com said:\n\n\tI think it was my first application I wrote in python which parsed\nthe zip files containing these data and shoved it into a postgres system.\nI had multiple clients on four or five computers running nonstop for about\ntwo weeks to get it all populated.\n\n\tBy the time I was done, and got my first index created, I began to\nrun out of disk space. I think I only had about 70GB to work with on the\nRAID array.\n\n# Jan Wieck wrote:\n# > mlw wrote:\n# > >\n# > > The U.S. Census provides a database of street polygons and other data\n# > > about landmarks, elevation, etc. This was discussed in a separate thread.\n# > >\n# > > The main URL is here:\n# > > http://www.census.gov/geo/www/tiger/index.html\n# >\n# > While yes, the tiger database (or better it's content) is interesting, I\n# > don't think that it can be counted as a \"complex database\". Just that\n# > something is big doesn't mean that.\n#\n# Just so.\n#\n# There are doubtless interesting cases that may be tested by virtue of\n# having a data set that is large, and perhaps \"deeply interlinked.\"\n#\n# But that only covers cases that have to do with \"largeness.\" It doesn't\n# help ensure that PostgreSQL plays well when it gets hit by nested sets\n# of updates where the challenges involve ensuring the system performs OK\n# and does not deadlock when hit by complex sets of transactions.\n#\n# So that an \"interesting\" database might involve not only a database, but\n# also a set of transactions that hit multiple tables that are to update\n# that database. In effect, something like the \"readers/writers\" that get\n# used to test locking semantics.\n#\n# This is something that would not be able to solely consist of a set of\n# tables; it would have to include streams of updates. 
Something like one\n# of the TPC benchmarks...\n# --\n# output = reverse(\"moc.enworbbc@\" \"enworbbc\")\n# http://www3.sympatico.ca/cbbrowne/rdbms.html\n# \"If I could find a way to get [Saddam Hussein] out of there, even\n# putting a contract out on him, if the CIA still did that sort of a\n# thing, assuming it ever did, I would be for it.\" -- Richard M. Nixon\n#\n#\n\n--\nSPY My girlfriend asked me which one I like better.\npub 1024/3CAE01D5 1994/11/03 Dustin Sallings <dustin@spy.net>\n| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE\nL_______________________ I hope the answer won't upset her. ____________\n\n",
"msg_date": "Tue, 8 Apr 2003 09:35:10 -0700",
"msg_from": "Dustin Sallings <dustin@spy.net>",
"msg_from_op": false,
"msg_subject": "Re: Complex database for testing, U.S. Census Tiger/UA "
},
{
"msg_contents": "Dustin Sallings wrote:\n> \tI think it was my first application I wrote in python which parsed\n> the zip files containing these data and shoved it into a postgres system.\n> I had multiple clients on four or five computers running nonstop for about\n> two weeks to get it all populated.\n> \n> \tBy the time I was done, and got my first index created, I began to\n> run out of disk space. I think I only had about 70GB to work with on the\n> RAID array.\n\nBut this does not establish that this data represents a meaningful\n\"transactional\" load.\n\nBased on the sources, which presumably involve unique data, the\n\"transactions\" are all touching independent sets of data, and are likely\nto be totally uninteresting from the perspective of seeing how the\nsystem works under /TRANSACTION/ load.\n\nTRANSACTION loading will involve doing updates that actually have some\nopportunity to trample on one another. Multiple transactions\nconcurrently updating a single balance table. Multiple transactions\nconcurrently trying to attach links to a table entry. That sort of\nthing.\n\nI remember a while back when MSFT did a \"enterprise scalability day,\"\nwhere they were trumpeting SQL Server performance on \"hundreds of\nmillions of transactions.\" At the time, I was at Sabre, who actually do\ntens of millions of transactions per day, for passenger reservations\nacross lotso airlines. Microsoft was making loud noises to the effect\nthat NT Server was wonderful for \"enterprise transaction\" work; the guys\nat work just laughed, because the kind of performance they got involved\nconsiderable amounts of 370 assembler to tune vital bits of the\nsystems.\n\nWhat happened in the \"scalability tests\" was that Microsoft did much the\nsame thing you did; they had hordes of transactions going through that\nwere well, basically independent of one another. They could \"scale\"\nthings up trivially by adding extra boxes. Need to handle 10x the\ntransactions? 
Well, since they don't actually modify any shared\nresources, you just need to put in 10x as many servers.\n\nAnd that's essentially what happens any time TPC-? benchmarks reach the\npoint of irrelevance; that happens every time someone figures out some\n\"hack\" that is able to successfully partition the work load. At that\npoint, they merely need to add a bit of extra hardware, and increasing\nperformance is as easy as adding extra processor boards. The real world\ndoesn't scale so easily...\n--\n(concatenate 'string \"cbbrowne\" \"@acm.org\")\nhttp://cbbrowne.com/info/emacs.html\nSend messages calling for fonts not available to the recipient(s).\nThis can (in the case of Zmail) totally disable the user's machine and\nmail system for up to a whole day in some circumstances.\n-- from the Symbolics Guidelines for Sending Mail\n\n",
"msg_date": "Tue, 08 Apr 2003 14:58:42 -0400",
"msg_from": "cbbrowne@cbbrowne.com",
"msg_from_op": false,
"msg_subject": "Re: Complex database for testing, U.S. Census Tiger/UA "
}
] |
[
{
"msg_contents": "I just compiled and installed postgresql from the cvs (upgraded from 7.1beta5 \nbecause of some problems I had), followed the steps, and the database server \nworks great, but my startup scripts don't work anymore.\nWhat the script has, works if I try to do it as postgres, but with a \nsu -l postgres -c 'command' as root it doesn't work.\nThis is the script (on Solaris 7):\n\n#!/bin/bash\n# postgresql\tThis is the init script for starting up the PostgreSQL\n#\t\tserver\n#\n# chkconfig: 345 85 15\n# description: Starts and stops the PostgreSQL backend daemon that handles \\\n#\t all database requests.\n# processname: postmaster\n#\n\n# This script is slightly unusual in that the name of the daemon (postmaster)\n# is not the same as the name of the subsystem (postgresql)\n\n# See how we were called.\ncase \"$1\" in\n start)\n\techo -n \"Starting postgresql service: \"\n\tsu -l postgres -c '/dbs/postgres/bin/pg_ctl -o \"-i\" -D /dbs/postgres/data/ \nstart -l /dbs/postgres/sql.log'\n\t;;\n stop)\n\techo -n \"Stopping postgresql service: \"\n\tsu -l postgres -c '/dbs/postgres/bin/pg_ctl -m i -D /dbs/postgres/data stop'\n\techo\n\t;;\n status)\n\tsu -l postgres -c '/dbs/postgres/bin/pg_ctl -D /dbs/postgres/data/ status'\n\t;;\n restart)\n\t$0 stop\n\tsleep 10\n\t$0 start\n\t;;\n *)\n\techo \"Usage: postgresql {start|stop|status|restart}\"\n\texit 1\nesac\n\nexit 0\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Fri, 16 Mar 2001 16:47:54 -0300",
"msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "problems with startup script on upgrade"
},
{
"msg_contents": "\"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> ... my startup scripts don't work anymore.\n> What the script has, works if I try to do it as postgres, but with a \n> su -l postgres -c 'command' as root it doesn't work.\n\nPlease define \"doesn't work\". What happens exactly? What messages\nare produced?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 15:46:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with startup script on upgrade "
},
{
"msg_contents": "El Vie 16 Mar 2001 17:46, Tom Lane escribió:\n> \"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> > ... my startup scripts don't work anymore.\n> > What the script has, works if I try to do it as postgres, but with a\n> > su -l postgres -c 'command' as root it doesn't work.\n>\n> Please define \"doesn't work\". What happens exactly? What messages\n> are produced?\n\nroot@ultra31 /space/pruebas/postgres-cvs # su postgres -c \n'/dbs/postgres/bin/pg_ctl -o \"-i\" -D /dbs/postgres/data/ start -l \n/dbs/postgres/sql.log'\n19054 Killed\npostmaster successfully started\nroot@ultra31 /space/pruebas/postgres-cvs #\n\nNo postmaster after that!\n\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Fri, 16 Mar 2001 18:49:16 -0300",
"msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "Re: problems with startup script on upgrade"
},
{
"msg_contents": "El Vie 16 Mar 2001 18:58, escribiste:\n> \"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> >> Please define \"doesn't work\". What happens exactly? What messages\n> >> are produced?\n> >\n> > root@ultra31 /space/pruebas/postgres-cvs # su postgres -c\n> > '/dbs/postgres/bin/pg_ctl -o \"-i\" -D /dbs/postgres/data/ start -l\n> > /dbs/postgres/sql.log'\n> > 19054 Killed\n> > postmaster successfully started\n> > root@ultra31 /space/pruebas/postgres-cvs #\n>\n> Hm, that 'Killed' looks suspicious. What shows up in the\n> /dbs/postgres/sql.log file?\n\nNothing at all.\n\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Fri, 16 Mar 2001 18:56:07 -0300",
"msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "Re: problems with startup script on upgrade"
},
{
"msg_contents": "\"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n>> Please define \"doesn't work\". What happens exactly? What messages\n>> are produced?\n\n> root@ultra31 /space/pruebas/postgres-cvs # su postgres -c \n> '/dbs/postgres/bin/pg_ctl -o \"-i\" -D /dbs/postgres/data/ start -l \n> /dbs/postgres/sql.log'\n> 19054 Killed\n> postmaster successfully started\n> root@ultra31 /space/pruebas/postgres-cvs #\n\nHm, that 'Killed' looks suspicious. What shows up in the\n/dbs/postgres/sql.log file?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 16:58:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with startup script on upgrade "
},
{
"msg_contents": "\"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n>> Hm, that 'Killed' looks suspicious. What shows up in the\n>> /dbs/postgres/sql.log file?\n\n> Nothing at all.\n\nThat's no help :-(. Please alter the command to trace the shell script,\nie\n\nsu postgres -c 'sh -x /dbs/postgres/bin/pg_ctl -o ... 2>tracefile'\n\nand send the tracefile.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 17:05:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with startup script on upgrade "
},
{
"msg_contents": "El Vie 16 Mar 2001 19:05, Tom Lane escribió:\n> \"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> >> Hm, that 'Killed' looks suspicious. What shows up in the\n> >> /dbs/postgres/sql.log file?\n> >\n> > Nothing at all.\n>\n> That's no help :-(. Please alter the command to trace the shell script,\n> ie\n>\n> su postgres -c 'sh -x /dbs/postgres/bin/pg_ctl -o ... 2>tracefile'\n>\n> and send the tracefile.\n\nThere it goes.\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------",
"msg_date": "Fri, 16 Mar 2001 19:08:29 -0300",
"msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "Re: problems with startup script on upgrade"
},
{
"msg_contents": "El Vie 16 Mar 2001 19:05, Tom Lane escribió:\n> \"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> >> Hm, that 'Killed' looks suspicious. What shows up in the\n> >> /dbs/postgres/sql.log file?\n> >\n> > Nothing at all.\n>\n> That's no help :-(. Please alter the command to trace the shell script,\n> ie\n>\n> su postgres -c 'sh -x /dbs/postgres/bin/pg_ctl -o ... 2>tracefile'\n>\n> and send the tracefile.\n\nFound something, but just can't get why it started happening. There was some \nlog to the sql.log:\n\nld.so.1: /dbs/postgres/bin/postmaster: fatal: libz.so: open failed: No such \nfile or directory\n\nNow, libz.so is in the LD_LIBRARY_PATH of the postgres user, so why is it \nthat Solaris doesn't load the .profile in the postgres directory.\n\nI'm astonished!\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Fri, 16 Mar 2001 19:24:06 -0300",
"msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "Re: problems with startup script on upgrade"
},
{
"msg_contents": "\"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> Now, libz.so is in the LD_LIBRARY_PATH of the postgres user, so why is it \n> that Solaris doesn't load the .profile in the postgres directory.\n\nAh, but is the LD_LIBRARY_PATH the same inside that su? A change of\nenvironment might explain why this works \"by hand\" and not through su\n...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 17:35:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problems with startup script on upgrade "
},
{
"msg_contents": "El Vie 16 Mar 2001 19:35, Tom Lane escribió:\n> \"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> > Now, libz.so is in the LD_LIBRARY_PATH of the postgres user, so why is it\n> > that Solaris doesn't load the .profile in the postgres directory.\n>\n> Ah, but is the LD_LIBRARY_PATH the same inside that su? A change of\n> environment might explain why this works \"by hand\" and not through su\n> ...\n\nThis #$^%^*$% Solaris!!!!!!\nCheck this out, and tell me I shouldn't yell out at SUN:\n\nroot@ultra31 / # su - postgres -c 'echo $PATH'\n/usr/bin:\nroot@ultra31 / # su - postgres\npostgres@ultra31:~ > echo $PATH\n/usr/local/bin:/usr/local/gcc/bin:/usr/local/php/bin:/opt/sfw/bin:/usr/local/a2p/bin:/usr/local/sql/bin:/usr/ccs/bin:/bin:/usr/bin/X11:/usr/bin:/usr/ucb:/dbs/postgres/bin:\npostgres@ultra31:~ > logout\nroot@ultra31 / #\n\nCan someone explain to why Solaris is doing that, and why did it start doing \nit after an upgrade? I have no words.\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n",
"msg_date": "Fri, 16 Mar 2001 19:55:12 -0300",
"msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>",
"msg_from_op": true,
"msg_subject": "Re: problems with startup script on upgrade"
},
{
"msg_contents": "> > Ah, but is the LD_LIBRARY_PATH the same inside that su? A change of\n> > environment might explain why this works \"by hand\" and not through su\n> > ...\n> This #$^%^*$% Solaris!!!!!!\n> Check this out, and tell me I shouldn't yell out at SUN:\n> root@ultra31 / # su - postgres -c 'echo $PATH'\n> /usr/bin:\n> root@ultra31 / # su - postgres\n> postgres@ultra31:~ > echo $PATH\n/usr/local/bin:/usr/local/gcc/bin:/usr/local/php/bin:/opt/sfw/bin:/usr/local/a2p/bin:/usr/local/sql/bin:/usr/ccs/bin:/bin:/usr/bin/X11:/usr/bin:/usr/ucb:/dbs/postgres/bin:\n> postgres@ultra31:~ > logout\n> root@ultra31 / #\n> Can someone explain to why Solaris is doing that, and why did it start doing\n> it after an upgrade? I have no words.\n\nIt may be that this is the first build of PostgreSQL which asks for\n\"libz.so\", but that is just a guess.\n\nNot sure about \"after the upgrade\", but I'll bet that the first (command\nline) case does not have an attached terminal, while the second case,\nwhere you actually connect to the session, does.\n\nDoes your .profile try doing some \"terminal stuff\"? Try adding echo's to\nyour .profile to verify that it start, and that it runs to completion...\n\nAlso, PATH is not relevant for finding libz.so, so you need to figure\nout what (if anything) is happening to LD_LIBRARY_PATH.\n\n - Thomas\n",
"msg_date": "Sat, 17 Mar 2001 00:36:42 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: problems with startup script on upgrade"
},
{
"msg_contents": "Hi all\n\nOn Fri, 16 Mar 2001, Martin A. Marques wrote:\n> ld.so.1: /dbs/postgres/bin/postmaster: fatal: libz.so: open failed: No such \n> file or directory\n> \n> Now, libz.so is in the LD_LIBRARY_PATH of the postgres user, so why is it \n> that Solaris doesn't load the .profile in the postgres directory.\n\nThe main trouble with all of this is that LD_LIBRARY_PATH is irrelevant\nhere.\n\n From man ld.so.1:\n\nSECURITY\n To prevent malicious dependency substitution or symbol\n interposition, some restrictions may apply to the evaluation\n of the dependencies of secure processes.\n\n The runtime linker categorizes a process as secure if the\n user is not a super user, and either the real user and\n effective user identifiers are not equal, or the real group\n and effective group identifiers are not equal. See\n getuid(2), geteuid(2), getgid(2), and getegid(2).\n\n If an LD_LIBRARY_PATH environment variable is in effect for\n a secure process, then only the trusted directories speci-\n fied by this variable will be used to augment the runtime\n linker's search rules. Presently, the only trusted direc-\n tory known to the runtime linker is /usr/lib.\n\nThere are many way to solve the problem:\n the easy -- copy (or link) libz.so to /usr/lib\n the clean -- avoid using LD_LIBRARY_PATH, use -R for linking instead\n\nRegards,\nASK\n\n",
"msg_date": "Sun, 18 Mar 2001 18:40:32 +0200 (IST)",
"msg_from": "Alexander Klimov <ask@wisdom.weizmann.ac.il>",
"msg_from_op": false,
"msg_subject": "Re: problems with startup script on upgrade"
},
{
"msg_contents": "Alexander Klimov writes:\n\n> There are many way to solve the problem:\n> the easy -- copy (or link) libz.so to /usr/lib\n> the clean -- avoid using LD_LIBRARY_PATH, use -R for linking instead\n\nOur makefiles are set up to use '-R' for linking. Does this not work as\ndesigned?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 18 Mar 2001 20:30:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: problems with startup script on upgrade"
},
{
"msg_contents": "On Sun, 18 Mar 2001, Peter Eisentraut wrote:\n> Alexander Klimov writes:\n> > There are many way to solve the problem:\n> > the easy -- copy (or link) libz.so to /usr/lib\n> > the clean -- avoid using LD_LIBRARY_PATH, use -R for linking instead\n> \n> Our makefiles are set up to use '-R' for linking. Does this not work as\n> designed?\n\nIt depends on what it was designed for :-) My guess is that currently\nthere is only something like `-R/usr/local/pgsql' in linking, but not for\nlocations of other libraries: libz, libssl, libcrypto, etc.\n\nI guess that Martin's case of running postgress as `secure application' is\nsomething unusual, but there is more usual case: if you run apache as root\n(to be able to bind it to port 80) and then use from perl libpq (compiled\nwith ssl) you will definitely have a trouble -- ssl will not be found.\n\nRegards,\nASK\n\n\n",
"msg_date": "Mon, 19 Mar 2001 13:47:37 +0200 (IST)",
"msg_from": "Alexander Klimov <ask@wisdom.weizmann.ac.il>",
"msg_from_op": false,
"msg_subject": "Re: Re: problems with startup script on upgrade"
}
] |
[
{
"msg_contents": "\nwill do an announce later on tonight, to give the mirrors a chance to\nstart syncing ... can others confirm that the packaging once more looks\nclean?\n\nthanks ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 16 Mar 2001 17:55:54 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "beta6 packaged ..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> will do an announce later on tonight, to give the mirrors a chance to\n> start syncing ... can others confirm that the packaging once more looks\n> clean?\n\nThe main tar.gz matches what I have here. Didn't look at the partial\ntarballs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 17:53:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: beta6 packaged ... "
}
] |
[
{
"msg_contents": "Got it at spin.c:156 with 50 clients doing inserts into\n50 tables (int4, text[1-256 bytes]).\n-B 16384, -wal_buffers=256 (with default others wal params).\n\nVadim\n",
"msg_date": "Fri, 16 Mar 2001 14:06:02 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "Stuck spins in current"
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Got it at spin.c:156 with 50 clients doing inserts into\n> 50 tables (int4, text[1-256 bytes]).\n> -B 16384, -wal_buffers=256 (with default others wal params).\n\nSpinAcquire() ... but on which lock?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 17:47:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> Got it at spin.c:156 with 50 clients doing inserts into\n>> 50 tables (int4, text[1-256 bytes]).\n>> -B 16384, -wal_buffers=256 (with default others wal params).\n\n> SpinAcquire() ... but on which lock?\n\nAfter a little bit of thought I'll bet it's ControlFileLockId.\n\nLikely we shouldn't be using a spinlock at all for that, but the\nshort-term solution might be a longer timeout for this particular lock.\nAlternatively, could we avoid holding that lock while initializing a\nnew log segment?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 17:56:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
}
] |
[
{
"msg_contents": "Not sure if this counts as *major*, but this jdbc1 compile\nproblem is presumably still there:\n\nhttp://www.postgresql.org/mhonarc/pgsql-bugs/2001-03/msg00003.html\n\nas the referenced source file hasn't been updated. If I recall\nright, Peter was waiting for a java 1 SDK to be installed.\n",
"msg_date": "Fri, 16 Mar 2001 17:32:48 -0500",
"msg_from": "Nat Howard <nrh@pupworks.com>",
"msg_from_op": true,
"msg_subject": "Re: Beta6 for Tomorrow"
},
{
"msg_contents": "Nat Howard writes:\n\n> Not sure if this counts as *major*, but this jdbc1 compile\n> problem is presumably still there:\n>\n> http://www.postgresql.org/mhonarc/pgsql-bugs/2001-03/msg00003.html\n>\n> as the referenced source file hasn't been updated. If I recall\n> right, Peter was waiting for a java 1 SDK to be installed.\n\nCare to submit a patch? This seems easy enough to fix for someone with an\nappropriate JDK installed.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 18 Mar 2001 21:21:53 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Beta6 for Tomorrow"
},
{
"msg_contents": "Peter,\n\nI'll give it a try, and send the stuff to you directly, so you can\nsay something like \"that isn't what I meant!\".\n\n\n\n>Nat Howard writes:\n>\n>> Not sure if this counts as *major*, but this jdbc1 compile\n>> problem is presumably still there:\n>>\n>> http://www.postgresql.org/mhonarc/pgsql-bugs/2001-03/msg00003.html\n>>\n>> as the referenced source file hasn't been updated. If I recall\n>> right, Peter was waiting for a java 1 SDK to be installed.\n>\n>Care to submit a patch? This seems easy enough to fix for someone with an\n>appropriate JDK installed.\n>\n>-- \n>Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n>\n",
"msg_date": "Sun, 18 Mar 2001 20:13:35 -0500",
"msg_from": "Nat Howard <nrh@pupworks.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: Beta6 for Tomorrow "
}
] |
[
{
"msg_contents": "> >> Got it at spin.c:156 with 50 clients doing inserts into\n> >> 50 tables (int4, text[1-256 bytes]).\n> >> -B 16384, -wal_buffers=256 (with default others wal params).\n> \n> > SpinAcquire() ... but on which lock?\n> \n> After a little bit of thought I'll bet it's ControlFileLockId.\n\nI see \"XLogWrite: new log file created...\" in postmaster' log -\nbackend writes this after releasing ControlFileLockId.\n\n> Likely we shouldn't be using a spinlock at all for that, but the\n> short-term solution might be a longer timeout for this \n> particular lock.\n> Alternatively, could we avoid holding that lock while initializing a\n> new log segment?\n\nHow to synchronize with checkpoint-er if wal_files > 0?\nAnd you know - I've run same tests on ~ Mar 9 snapshot\nwithout any problems.\n\nVadim\n",
"msg_date": "Fri, 16 Mar 2001 15:16:30 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Stuck spins in current "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> Alternatively, could we avoid holding that lock while initializing a\n>> new log segment?\n\n> How to synchronize with checkpoint-er if wal_files > 0?\n\nI was sort of visualizing assigning the created xlog files dynamically:\n\n\tcreate a temp file of a PID-dependent name\n\tfill it with zeroes and fsync it\n\tacquire ControlFileLockId\n\trename temp file into place as next uncreated segment\n\tupdate pg_control\n\trelease ControlFileLockId\n\nSince the things are just filled with 0's, there's no need to know which\nsegment it will be while you're filling it.\n\nThis would leave you sometimes with more advance files than you really\nneeded, but so what ...\n\n> And you know - I've run same tests on ~ Mar 9 snapshot\n> without any problems.\n\nThat was before I changed the code to pre-fill the file --- now it takes\nlonger to init a log segment. And we're only using a plain SpinAcquire,\nnot the flavor with a longer timeout.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 18:25:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
}
] |
[
{
"msg_contents": "> > How to synchronize with checkpoint-er if wal_files > 0?\n> \n> I was sort of visualizing assigning the created xlog files \n> dynamically:\n> \n> \tcreate a temp file of a PID-dependent name\n> \tfill it with zeroes and fsync it\n> \tacquire ControlFileLockId\n> \trename temp file into place as next uncreated segment\n> \tupdate pg_control\n> \trelease ControlFileLockId\n> \n> Since the things are just filled with 0's, there's no need to \n> know which segment it will be while you're filling it.\n> \n> This would leave you sometimes with more advance files than you really\n> needed, but so what ...\n\nYes, it has sence, but:\n\n> > And you know - I've run same tests on ~ Mar 9 snapshot\n> > without any problems.\n> \n> That was before I changed the code to pre-fill the file --- \n> now it takes longer to init a log segment. And we're only\n> using a plain SpinAcquire, not the flavor with a longer timeout.\n\nxlog.c revision 1.55 from Feb 26 already had log file\nzero-filling, so ...\n\nVadim\n",
"msg_date": "Fri, 16 Mar 2001 15:34:10 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Stuck spins in current "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> And you know - I've run same tests on ~ Mar 9 snapshot\n> without any problems.\n>> \n>> That was before I changed the code to pre-fill the file --- \n>> now it takes longer to init a log segment. And we're only\n>> using a plain SpinAcquire, not the flavor with a longer timeout.\n\n> xlog.c revision 1.55 from Feb 26 already had log file\n> zero-filling, so ...\n\nOh, you're right, I didn't study the CVS log carefully enough. Hmm,\nmaybe the control file lock isn't the problem. The abort() in\ns_lock_stuck should have left a core file --- what is the backtrace?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2001 18:38:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
}
] |
[
{
"msg_contents": "> > And you know - I've run same tests on ~ Mar 9 snapshot\n> > without any problems.\n> >> \n> >> That was before I changed the code to pre-fill the file --- \n> >> now it takes longer to init a log segment. And we're only\n> >> using a plain SpinAcquire, not the flavor with a longer timeout.\n> \n> > xlog.c revision 1.55 from Feb 26 already had log file\n> > zero-filling, so ...\n> \n> Oh, you're right, I didn't study the CVS log carefully enough. Hmm,\n> maybe the control file lock isn't the problem. The abort() in\n> s_lock_stuck should have left a core file --- what is the backtrace?\n\nAfter 10 times increasing DEFAULT_TIMEOUT in s_lock.c\nI got abort in xlog.c:626 - waiting for insert_lck.\nBut problem is near new log file creation code: system\ngoes sleep just after new one is created.\n\nVadim\n",
"msg_date": "Fri, 16 Mar 2001 16:01:48 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Stuck spins in current "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> xlog.c revision 1.55 from Feb 26 already had log file\n> zero-filling, so ...\n>> \n>> Oh, you're right, I didn't study the CVS log carefully enough. Hmm,\n>> maybe the control file lock isn't the problem. The abort() in\n>> s_lock_stuck should have left a core file --- what is the backtrace?\n\n> After 10 times increasing DEFAULT_TIMEOUT in s_lock.c\n> I got abort in xlog.c:626 - waiting for insert_lck.\n> But problem is near new log file creation code: system\n> goes sleep just after new one is created.\n\nHave you learned any more about this? Or can you send your test program\nso other people can try it?\n\nIn the meantime, even if it turns out there's a different problem here,\nit seems clear to me that it's a bad idea to use a plain spinlock to\ninterlock xlog segment creation. The spinlock timeouts are not set\nhigh enough to be safe for something that could take several seconds.\nUnless someone objects, I will go ahead and work on the change I\nsuggested yesterday to not hold the ControlFileLockId spinlock while\nwe are zero-filling the new segment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Mar 2001 11:59:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> And you know - I've run same tests on ~ Mar 9 snapshot\n> without any problems.\n\nOh, I see it:\n\nProcess A is doing GetSnapShotData. It holds SInvalLock and calls\nReadNewTransactionId, which wants XidGenLockId.\n\nProcess B is doing GetNewTransactionId. It holds XidGenLockId and\nhas run out of XIDs, so it needs to write a NEXTXID log record.\nTherefore, it calls XLogInsert which wants the insert_lck.\n\nProcess C is inside XLogInsert on its first xlog entry of a transaction.\nIt holds the insert_lck and wants to put its XID into MyProc->logRec,\nfor which it needs SInvalLock.\n\nOoops.\n\nAt this point I must humbly say \"yes, you told me so\", because if I\nhadn't insisted that we needed NEXTXID records then we wouldn't have\nthis deadlock.\n\nIt looks to me like the simplest answer is to take NEXTXID records\nout again. (Fortunately, there doesn't seem to be any comparable\ncycle involving OidGenLock, or we'd need to think of a better answer.)\nI shall retire to lick my wounds, and make the changes tomorrow ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Mar 2001 22:07:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
},
{
"msg_contents": "> At this point I must humbly say \"yes, you told me so\", because if I\n\nNo, I didn't - I must humbly say that I didn't foresee this deadlock,\nso \"I didn't tell you so\" -:)\n\nAnyway, deadlock in my tests are very correlated with new log file\ncreation - something probably is still wrong...\n\nVadim\n\n\n",
"msg_date": "Sat, 17 Mar 2001 22:16:49 -0800",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
},
{
"msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> Anyway, deadlock in my tests are very correlated with new log file\n> creation - something probably is still wrong...\n\nWell, if you can reproduce it easily, seems like you could get in there\nand verify or disprove my theory about where the deadlock is.\n\nOr send the testbed and I'll try ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Mar 2001 01:22:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
}
] |
[
{
"msg_contents": "Since pg_upgrade will not work for 7.1, should its installation be\nprevented and the man page be disabled?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 17 Mar 2001 01:41:48 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "pg_upgrade"
},
{
"msg_contents": "> Since pg_upgrade will not work for 7.1, should its installation be\n> prevented and the man page be disabled?\n\nProbably. I am not sure it will ever be used again now that we have\nnumeric file names.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Mar 2001 21:32:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "> > Since pg_upgrade will not work for 7.1, should its installation be\n> > prevented and the man page be disabled?\n> \n> Probably. I am not sure it will ever be used again now that we have\n> numeric file names.\n\nPerhaps we should leave it for 7.1 because people will complain when\nthey can not find it. Maybe we can mention this may go away in the next\nrelease.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Mar 2001 22:13:48 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "> > > Since pg_upgrade will not work for 7.1, should its installation be\n> > > prevented and the man page be disabled?\n> > Probably. I am not sure it will ever be used again now that we have\n> > numeric file names.\n> Perhaps we should leave it for 7.1 because people will complain when\n> they can not find it. Maybe we can mention this may go away in the next\n> release.\n\nIf it doesn't work, and will not be made to work, then let's remove it\nfrom the tree. If someone wants to resurrect it, then it is easily\nretrieved from the cvs attic. But istm that it is not a bad thing if\npeople can not find something which will not work ;)\n\nComments?\n\n - Thomas\n",
"msg_date": "Mon, 19 Mar 2001 07:23:07 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> If it doesn't work, and will not be made to work, then let's remove it\n> from the tree.\n\nI tend to agree with Peter's slightly less drastic proposal: remove it\nfrom the installed fileset and disable its man page, without necessarily\n'cvs remove'ing all the source files. (I see we have already removed\nall the other documentation references to it, so disconnecting the ref\npage from reference.sgml should be sufficient.)\n\nI hope that pg_upgrade will be of use again in the future, so even\nthough it can't work for 7.1, a scorched-earth policy is not the way\nto go...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 03:07:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_upgrade "
},
{
"msg_contents": "Whatever you guys decide is fine with me.\n\n> Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> > If it doesn't work, and will not be made to work, then let's remove it\n> > from the tree.\n> \n> I tend to agree with Peter's slightly less drastic proposal: remove it\n> from the installed fileset and disable its man page, without necessarily\n> 'cvs remove'ing all the source files. (I see we have already removed\n> all the other documentation references to it, so disconnecting the ref\n> page from reference.sgml should be sufficient.)\n> \n> I hope that pg_upgrade will be of use again in the future, so even\n> though it can't work for 7.1, a scorched-earth policy is not the way\n> to go...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Mar 2001 09:21:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_upgrade"
},
{
"msg_contents": "Tom Lane writes:\n\n> Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> > If it doesn't work, and will not be made to work, then let's remove it\n> > from the tree.\n>\n> I tend to agree with Peter's slightly less drastic proposal: remove it\n> >from the installed fileset and disable its man page, without necessarily\n> 'cvs remove'ing all the source files. (I see we have already removed\n> all the other documentation references to it, so disconnecting the ref\n> page from reference.sgml should be sufficient.)\n\nI'll do this then.\n\n>\n> I hope that pg_upgrade will be of use again in the future, so even\n> though it can't work for 7.1, a scorched-earth policy is not the way\n> to go...\n>\n> \t\t\tregards, tom lane\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 17:51:32 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Re: pg_upgrade "
}
] |
[
{
"msg_contents": "Is there a timeout setting I can use to abort transactions that aren't\ndeadlocked, but which have been blocked waiting for locks greater than some\namount of time? I didn't see anything in the docs on this and observed with\n2 instances of psql that a transaction waiting on a lock seems to wait\nforever.\n\nIf pgsql doesn't have such a setting, has there been any discussion about\nadding it?\n\nRegards,\nKevin Manley\n\n\n",
"msg_date": "Fri, 16 Mar 2001 17:10:14 -0800",
"msg_from": "\"Kevin T. Manley\" <kmanley@qwest.net>",
"msg_from_op": true,
"msg_subject": "transaction timeout"
}
] |
[
{
"msg_contents": "Hi friends!\n\nI'm working on a trigger and I need to put the result\nof a query into a variable.\n\nThat's very easy- apparently!\n\nThe query has an aggregate function like this:\n\nselect sum(field) into variable ...\n\nand I'm sure that field and variable are int4 type.\n\nSo, when I run this trigger there is a mistake:\n ''there is no operator '=$' for types 'int4' and 'int4'\n you will either have to retype this query using an\n explicit cast, or you will have to define the operator\n using CREATE OPERATOR''\n\nWhat does this mean? And\nhow can I assign the result of an aggregate function to a\nvariable?\n(My system is 6.5.3)\n\n\n\n",
"msg_date": "Sat, 17 Mar 2001 00:22:44 -0500",
"msg_from": "jreniz <jreniz@tutopia.com>",
"msg_from_op": true,
"msg_subject": "Trigger problem"
},
{
"msg_contents": "jreniz <jreniz@tutopia.com> writes:\n> So, when I run this trigger there is a mistake:\n> ''there is no operator '=$' for types 'int4' and 'int4'\n\n> (My system is 6.5.3)\n\nThis is an old bug. Update to 7.0.3.\n\nIt might work to add spaces around the '=' signs in your trigger\nfunction, but an update would be a good idea anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Mar 2001 15:37:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trigger problem "
}
] |
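The `'=$'` error in the thread above comes from how the query is tokenized: PL/pgSQL replaces variables with `$n` parameters, so `variable=$1` written without spaces can be scanned as the unknown operator `=$` followed by `1` instead of `=` followed by `$1`. A toy greedy lexer (plain Python, not PostgreSQL's actual scanner) illustrates the ambiguity:

```python
# Toy illustration of the '=$' bug: a greedy operator scanner swallows
# the '$' of a parameter when there is no space after '='.  This is a
# simplified sketch, not PostgreSQL 6.5's real lexer.
import re

def lex(sql):
    # Alternatives: whitespace, $n parameter, identifier, number, then
    # a greedy run of operator characters (longest match wins).
    token = re.compile(r"\s+|\$\d+|[A-Za-z_]\w*|\d+|[=<>~!@#%^&|`?$+*/-]+")
    return [t for t in token.findall(sql) if not t.isspace()]

print(lex("a = $1"))   # '=' and '$1' scan separately
print(lex("a=$1"))     # '=$' is scanned as one (nonexistent) operator
```

This is why adding spaces around `=`, as Tom suggests, can work around the bug on 6.5.3.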
[
{
"msg_contents": "I know that new pg_dump can dump out large objects. But what about\npg_dumpall? Do we have to dump out a whole database cluster by using\npg_dumpall then run pg_dump separately to dump large objects? That\nseems a pain...\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 17 Mar 2001 17:36:38 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "pg_dump"
},
{
"msg_contents": "At 17:36 17/03/01 +0900, Tatsuo Ishii wrote:\n>I know that new pg_dump can dump out large objects. But what about\n>pg_dumpall? Do we have to dump out a whole database cluster by using\n>pg_dumpall then run pg_dump separetly to dump large objects?\n\nThat won't even work, since pg_dump won't dump BLOBs without dumping all\nthe tables in the database.\n\n\n>That seems pain...\n\nIt is if you do not have individual database backup procedures; but\npg_dumpall uses the plain text dump format, which, without changes to\nlo_import, can not restore binary data. If lo_import could load UUENCODED\ndata from STDIN, then maybe we could get it to work. Alternatively, we may\nbe able to put an option on pg_dumpall that will dump to one long script\nfile with embedded TAR archives, but I have not really looked at the option.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 17 Mar 2001 22:58:34 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump"
}
] |
[
{
"msg_contents": "pg_restore crushes if dump data includes large objects...\n--\nTatsuo Ishii\n\n[t-ishii@srapc1474 7.1]$ createdb test\nCREATE DATABASE\n[t-ishii@srapc1474 7.1]$ psql -c \"select lo_import('/boot/vmlinuz')\" test\n lo_import \n-----------\n 20736\n(1 row)\n\n[t-ishii@srapc1474 7.1]$ pg_dump -F c -b test > test.db\n[t-ishii@srapc1474 7.1]$ createdb test2\nCREATE DATABASE\n[t-ishii@srapc1474 7.1]$ pg_restore -d test2 test.db\nSegmentation fault (core dumped)\n[t-ishii@srapc1474 7.1]$ gdb pg_restore core\nGNU gdb 5.0\nCopyright 2000 Free Software Foundation, Inc.\n[snip]\n#0 0x804abd4 in _enableTriggersIfNecessary (AH=0x8057d30, te=0x0, \n ropt=0x8057c90) at pg_backup_archiver.c:474\n474\t\tahprintf(AH, \"UPDATE pg_class SET reltriggers = \"\n(gdb) where\n#0 0x804abd4 in _enableTriggersIfNecessary (AH=0x8057d30, te=0x0, \n ropt=0x8057c90) at pg_backup_archiver.c:474\n#1 0x804a8c0 in RestoreArchive (AHX=0x8057d30, ropt=0x8057c90)\n at pg_backup_archiver.c:336\n#2 0x804a03e in main (argc=4, argv=0x7ffff864) at pg_restore.c:312\n#3 0x2ab9796b in __libc_start_main (main=0x8049a40 <main>, argc=4, \n argv=0x7ffff864, init=0x8049394 <_init>, fini=0x8052d2c <_fini>, \n rtld_fini=0x2aab5d00 <_dl_fini>, stack_end=0x7ffff85c)\n at ../sysdeps/generic/libc-start.c:92\n(gdb) \n",
"msg_date": "Sat, 17 Mar 2001 18:04:00 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "beta6 pg_restore core dumps"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> pg_restore crushes if dump data includes large objects...\n\nThis is probably the same problem that Martin Renters reported\nyesterday. I have a patch that seems to fix it on my machine,\nbut I haven't heard back from Martin whether it solves his case\ncompletely. In particular, he said something about memory leaks...\n\n\t\t\tregards, tom lane\n\n\n*** pg_backup_custom.c.orig\tFri Feb 9 17:32:26 2001\n--- pg_backup_custom.c\tFri Mar 16 17:24:59 2001\n***************\n*** 521,531 ****\n \t\tif (blkLen > (ctx->inSize - 1)) {\n \t\t\tfree(ctx->zlibIn);\n \t\t\tctx->zlibIn = NULL;\n! \t\t\tctx->zlibIn = (char*)malloc(blkLen);\n \t\t\tif (!ctx->zlibIn)\n \t\t\t\tdie_horribly(AH, \"%s: failed to allocate decompression buffer\\n\", progname);\n \n! \t\t\tctx->inSize = blkLen;\n \t\t\tin = ctx->zlibIn;\n \t\t}\n \n--- 521,531 ----\n \t\tif (blkLen > (ctx->inSize - 1)) {\n \t\t\tfree(ctx->zlibIn);\n \t\t\tctx->zlibIn = NULL;\n! \t\t\tctx->zlibIn = (char*)malloc(blkLen+1);\n \t\t\tif (!ctx->zlibIn)\n \t\t\t\tdie_horribly(AH, \"%s: failed to allocate decompression buffer\\n\", progname);\n \n! \t\t\tctx->inSize = blkLen+1;\n \t\t\tin = ctx->zlibIn;\n \t\t}\n \n",
"msg_date": "Sat, 17 Mar 2001 11:37:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "After looking more closely I see that pg_restore has two different\nbuffer overrun conditions in this one routine. Attached is take two\nof my patch.\n\nThis would be a lot simpler and cleaner if _PrintData() simply didn't\nappend a zero byte to the buffer contents. Philip, is it actually\nnecessary for it to do that?\n\n\t\t\tregards, tom lane\n\n\n*** pg_backup_custom.c.orig\tFri Feb 9 17:32:26 2001\n--- pg_backup_custom.c\tSat Mar 17 12:25:17 2001\n***************\n*** 150,156 ****\n if (ctx->zp == NULL)\n \tdie_horribly(AH, \"%s: unable to allocate zlib stream archive context\",progname);\n \n! ctx->zlibOut = (char*)malloc(zlibOutSize);\n ctx->zlibIn = (char*)malloc(zlibInSize);\n ctx->inSize = zlibInSize;\n ctx->filePos = 0;\n--- 150,163 ----\n if (ctx->zp == NULL)\n \tdie_horribly(AH, \"%s: unable to allocate zlib stream archive context\",progname);\n \n! \t/*\n! \t * zlibOutSize is the buffer size we tell zlib it can output to. We\n! \t * actually allocate one extra byte because some routines want to append\n! \t * a trailing zero byte to the zlib output. The input buffer is expansible\n! \t * and is always of size ctx->inSize; zlibInSize is just the initial\n! \t * default size for it.\n! \t */\n! ctx->zlibOut = (char*)malloc(zlibOutSize+1);\n ctx->zlibIn = (char*)malloc(zlibInSize);\n ctx->inSize = zlibInSize;\n ctx->filePos = 0;\n***************\n*** 518,531 ****\n \n blkLen = ReadInt(AH);\n while (blkLen != 0) {\n! \t\tif (blkLen > (ctx->inSize - 1)) {\n \t\t\tfree(ctx->zlibIn);\n \t\t\tctx->zlibIn = NULL;\n! \t\t\tctx->zlibIn = (char*)malloc(blkLen);\n \t\t\tif (!ctx->zlibIn)\n \t\t\t\tdie_horribly(AH, \"%s: failed to allocate decompression buffer\\n\", progname);\n \n! \t\t\tctx->inSize = blkLen;\n \t\t\tin = ctx->zlibIn;\n \t\t}\n \n--- 525,538 ----\n \n blkLen = ReadInt(AH);\n while (blkLen != 0) {\n! \t\tif (blkLen+1 > ctx->inSize) {\n \t\t\tfree(ctx->zlibIn);\n \t\t\tctx->zlibIn = NULL;\n! 
\t\t\tctx->zlibIn = (char*)malloc(blkLen+1);\n \t\t\tif (!ctx->zlibIn)\n \t\t\t\tdie_horribly(AH, \"%s: failed to allocate decompression buffer\\n\", progname);\n \n! \t\t\tctx->inSize = blkLen+1;\n \t\t\tin = ctx->zlibIn;\n \t\t}\n \n",
"msg_date": "Sat, 17 Mar 2001 12:31:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "Thanks, at least the problem I have reported seems gone after I\napplied your patch.\n--\nTatsuo Ishii\n\n> After looking more closely I see that pg_restore has two different\n> buffer overrun conditions in this one routine. Attached is take two\n> of my patch.\n> \n> This would be a lot simpler and cleaner if _PrintData() simply didn't\n> append a zero byte to the buffer contents. Philip, is it actually\n> necessary for it to do that?\n> \n> \t\t\tregards, tom lane\n> \n> \n> *** pg_backup_custom.c.orig\tFri Feb 9 17:32:26 2001\n> --- pg_backup_custom.c\tSat Mar 17 12:25:17 2001\n> ***************\n> *** 150,156 ****\n> if (ctx->zp == NULL)\n> \tdie_horribly(AH, \"%s: unable to allocate zlib stream archive context\",progname);\n> \n> ! ctx->zlibOut = (char*)malloc(zlibOutSize);\n> ctx->zlibIn = (char*)malloc(zlibInSize);\n> ctx->inSize = zlibInSize;\n> ctx->filePos = 0;\n> --- 150,163 ----\n> if (ctx->zp == NULL)\n> \tdie_horribly(AH, \"%s: unable to allocate zlib stream archive context\",progname);\n> \n> ! \t/*\n> ! \t * zlibOutSize is the buffer size we tell zlib it can output to. We\n> ! \t * actually allocate one extra byte because some routines want to append\n> ! \t * a trailing zero byte to the zlib output. The input buffer is expansible\n> ! \t * and is always of size ctx->inSize; zlibInSize is just the initial\n> ! \t * default size for it.\n> ! \t */\n> ! ctx->zlibOut = (char*)malloc(zlibOutSize+1);\n> ctx->zlibIn = (char*)malloc(zlibInSize);\n> ctx->inSize = zlibInSize;\n> ctx->filePos = 0;\n> ***************\n> *** 518,531 ****\n> \n> blkLen = ReadInt(AH);\n> while (blkLen != 0) {\n> ! \t\tif (blkLen > (ctx->inSize - 1)) {\n> \t\t\tfree(ctx->zlibIn);\n> \t\t\tctx->zlibIn = NULL;\n> ! \t\t\tctx->zlibIn = (char*)malloc(blkLen);\n> \t\t\tif (!ctx->zlibIn)\n> \t\t\t\tdie_horribly(AH, \"%s: failed to allocate decompression buffer\\n\", progname);\n> \n> ! 
\t\t\tctx->inSize = blkLen;\n> \t\t\tin = ctx->zlibIn;\n> \t\t}\n> \n> --- 525,538 ----\n> \n> blkLen = ReadInt(AH);\n> while (blkLen != 0) {\n> ! \t\tif (blkLen+1 > ctx->inSize) {\n> \t\t\tfree(ctx->zlibIn);\n> \t\t\tctx->zlibIn = NULL;\n> ! \t\t\tctx->zlibIn = (char*)malloc(blkLen+1);\n> \t\t\tif (!ctx->zlibIn)\n> \t\t\t\tdie_horribly(AH, \"%s: failed to allocate decompression buffer\\n\", progname);\n> \n> ! \t\t\tctx->inSize = blkLen+1;\n> \t\t\tin = ctx->zlibIn;\n> \t\t}\n> \n",
"msg_date": "Sun, 18 Mar 2001 10:13:59 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "At 12:31 17/03/01 -0500, Tom Lane wrote:\n>\n>This would be a lot simpler and cleaner if _PrintData() simply didn't\n>append a zero byte to the buffer contents. Philip, is it actually\n>necessary for it to do that?\n>\n\nStrictly, I think the answer is that it is not necessary. The output of the\nuncompress may be a string, which could be passed to one of the str*\nfunctions by a downstream call. AFAICT, this is not the case, and the code\nshould work without it, but it's probably safer in the long run to leave it\nthere. If you have strong feelings about removing it, I'll have a closer\nlook at the code, but my guess is that it was just me being paranoid (and\nstuffing up).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 18 Mar 2001 12:46:36 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 12:31 17/03/01 -0500, Tom Lane wrote:\n>> This would be a lot simpler and cleaner if _PrintData() simply didn't\n>> append a zero byte to the buffer contents. Philip, is it actually\n>> necessary for it to do that?\n\n> Strictly, I think the answer is that it is not necessary. The output of the\n> uncompress may be a string, which could be passed to one of the str*\n> functions by a downstream call. AFAICT, this is not the case, and the code\n> should work without it, but it's probably safer in the long run to leave it\n> there.\n\nConsidering that the data we are working with is binary, and may contain\nnulls, any code that insisted on null-termination would probably be ipso\nfacto broken.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Mar 2001 20:57:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "At 20:57 17/03/01 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> At 12:31 17/03/01 -0500, Tom Lane wrote:\n>>> This would be a lot simpler and cleaner if _PrintData() simply didn't\n>>> append a zero byte to the buffer contents. Philip, is it actually\n>>> necessary for it to do that?\n>\n>> Strictly, I think the answer is that it is not necessary. The output of the\n>> uncompress may be a string, which could be passed to one of the str*\n>> functions by a downstream call. AFAICT, this is not the case, and the code\n>> should work without it, but it's probably safer in the long run to leave it\n>> there.\n>\n>Considering that the data we are working with is binary, and may contain\n>nulls, any code that insisted on null-termination would probably be ipso\n>facto broken.\n\nBut we're not; this is the same code that sends the COPY output back to PG.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 18 Mar 2001 13:04:18 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n>> Considering that the data we are working with is binary, and may contain\n>> nulls, any code that insisted on null-termination would probably be ipso\n>> facto broken.\n\n> But we're not; this is the same code that sends the COPY output back to PG.\n\nOh, isn't this the code that pushes large-object bodies around? I\nshould think the problem would've been noticed much sooner if not...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Mar 2001 21:08:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "At 21:08 17/03/01 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>>> Considering that the data we are working with is binary, and may contain\n>>> nulls, any code that insisted on null-termination would probably be ipso\n>>> facto broken.\n>\n>> But we're not; this is the same code that sends the COPY output back to PG.\n>\n>Oh, isn't this the code that pushes large-object bodies around? I\n>should think the problem would've been noticed much sooner if not...\n\nIt does both, which is why I was also surprised.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 18 Mar 2001 13:11:45 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n>> Oh, isn't this the code that pushes large-object bodies around? I\n>> should think the problem would've been noticed much sooner if not...\n\n> It does both, which is why I was also surprised.\n\nHmm ... digging through the code, it does look like one of the possible\ndestinations is ExecuteSqlCommandBuf, which is a bit schizo about\nwhether it's dealing with a null-terminated string or not, but is likely\nto get ill if handed one that isn't.\n\nOkay, I'll commit what I have then.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Mar 2001 21:18:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "Looking at Tatsuos original message, it looks like the lowest level call was:\n\n#0 0x804abd4 in _enableTriggersIfNecessary (AH=0x8057d30, te=0x0, \n ropt=0x8057c90) at pg_backup_archiver.c:474\n\nwhich probably has nothing to do with BLOBs. I think it's a different\nproblem entirely, caused by a mistake in my recent trigger enable/disable\ncode that only become apparent if BLOBs are being restored. I'll fix it\nsoon...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 18 Mar 2001 13:19:25 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Looking at Tatsuos original message, it looks like the lowest level call was:\n> #0 0x804abd4 in _enableTriggersIfNecessary (AH=0x8057d30, te=0x0, \n> ropt=0x8057c90) at pg_backup_archiver.c:474\n\n> which probably has nothing to do with BLOBs.\n\nOh ... I had assumed it was just dying there because of collateral\ndamage from the buffer overrun stomp, but if you see an actual bug there\nthen by all means fix it ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Mar 2001 21:24:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "At 21:24 17/03/01 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> Looking at Tatsuos original message, it looks like the lowest level call\nwas:\n>> #0 0x804abd4 in _enableTriggersIfNecessary (AH=0x8057d30, te=0x0, \n>> ropt=0x8057c90) at pg_backup_archiver.c:474\n>\n>> which probably has nothing to do with BLOBs.\n>\n>Oh ... I had assumed it was just dying there because of collateral\n>damage from the buffer overrun stomp, but if you see an actual bug there\n>then by all means fix it ;-)\n\nFixed. It happened for Tatsuo because of the test case he used. Any real,\nfull, database dump would have worked. It's just data-only ones that\nfailed, and the test case he cited was an implied data-only restore (there\nwere no tables or other metadata).\n\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 18 Mar 2001 14:47:41 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps "
},
{
"msg_contents": "On Sat, Mar 17, 2001 at 12:31:20PM -0500, Tom Lane wrote:\n> After looking more closely I see that pg_restore has two different\n> buffer overrun conditions in this one routine. Attached is take two\n> of my patch.\n> \n> This would be a lot simpler and cleaner if _PrintData() simply didn't\n> append a zero byte to the buffer contents. Philip, is it actually\n> necessary for it to do that?\n\nThis patch seems to fix the problem I was seeing.\n\nMartin\n",
"msg_date": "Mon, 19 Mar 2001 10:40:56 -0500",
"msg_from": "Martin Renters <martin@datafax.com>",
"msg_from_op": false,
"msg_subject": "Re: beta6 pg_restore core dumps"
}
] |
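The core of Tom Lane's patch in the thread above is a one-byte sizing invariant: the reader appends a terminating zero byte after each data block, so the decompression buffer must always hold `blkLen + 1` bytes, while the buggy code grew it to exactly `blkLen`. A minimal Python sketch of the corrected sizing rule (hypothetical function name):

```python
# Sketch of the buffer-sizing invariant from the pg_restore fix: after
# reading a block of blk_len bytes, one extra byte is appended as a
# terminator, so the buffer must be at least blk_len + 1 bytes.
def ensure_capacity(buf_size, blk_len):
    """Return the buffer size needed to read a block of blk_len bytes."""
    if blk_len + 1 > buf_size:      # patched test: count the terminator
        buf_size = blk_len + 1      # +1 byte of room for the trailing zero
    return buf_size
```

With the original `malloc(blkLen)` the terminator write landed one byte past the allocation, which is exactly the overrun the patch closes.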
[
{
"msg_contents": "From: Bryan Wu <lfwu@yahoo.com>\nSubject: Urgent Question on Postgresql\nDate: Sun, 18 Mar 2001 04:39:36 -0800 (PST)\nMessage-ID: <20010318123936.79559.qmail@web10003.mail.yahoo.com>\n\n> Hi tatsuo Ishii,\n> \n> I learn from postgresql mailing list that you are\n> concerning the problem the UTF8 support in Postgresql.\n> Currently I want to choose a database to store chinese\n> (BIG5 and GB) information. Could you tell me if the\n> Postgresql can store information in UTF-8 format. I\n> think it is better for me to use UTF-8 since I need to\n> handle the Big5 and GB at the same time.\n\nYes.\n\n> I find very little information on how to configure the\n> postgresql to use default encoding UTF-8 when storing\n> data. Do you have any idea?\n\nEnable the multibyte capability(configure --enable-multibyte) and \ndo createdb -E UNICODE.\n\n> And do you know if postgresql has any import tools so\n> I can import some chinese information directly to the\n> tables?\n\nPostgreSQL 7.1 will be able to do an automatic conversion between\nUTF-8 and Big5 or EUC-CN(GB). Here is a sample:\n\ncreatedb -E UNICODE unicode\npsql unicode\n\\encoding BIG5\ninsert into big_table values('some big5 data');\n\\encoding EUC_CN\ninsert into gb_table values('some EUC_CN data');\n:\n:\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 18 Mar 2001 23:10:27 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Urgent Question on Postgresql"
}
] |
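The Big5/GB-to-Unicode conversion Tatsuo describes above (a `UNICODE` database with `\encoding BIG5`) is an ordinary round trip between encodings. As a sanity check of the idea, the same round trip can be done with Python's built-in `big5` codec (this only demonstrates the conversion, not PostgreSQL's implementation):

```python
# Round-trip Big5-encoded Chinese text through UTF-8, mirroring what a
# UNICODE database does for a client whose encoding is set to BIG5.
big5_bytes = "\u4e2d\u6587".encode("big5")            # "Chinese" in Big5
as_utf8 = big5_bytes.decode("big5").encode("utf-8")   # server-side storage
round_trip = as_utf8.decode("utf-8").encode("big5")   # back to the client
print(round_trip == big5_bytes)
```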
[
{
"msg_contents": "Cyril VELTER <cyril.velter@libertysurf.fr> writes:\n> pg_ctl output when no shm segments left\n\n> pg_ctl: It seems another postmaster is running. Trying to start postmaster \n> anyway.\n> pg_ctl: cannot start postmaster <-------- not true !!!\n> Examine the log output.\n> DEBUG: database system was interrupted at 2001-03-18 12:01:57 CET\n> DEBUG: CheckPoint record at (0, 20204684)\n> DEBUG: Redo record at (0, 20204684); Undo record at (0, 0); Shutdown TRUE\n> DEBUG: NextTransactionId: 5384; NextOid: 153313\n> DEBUG: database system was not properly shut down; automatic recovery in \n> progress...\n> DEBUG: ReadRecord: record with zero len at (0, 20204748)\n> DEBUG: redo is not required\n> DEBUG: database system is in production state \n\nLooking at the pg_ctl script, it seems this must be coming from\n\n eval '$po_path' '$POSTOPTS' $logopt '&'\n\n if [ -f $PIDFILE ];then\n\tif [ \"`sed -n 1p $PIDFILE`\" = \"$pid\" ];then\n\t echo \"$CMDNAME: cannot start postmaster\" 1>&2\n\t echo \"Examine the log output.\" 1>&2\n\t exit 1\n fi\n fi\n\nwhich is clearly not giving the postmaster enough time to remove or\nrewrite the pidfile. Shouldn't we put a \"sleep 1\" in there before\nthe \"if\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Mar 2001 12:11:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pg_ctl problem (was Re: BeOS Patch)"
},
{
"msg_contents": "At a minimum, you should do a test, and if it does not yet exist, do a\nsleep, then the test again.\n\n> Cyril VELTER <cyril.velter@libertysurf.fr> writes:\n> > pg_ctl output when no shm segments left\n> \n> > pg_ctl: It seems another postmaster is running. Trying to start postmaster \n> > anyway.\n> > pg_ctl: cannot start postmaster <-------- not true !!!\n> > Examine the log output.\n> > DEBUG: database system was interrupted at 2001-03-18 12:01:57 CET\n> > DEBUG: CheckPoint record at (0, 20204684)\n> > DEBUG: Redo record at (0, 20204684); Undo record at (0, 0); Shutdown TRUE\n> > DEBUG: NextTransactionId: 5384; NextOid: 153313\n> > DEBUG: database system was not properly shut down; automatic recovery in \n> > progress...\n> > DEBUG: ReadRecord: record with zero len at (0, 20204748)\n> > DEBUG: redo is not required\n> > DEBUG: database system is in production state \n> \n> Looking at the pg_ctl script, it seems this must be coming from\n> \n> eval '$po_path' '$POSTOPTS' $logopt '&'\n> \n> if [ -f $PIDFILE ];then\n> \tif [ \"`sed -n 1p $PIDFILE`\" = \"$pid\" ];then\n> \t echo \"$CMDNAME: cannot start postmaster\" 1>&2\n> \t echo \"Examine the log output.\" 1>&2\n> \t exit 1\n> fi\n> fi\n> \n> which is clearly not giving the postmaster enough time to remove or\n> rewrite the pidfile. Shouldn't we put a \"sleep 1\" in there before\n> the \"if\"?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 18 Mar 2001 12:29:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl problem (was Re: BeOS Patch)"
},
{
"msg_contents": "Tom Lane writes:\n\n> eval '$po_path' '$POSTOPTS' $logopt '&'\n>\n> if [ -f $PIDFILE ];then\n> \tif [ \"`sed -n 1p $PIDFILE`\" = \"$pid\" ];then\n> \t echo \"$CMDNAME: cannot start postmaster\" 1>&2\n> \t echo \"Examine the log output.\" 1>&2\n> \t exit 1\n> fi\n> fi\n>\n> which is clearly not giving the postmaster enough time to remove or\n> rewrite the pidfile. Shouldn't we put a \"sleep 1\" in there before\n> the \"if\"?\n\nThis is probably the best we can do.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 18 Mar 2001 20:22:31 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl problem (was Re: BeOS Patch)"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> which is clearly not giving the postmaster enough time to remove or\n>> rewrite the pidfile. Shouldn't we put a \"sleep 1\" in there before\n>> the \"if\"?\n\n> This is probably the best we can do.\n\nActually, the whole thing should only happen if we found a pre-existing\nPIDFILE anyway. Will fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Mar 2001 14:39:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_ctl problem (was Re: BeOS Patch) "
}
] |
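The race discussed above is that pg_ctl reads the PID file immediately after launching the postmaster, before the new process has had time to remove or rewrite it, and wrongly reports "cannot start postmaster". The agreed fix (sleep, then re-test) can be sketched as a retry loop; the function name, retry count, and delay below are illustrative, not pg_ctl's actual values:

```python
# Hedged sketch of the pg_ctl fix: don't conclude failure from a stale
# PID file on the first look; wait briefly and re-check, because the
# new postmaster may not have rewritten the file yet.
import time

def startup_succeeded(read_pidfile, old_pid, retries=3, delay=0.01):
    for _ in range(retries):
        pid = read_pidfile()
        if pid is None or pid != old_pid:   # file gone or rewritten: OK
            return True
        time.sleep(delay)                   # give the postmaster time
    return False                            # still the stale PID: failed
```

Tom's later refinement applies: the re-check is only needed at all when a pre-existing PID file was found before launch.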
[
{
"msg_contents": "In 7.0.3, is it safe to drop a check constraint by simply deleting it from\nthe pg_relcheck table?\n\nChris\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n",
"msg_date": "Mon, 19 Mar 2001 10:53:18 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Dropping CHECK constraints"
},
{
"msg_contents": "OK, I notice I have to decrement the reltriggers field in the pg_class\ndirectory as well, but other than that is there any problem?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Monday, March 19, 2001 10:53 AM\n> To: Hackers\n> Subject: [HACKERS] Dropping CHECK constraints\n>\n>\n> In 7.0.3, is it safe to drop a check constraint by simply deleting it from\n> the pg_relcheck table?\n>\n> Chris\n>\n> --\n> Christopher Kings-Lynne\n> Family Health Network (ACN 089 639 243)\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 19 Mar 2001 11:02:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "RE: Dropping CHECK constraints"
},
{
"msg_contents": "Doh! Not reltriggers - I meant relchecks...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Monday, March 19, 2001 10:53 AM\n> To: Hackers\n> Subject: [HACKERS] Dropping CHECK constraints\n>\n>\n> In 7.0.3, is it safe to drop a check constraint by simply deleting it from\n> the pg_relcheck table?\n>\n> Chris\n>\n> --\n> Christopher Kings-Lynne\n> Family Health Network (ACN 089 639 243)\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 19 Mar 2001 11:02:56 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "RE: Dropping CHECK constraints"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> In 7.0.3, is it safe to drop a check constraint by simply deleting it from\n> the pg_relcheck table?\n\nYou'll need to adjust the relchecks count in the table's pg_class entry\nas well.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Mar 2001 22:12:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dropping CHECK constraints "
}
] |
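The answer in the thread above encodes a catalog invariant: `pg_class.relchecks` must equal the number of `pg_relcheck` rows for the table, so deleting a constraint row by hand must also decrement the counter. A toy in-memory model (plain Python dictionaries, hypothetical constraint names) makes the paired update explicit:

```python
# Toy model of the 7.0.3 catalog invariant: dropping a CHECK constraint
# means deleting its pg_relcheck row AND decrementing relchecks in the
# table's pg_class entry, or the count goes stale.
def drop_check(pg_class, pg_relcheck, table, name):
    pg_relcheck[table].remove(name)
    pg_class[table]["relchecks"] -= 1     # keep the counter in sync

pg_class = {"t1": {"relchecks": 2}}
pg_relcheck = {"t1": ["positive_qty", "nonempty_name"]}
drop_check(pg_class, pg_relcheck, "t1", "positive_qty")
```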
[
{
"msg_contents": "Hello All!\nI have found the PostgreSQL - JDBC driver from the site\nhttp://www.retep.org.uk/postgres/.\nBut I am not finding any tutorial for it.\nCan anybody tell me the name of a site where I can find both the\nPostgreSQL driver and a tutorial containing examples.\nWith regards,\nSourabh Dixit\n",
"msg_date": "Mon, 19 Mar 2001 11:09:19 +0530",
"msg_from": "\"sourabh dixit\" <sourabh.dixit@wipro.com>",
"msg_from_op": true,
"msg_subject": "query on PostgreSQL-JDBC driver"
}
] |
[
{
"msg_contents": "Hello all,\n\nJust to ask you if someone is planning to release beta 6 RPMs.\nI am running Redhat 7.0 test servers and the compiler is broken.\n\nRegards from Jean-Michel POURE, Paris\n",
"msg_date": "Mon, 19 Mar 2001 09:19:49 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Do you plan an RPM release of beta 6"
},
{
"msg_contents": "Jean-Michel POURE writes:\n\n> Just to ask you if someone is planning to release beta 6 RPMs.\n> I am running Redhat 7.0 test servers and the compiler is broken.\n\nThe \"gcc 2.96\" compiler on Red Hat 7.0 works for PostgreSQL. (And surely\nan RPM would have to be compiled by the same compiler.) Later versions of\nGCC (2.97, 3.0_branch, 3.1/HEAD) have all failed the regression tests for\nme in the past unless all optimization is turned off. But those aren't\nreleased yet anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 19 Mar 2001 19:40:47 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Do you plan an RPM release of beta 6"
},
{
"msg_contents": "Jean-Michel POURE <jm.poure@freesurf.fr> writes:\n\n> Just to ask you if someone is planning to release beta 6 RPMs.\n> I am running Redhat 7.0 test servers and the compiler is broken.\n\nThe compiler is not broken. If you find some bugs, please submit them\nand we'll fix them.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "19 Mar 2001 14:52:27 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Do you plan an RPM release of beta 6"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> Just to ask you if someone is planning to release beta 6 RPMs.\n> I am running Redhat 7.0 test servers and the compiler is broken.\n\nYes. Announcement soon -- hopefully, it will snow knee-deep here\ntonight, giving me the day off tomorrow to build a new set at home. \nBeen far too busy for my own good here at work in the last three weeks\nto touch RPM stuff.\n\nThis set will be built on both RH 7 and RH 6.2, if I can swing it. More\nto follow. Pray for snow in Western North Carolina :-).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 19 Mar 2001 14:53:48 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Do you plan an RPM release of beta 6"
}
] |
[
{
"msg_contents": "\n Hi,\n \n after long time I see the /contrib tree and I have I small notes.\n\n - (IMHO) is good evidently that all executable programs stored here use \n prefix 'pg_', but with the exception:\n\n vacuumlo\n pgbench\t(instead pg_bench)\n oid2name\n ipc_check\n fti[.pl]\n findoidjoins \n\n What rename it? After 7.1 output it will late, bacause users integrate\n these stuffs to their programs/scripts. \n \n - everything in the contrib tree has 'uninstall', with the \n exception contrib/rserv\n\n - every \"WANTED_DIRS\" has Makefile and is possible install it, \n What contrib/start-scripts, contrib/tools and contrib/retep are?\n\n I mean user after 'make install' look at $(libdir)/contrib and not\n walk in sources and search what is/isn't installed...\n\n In future ... please ignore patches those ignore the /contrib's practice \n-- the trouble is overhaul the contrib tree during each version.\n\n\n Thanks\n\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 19 Mar 2001 11:38:38 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "/contrib 'cosmetic'"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> - (IMHO) is good evidently that all executable programs stored here use \n> prefix 'pg_', but with the exception:\n\n> vacuumlo\n> pgbench\t(instead pg_bench)\n> oid2name\n> ipc_check\n> fti[.pl]\n> findoidjoins \n\n> What rename it? After 7.1 output it will late, bacause users integrate\n> these stuffs to their programs/scripts. \n\nMost of those were around in 7.0 or before, so it's already too late.\nI'd agree with renaming ipc_check (that's just asking for name\nconflicts). oid2name isn't very likely to hit a name conflict, but\nmaybe we should change it too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 11:33:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: /contrib 'cosmetic' "
},
{
"msg_contents": "Karel Zak writes:\n\n> - everything in the contrib tree has 'uninstall', with the\n> exception contrib/rserv\n\nFeel free to implement it. ;-)\n\n> - every \"WANTED_DIRS\" has Makefile and is possible install it,\n> What contrib/start-scripts, contrib/tools and contrib/retep are?\n\nretep is installed via build.xml when you install the jdbc driver.\nstart-scripts needs to be installed manually anway. 'tools' doesn't need\nto be installed because they are source tools.\n\n> I mean user after 'make install' look at $(libdir)/contrib and not\n> walk in sources and search what is/isn't installed...\n\nI don't think the contrib/Makefile is to be relied on except for cleaning.\n\n> In future ... please ignore patches those ignore the /contrib's practice\n> -- the trouble is overhaul the contrib tree during each version.\n\nThe reason it's in contrib is that it's a bit less than perfect. If we\nwere to prioritize on maintaining contrib, then we might as well fold it\ninto the core (which we ought to consider for some items). IMHO.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 19 Mar 2001 17:50:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: /contrib 'cosmetic'"
},
{
"msg_contents": "On Mon, Mar 19, 2001 at 05:50:01PM +0100, Peter Eisentraut wrote:\n\n> \n> > In future ... please ignore patches those ignore the /contrib's practice\n> > -- the trouble is overhaul the contrib tree during each version.\n> \n> The reason it's in contrib is that it's a bit less than perfect. If we\n> were to prioritize on maintaining contrib, then we might as well fold it\n> into the core (which we ought to consider for some items). IMHO.\n\nAgree. You good remember previous state of the contrib -- something \nwasn't compile-able, something was total dead ..etc. I want see nice code \nin contrib and not say \"the contrib is our trash and everybody can postpone\nsomething here\" (it not means currect state is bad. It's better than \nbefore 7.1, it's care about future :-). \n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 19 Mar 2001 18:22:04 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": true,
"msg_subject": "Re: /contrib 'cosmetic'"
},
{
"msg_contents": "Added to TODO:\n\n\t* Rename some /contrib modules from pg* to pg_*\n\n> On Mon, Mar 19, 2001 at 05:50:01PM +0100, Peter Eisentraut wrote:\n> \n> > \n> > > In future ... please ignore patches those ignore the /contrib's practice\n> > > -- the trouble is overhaul the contrib tree during each version.\n> > \n> > The reason it's in contrib is that it's a bit less than perfect. If we\n> > were to prioritize on maintaining contrib, then we might as well fold it\n> > into the core (which we ought to consider for some items). IMHO.\n> \n> Agree. You good remember previous state of the contrib -- something \n> wasn't compile-able, something was total dead ..etc. I want see nice code \n> in contrib and not say \"the contrib is our trash and everybody can postpone\n> something here\" (it not means currect state is bad. It's better than \n> before 7.1, it's care about future :-). \n> \n> \t\tKarel\n> \n> -- \n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n> \n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 17:04:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: /contrib 'cosmetic'"
}
] |
[
{
"msg_contents": "Been reading\nhttp://www.postgresql.org/docs/pgsql/doc/TODO.detail/replication with\ninterest as we are now approaching a real requirement for it on a project we\nhave finally resurrected for a bit of a dormant state.\n\nWhat is the current state-of-the-art WRT replication of any sort ? If anyone\nhas homebrew solutions that they can share, we would welcome tyring too.\n\nOur requirements, which seem sort of reasonable, are:\n\n1. One \"writer\", many \"reader\" PostgreSQL servers. We will want to write\nprovisioning / configuration information centrally and can tolerate a\n\"writer\" failuer for a time.\n2. Consitency at the transaction level. All changes to the \"writer\" server\nwill be wrapped in transactions, and there will be foreign key consistency\nchecking in many tables.\n3. Delays from \"writer\" through to consistent state on \"readers\" can be\ntolerated to within a few minutes or even more. All read-servers must be in\nthe same state when answering requests.\n\nOur objective is to acheive performance and some fault tolerance as the data\nis going to be used for near-real time configuration of various other\nbackend systems in an almost traditional 'net environment.\n\nAs we are coding various other stuff for this project over the next few\nmonths, any help we can be in developing for this part of PostgreSQL, just\nlet me know. While knowing very little about PostgreSQL internals, we learn\nquick.\n\nrgds,\n--\nPeter Galbavy\nKnowledge Matters Ltd.\nhttp://www.knowledge.com/\n\n",
"msg_date": "Mon, 19 Mar 2001 11:00:20 -0000",
"msg_from": "\"Peter Galbavy\" <peter.galbavy@knowledge.com>",
"msg_from_op": true,
"msg_subject": "FAQ: Current state of replication ?"
},
{
"msg_contents": "> What is the current state-of-the-art WRT replication of any sort ? If anyone\n> has homebrew solutions that they can share, we would welcome tyring too.\n\nThere is some code in contrib/rserv for 7.1 which does table\nreplication. It has some restrictions, but does implement the basic\nconcept. I think a tarball to do the same for 7.0 and earlier is\navailable at www.pgsql.com (just Makefile differences).\n\nWe are currently working through the issues involved with multi-slave\nreplication and the ramifications for failover to (one of) the slaves.\nIt looks like the rserv code may assume too much independence between\nslaves and replication sync information, and failover may be\nnot-quite-right in those cases.\n\nWill be posting to the list when we know the answer (though\ncontributions and inputs are of course always welcome!). afaict changes\nin rserv schema, if necessary, will not be available for 7.1, but we'll\nbe posting patches and updating the CVS tree.\n\nbtw, it looks like TODO.detail/replication predates the replication\nimplementation, and has no real relationship with the implementation.\nThere is some thought that WAL/BAR features can help support replication\nat a different level than is done now, but that is work for the future\nafaik.\n\n - Thomas\n",
"msg_date": "Tue, 20 Mar 2001 05:52:56 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: FAQ: Current state of replication ?"
},
{
"msg_contents": "I found interesting paper http://citeseer.nj.nec.com/330257.html\n\"Don't be lazy, be consistent: Postgres-R, A new way to implement Database Replication\"\n\nAbstract:\nDatabase designers often point out that eager, update everywhere replication suffers from\nhigh deadlock rates, message overhead and poor response times. In this paper, we show that these\nlimitations can be circumvented by using a combination of known and novel techniques. Moreover, we\nshow how the proposed solution can be incorporated into a real database system. The paper discusses\nthe new protocols and their implementation in PostgreSQL. It also provides experimental results proving that many of the dangers and limitations of\nreplication can be avoided by using the appropriate techniques. 1 Introduction Existing replication protocols can be divided into eager and lazy\n\n\n\tRegards,\n\n\t\tOleg\nOn Tue, 20 Mar 2001, Thomas Lockhart wrote:\n\n> > What is the current state-of-the-art WRT replication of any sort ? If anyone\n> > has homebrew solutions that they can share, we would welcome tyring too.\n>\n> There is some code in contrib/rserv for 7.1 which does table\n> replication. It has some restrictions, but does implement the basic\n> concept. I think a tarball to do the same for 7.0 and earlier is\n> available at www.pgsql.com (just Makefile differences).\n>\n> We are currently working through the issues involved with multi-slave\n> replication and the ramifications for failover to (one of) the slaves.\n> It looks like the rserv code may assume too much independence between\n> slaves and replication sync information, and failover may be\n> not-quite-right in those cases.\n>\n> Will be posting to the list when we know the answer (though\n> contributions and inputs are of course always welcome!). afaict changes\n> in rserv schema, if necessary, will not be available for 7.1, but we'll\n> be posting patches and updating the CVS tree.\n>\n> btw, it looks like TODO.detail/replication predates the replication\n> implementation, and has no real relationship with the implementation.\n> There is some thought that WAL/BAR features can help support replication\n> at a different level than is done now, but that is work for the future\n> afaik.\n>\n> - Thomas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 20 Mar 2001 14:06:45 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Re: FAQ: Current state of replication ?"
},
{
"msg_contents": "> 1. One \"writer\", many \"reader\" PostgreSQL servers. We will want to write\n> provisioning / configuration information centrally and can tolerate a\n> \"writer\" failuer for a time.\n> 2. Consitency at the transaction level. All changes to the \"writer\" server\n> will be wrapped in transactions, and there will be foreign key consistency\n> checking in many tables.\n> 3. Delays from \"writer\" through to consistent state on \"readers\" can be\n> tolerated to within a few minutes or even more. All read-servers must be\nin\n> the same state when answering requests.\n>\n> Our objective is to acheive performance and some fault tolerance as the\ndata\n> is going to be used for near-real time configuration of various other\n> backend systems in an almost traditional 'net environment.\n>\n> As we are coding various other stuff for this project over the next few\n> months, any help we can be in developing for this part of PostgreSQL, just\n> let me know. While knowing very little about PostgreSQL internals, we\nlearn\n> quick.\n\nPeter,\n\nI've been mostly a lurker here (at least on the hackers list) for a couple\nof years, but I thought I would \"de-lurk\" for long enough to reply to your\nquestion ;)\n\nAttached is the source for a replication solution I recently wrote for a\nproject I'm working on. I think it meets your criteria. I was considering\nsending it to the list as a possible contrib after 7.1 was released (if\nanyone is interested, and the code is worthy), but since you asked, here it\nis. A few disclaimers are in order. First, I am *not* an experienced C\nprogrammer. The code works in the limited testing I've done so far but needs\nto be reviewed and scrubbed by someone with more experience. Second, I have\nnot yet used this in production, so use at your own risk. Third, I have only\ntested it under Red Hat 6.2 and 7.0. Finally, it will only work with >=\nPostgreSQL 7.1 beta3.\n\nBasic installation instructions:\n copy pg_lnk.tgz to contrib under the PostgreSQL source tree\n tar -xzvf pg_lnk.tgz\n cd pg_lnk\n ./install.sh\n\nI'll be happy to answer any questions to help you get it installed and\nworking. I would appreciate any feedback, improvements, general guidance if\nyou decide to use it.\n\nThanks,\n\nJoe\n\n<lurking once again . . .>",
"msg_date": "Tue, 20 Mar 2001 11:30:02 -0800",
"msg_from": "\"Joe Conway\" <joe@conway-family.com>",
"msg_from_op": false,
"msg_subject": "Re: FAQ: Current state of replication ?"
},
{
"msg_contents": "On Mon, Mar 19, 2001 at 11:00:20AM -0000, Peter Galbavy wrote:\n> 1. One \"writer\", many \"reader\" PostgreSQL servers. We will want to write\n> provisioning / configuration information centrally and can tolerate a\n> \"writer\" failuer for a time.\n> 2. Consitency at the transaction level. All changes to the \"writer\" server\n> will be wrapped in transactions, and there will be foreign key consistency\n> checking in many tables.\n> 3. Delays from \"writer\" through to consistent state on \"readers\" can be\n> tolerated to within a few minutes or even more. All read-servers must be in\n> the same state when answering requests.\n> \n> Our objective is to acheive performance and some fault tolerance as the data\n> is going to be used for near-real time configuration of various other\n> backend systems in an almost traditional 'net environment.\n\nYour application sounds like a perfect fit for LDAP.\n\nIn other words, keep your database in Postgres, but export views of it\nthrough for clients to query through LDAP. Rely on LDAP replication,\nsince it has the model you need and works today.\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\nchris@netmonger.net info@netmonger.net http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n",
"msg_date": "Tue, 20 Mar 2001 14:56:00 -0500",
"msg_from": "Christopher Masto <chris@netmonger.net>",
"msg_from_op": false,
"msg_subject": "Re: FAQ: Current state of replication ?"
},
{
"msg_contents": "Hello,\n\n During repopulation of the database (using the results of the pg_dump\nprogram), I spot two strange things:\n\n- fields defined as TIMESTAMP DEFAULT CURRENT_TIMESTAMP sometimes generate\n invalid format of the date, for instance:\n\n 2001-02-10 13:11:60.00+01 - which follows the records\n 2001-02-10 13:10:59.00+01\n\n Which means, that the proper timestamp should look like:\n 2001-02-10 13:11:00.00+01\n\n- I have a float4 field, which contains the value 3e-40 (approximately).\n I know it's there - the queries return it without any problem. Problem\n occurs again when I try to repopulate the table. Having such a value\n in a line generated by pg_dump (in form of COPY from stdin) I get\n the error:\n\n Bad float4 input format -- underflow.\n\n When I redefine the field as a float8 everything works fine. But why\n does it occur during repopulation - when in fact such a value did exist\n in the table before the table was drop.\n\nI'am running Postgresql 7.0.2\n\n\t\t\t\tthanks for help\n\n\t\t\t\t\t\tMark\n\n\n\n\n",
"msg_date": "Wed, 21 Mar 2001 18:35:03 +0100 (MET)",
"msg_from": "Marek PUBLICEWICZ <M.Publicewicz@elka.pw.edu.pl>",
"msg_from_op": false,
"msg_subject": "Strange results of CURRENT_TIMESTAMP"
},
{
"msg_contents": "> - fields defined as TIMESTAMP DEFAULT CURRENT_TIMESTAMP sometimes generate\n> invalid format of the date, for instance:\n> 2001-02-10 13:11:60.00+01\n\nYou are running the Mandrake RPMs? Or have otherwise compiled using the\n-ffast-math compiler flag?\n\n - Thomas\n",
"msg_date": "Thu, 22 Mar 2001 06:44:16 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Strange results of CURRENT_TIMESTAMP"
},
{
"msg_contents": "Hi,\n\nOn Thu, 22 Mar 2001, Thomas Lockhart wrote:\n\n> > - fields defined as TIMESTAMP DEFAULT CURRENT_TIMESTAMP sometimes generate\n> > invalid format of the date, for instance:\n> > 2001-02-10 13:11:60.00+01\n>\n> You are running the Mandrake RPMs? Or have otherwise compiled using the\n> -ffast-math compiler flag?\n>\n\nI'm on the Mandrake but I'm using the compiled (from the sources) version\nof PostgreSQL. As a matter of fact I dont't remember, wheter or not I put\nthe -ffast-math flag. Is this the reason for the inaccuracy?\n\nIf so - is it responsible also for the 'underflow' error?\n\n\n\t\t\t\t\tMark\n\n\n\n",
"msg_date": "Thu, 22 Mar 2001 15:59:14 +0100 (MET)",
"msg_from": "Marek PUBLICEWICZ <M.Publicewicz@elka.pw.edu.pl>",
"msg_from_op": false,
"msg_subject": "Re: Re: Strange results of CURRENT_TIMESTAMP"
}
] |
[
{
"msg_contents": "\n> >> It's great as long as you never block, but it sucks for making things\n> >> wait, because the wait interval will be some multiple of 10 msec rather\n> >> than just the time till the lock comes free.\n> \n> > On the AIX platform usleep (3) is able to really sleep microseconds without \n> > busying the cpu when called for more than approx. 100 us (the longer the interval,\n> > the less busy the cpu gets) .\n> > Would this not be ideal for spin_lock, or is usleep not very common ?\n> > Linux sais it is in the BSD 4.3 standard.\n> \n> HPUX has usleep, but the man page says\n> \n> The usleep() function is included for its historical usage. The\n> setitimer() function is preferred over this function.\n\nI doubt that setitimer has microsecond precision on HPUX.\n\n> In any case, I would expect that all these functions offer accuracy\n> no better than the scheduler's regular clock cycle (~ 100Hz) on most\n> kernels.\n\nNot on AIX, and I don't beleive that for the majority of other UNIX platforms eighter. \nI do however suspect, that some implementations need a busy loop, which would, \nif at all, only be acceptable on an SMP system.\n\nAndreas\n",
"msg_date": "Mon, 19 Mar 2001 13:15:52 +0100",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: Re[4]: Allowing WAL fsync to be done via O_SYNC\t "
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> HPUX has usleep, but the man page says\n>> \n>> The usleep() function is included for its historical usage. The\n>> setitimer() function is preferred over this function.\n\n> I doubt that setitimer has microsecond precision on HPUX.\n\nWell, if you insist on beating this into the ground:\n\n$ cat timetest.c\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n\nint main(int argc, char** argv)\n{\n int i;\n int delay;\n\n delay = atoi(argv[1]);\n\n for (i = 0; i < 1000; i++)\n usleep(delay);\n\n return 0;\n}\n$ gcc -O -Wall timetest.c\n$ time ./a.out 1\n\nreal 0m20.02s\nuser 0m0.04s\nsys 0m0.09s\n$ time ./a.out 1000\n\nreal 0m20.04s\nuser 0m0.04s\nsys 0m0.09s\n$ time ./a.out 10000\n\nreal 0m20.01s\nuser 0m0.03s\nsys 0m0.08s\n$ time ./a.out 20000\n\nreal 0m30.03s\nuser 0m0.04s\nsys 0m0.09s\n$\n$ cat timetest2.c\n#include <stdio.h>\n#include <stdlib.h>\n#include <signal.h>\n#include <time.h>\n#include <unistd.h>\n\ntypedef void (*pqsigfunc) (int);\n\npqsigfunc\npqsignal(int signo, pqsigfunc func)\n{\n\tstruct sigaction act,\n\t\t\t\toact;\n\n\tact.sa_handler = func;\n\tsigemptyset(&act.sa_mask);\n\tact.sa_flags = 0;\n\tif (signo != SIGALRM)\n\t\tact.sa_flags |= SA_RESTART;\n\tif (sigaction(signo, &act, &oact) < 0)\n\t\treturn SIG_ERR;\n\treturn oact.sa_handler;\n}\n\nvoid\ncatch_alarm(int sig)\n{\n}\n\nint main(int argc, char** argv)\n{\n\tint i;\n\tstruct itimerval iv;\n\tint delay;\n\n\tdelay = atoi(argv[1]);\n\n\tpqsignal(SIGALRM, catch_alarm);\n\n\tfor (i = 0; i < 1000; i++)\n\t{\n\t\tiv.it_value.tv_sec = 0;\n\t\tiv.it_value.tv_usec = delay;\n\t\tiv.it_interval.tv_sec = 0;\n\t\tiv.it_interval.tv_usec = 0;\n\t\tsetitimer(ITIMER_REAL, &iv, NULL);\n\t\tpause();\n\t}\n\n\treturn 0;\n}\n$ gcc -O -Wall timetest2.c\n$ time ./a.out 1\n\nreal 0m20.04s\nuser 0m0.01s\nsys 0m0.05s\n$ time ./a.out 1000\n\nreal 0m20.02s\nuser 0m0.01s\nsys 0m0.06s\n$ time ./a.out 10000\n\nreal 0m20.01s\nuser 0m0.01s\nsys 0m0.05s\n$ time ./a.out 20000\n\nreal 0m30.01s\nuser 0m0.01s\nsys 0m0.06s\n$\n\nThe usleep man page implies that usleep is actually implemented as a\nsetitimer call, which would explain the interchangeable results. In\nany case, neither one is useful for timing sub-clock-tick intervals;\nin fact they're worse than select().\n\nAnyone else want to try these examples on other platforms?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 12:27:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re[4]: Allowing WAL fsync to be done via O_SYNC "
}
] |
[
{
"msg_contents": "For those interested in the topic, this is something that went through\nthe Vorbis-dev mailing list not that long ago which implements the\nabove topic into the vorbis decoder. Might be useful to see what\nsystems it works on, and where it breaks as well as a reference\nimplementation. (patch included for files affected)\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.\n----- Original Message -----\nFrom: \"Christian Weisgerber\" <naddy@mips.inka.de>\nNewsgroups: list.vorbis.dev\nTo: <vorbis-dev@xiph.org>\nSent: Saturday, March 17, 2001 12:01 PM\nSubject: [vorbis-dev] ogg123: shared memory by mmap()\n\n\n> The patch below adds:\n>\n> - acinclude.m4: A new macro A_FUNC_SMMAP to check that sharing\npages\n> through mmap() works. This is taken from Joerg Schilling's star.\n> - configure.in: A_FUNC_SMMAP\n> - ogg123/buffer.c: If we have a working mmap(), use it to create\n> a region of shared memory instead of using System V IPC.\n>\n> Works on BSD. Should also work on SVR4 and offspring (Solaris),\n> and Linux.\n>\n>\n> --- acinclude.m4.orig Wed Feb 28 03:36:50 2001\n> +++ acinclude.m4 Sat Mar 17 17:39:58 2001\n> @@ -300,3 +300,65 @@\n> AC_SUBST(AO_LIBS)\n> rm -f conf.aotest\n> ])\n<-- SNIP -->\n> +if test $ac_cv_func_smmap = yes; then\n> + AC_DEFINE(HAVE_SMMAP)\n> +fi])\n> --- configure.in.orig Mon Feb 26 06:56:46 2001\n> +++ configure.in Sat Mar 17 17:39:45 2001\n> @@ -67,7 +67,7 @@\n> dnl Check for library functions\n> dnl --------------------------------------------------\n>\n> -dnl none\n> +AC_FUNC_SMMAP\n>\n> dnl --------------------------------------------------\n> dnl Work around FHS stupidity\n> --- ogg123/buffer.c.old Sat Mar 17 15:37:07 2001\n> +++ ogg123/buffer.c Sat Mar 17 17:40:16 2001\n> @@ -6,16 +6,16 @@\n<-- SNIP -->\n> buffer_init (buf, size);\n>\n> --\n> Christian \"naddy\" Weisgerber\nnaddy@mips.inka.de\n>\n> --- >8 ----\n> List archives: http://www.xiph.org/archives/\n> Ogg project homepage: http://www.xiph.org/ogg/\n> To unsubscribe from this list, send a message to\n'vorbis-dev-request@xiph.org'\n> containing only the word 'unsubscribe' in the body. No subject is\nneeded.\n> Unsubscribe messages sent to the list will be ignored/filtered.\n\n\n",
"msg_date": "Mon, 19 Mar 2001 07:28:21 -0500",
"msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>",
"msg_from_op": true,
"msg_subject": "Fw: [vorbis-dev] ogg123: shared memory by mmap()"
},
{
"msg_contents": "WOOT WOOT! DANGER WILL ROBINSON!\n\n> ----- Original Message -----\n> From: \"Christian Weisgerber\" <naddy@mips.inka.de>\n> Newsgroups: list.vorbis.dev\n> To: <vorbis-dev@xiph.org>\n> Sent: Saturday, March 17, 2001 12:01 PM\n> Subject: [vorbis-dev] ogg123: shared memory by mmap()\n> \n> \n> > The patch below adds:\n> >\n> > - acinclude.m4: A new macro A_FUNC_SMMAP to check that sharing\n> pages\n> > through mmap() works. This is taken from Joerg Schilling's star.\n> > - configure.in: A_FUNC_SMMAP\n> > - ogg123/buffer.c: If we have a working mmap(), use it to create\n> > a region of shared memory instead of using System V IPC.\n> >\n> > Works on BSD. Should also work on SVR4 and offspring (Solaris),\n> > and Linux.\n\nThis is a really bad idea performance wise. Solaris has a special\ncode path for SYSV shared memory that doesn't require tons of swap\ntracking structures per-page/per-process. FreeBSD also has this\noptimization (it's off by default, but should work since FreeBSD\n4.2 via the sysctl kern.ipc.shm_use_phys=1)\n\nBoth OS's use a trick of making the pages non-pageable, this allows\nsignifigant savings in kernel space required for each attached\nprocess, as well as the use of large pages which reduce the amount\nof TLB faults your processes will incurr.\n\nAnyhow, if you could make this a runtime option it wouldn't be so\nevil, but as a compile time option, it's a really bad idea for\nSolaris and FreeBSD.\n\n--\n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n",
"msg_date": "Mon, 19 Mar 2001 04:55:01 -0800",
"msg_from": "Alfred Perlstein <bright@wintelcom.net>",
"msg_from_op": false,
"msg_subject": "Re: Fw: [vorbis-dev] ogg123: shared memory by mmap()"
},
{
"msg_contents": "> > > The patch below adds:\n> > >\n> > > - acinclude.m4: A new macro A_FUNC_SMMAP to check that sharing\n> > pages\n> > > through mmap() works. This is taken from Joerg Schilling's star.\n> > > - configure.in: A_FUNC_SMMAP\n> > > - ogg123/buffer.c: If we have a working mmap(), use it to create\n> > > a region of shared memory instead of using System V IPC.\n> > >\n> > > Works on BSD. Should also work on SVR4 and offspring (Solaris),\n> > > and Linux.\n> \n> This is a really bad idea performance wise. Solaris has a special\n> code path for SYSV shared memory that doesn't require tons of swap\n> tracking structures per-page/per-process. FreeBSD also has this\n> optimization (it's off by default, but should work since FreeBSD\n> 4.2 via the sysctl kern.ipc.shm_use_phys=1)\n\n> \n> Both OS's use a trick of making the pages non-pageable, this allows\n> signifigant savings in kernel space required for each attached\n> process, as well as the use of large pages which reduce the amount\n> of TLB faults your processes will incurr.\n\nThat is interesting. BSDi has SysV shared memory as non-pagable, and I\nalways thought of that as a bug. Seems you are saying that having it\npagable has a significant performance penalty. Interesting.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 17:10:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fw: [vorbis-dev] ogg123: shared memory by mmap()"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010320 14:10] wrote:\n> > > > The patch below adds:\n> > > >\n> > > > - acinclude.m4: A new macro A_FUNC_SMMAP to check that sharing\n> > > pages\n> > > > through mmap() works. This is taken from Joerg Schilling's star.\n> > > > - configure.in: A_FUNC_SMMAP\n> > > > - ogg123/buffer.c: If we have a working mmap(), use it to create\n> > > > a region of shared memory instead of using System V IPC.\n> > > >\n> > > > Works on BSD. Should also work on SVR4 and offspring (Solaris),\n> > > > and Linux.\n> > \n> > This is a really bad idea performance wise. Solaris has a special\n> > code path for SYSV shared memory that doesn't require tons of swap\n> > tracking structures per-page/per-process. FreeBSD also has this\n> > optimization (it's off by default, but should work since FreeBSD\n> > 4.2 via the sysctl kern.ipc.shm_use_phys=1)\n> \n> > \n> > Both OS's use a trick of making the pages non-pageable, this allows\n> > signifigant savings in kernel space required for each attached\n> > process, as well as the use of large pages which reduce the amount\n> > of TLB faults your processes will incurr.\n> \n> That is interesting. BSDi has SysV shared memory as non-pagable, and I\n> always thought of that as a bug. Seems you are saying that having it\n> pagable has a significant performance penalty. Interesting.\n\nYes, having it pageable is actually sort of bad.\n\nIt doesn't allow you to do several important optimizations.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n",
"msg_date": "Tue, 20 Mar 2001 15:44:10 -0800",
"msg_from": "Alfred Perlstein <bright@wintelcom.net>",
"msg_from_op": false,
"msg_subject": "Re: Fw: [vorbis-dev] ogg123: shared memory by mmap()"
}
] |
[
{
"msg_contents": "I think it is time to start giving people official responsibility for\ncertain areas of the code. \n\nIn the old says, we didn't have many _exports_, and people submitting\npatches often knew more than we did because they had spent time studying\nthe code.\n\nNow, we have much more expertise, to the point that people not involved\nin those areas can't really contribute very well without the assistance\nof those experts.\n\nFor example, I can't seem to evaluate any Makefile changes because Peter\nE. knows how the system is designed much better. The same is true for\nJDBC and many other areas.\n\nI would like to create a web page in the developer's corner that\ncontains module names and the people who are most knowledgeable. I will\nno longer apply changes to those areas without getting approval from\nthose people. My recent attempts have made things worse rather than\nbetter. I suggest other committers do the same.\n\nMy short list right now is:\n\n\tMakefiles/configure\tPeter E.\n\tpsql\t\t\tPeter E.\n\tJdbc\t\t\tPeter M.\n\tOdbc\t\t\tHiroshi?\n\tEcpg\t\t\tMichael\n\tPython\t\t\tD'Arcy\n\tOptimizer\t\tTom Lane\n\tRewrite\t\t\tJan\n\tLocking\t\t\tTom\n\tCache\t\t\tTom\n\tDate/Time\t\tThomas\n\tPl/PgSQL\t\tJan\n\tSGML\t\t\tPeter E, Thomas\n\tWAL\t\t\tVadim, Tom\n\tFAQ/TODO\t\tBruce\n\tRegression\t\tPeter E?\n\tMultibyte\t\tTatsuo \n\tGIST\t\t\tOleg\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Mar 2001 11:34:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Patch application"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think it is time to start giving people official responsibility for\n> certain areas of the code. \n\nThis strikes me as overly formalistic, and more likely to lead to\narteriosclerosis than any improvement in code quality. Particularly\nwith a breakdown such as you have proposed, which would likely mean\nasking multiple people to approve any given patch.\n\nI think the procedural error in this past weekend's contrib mess was\nsimply that you didn't pay attention to the fact that Oleg's patch was\nbased on an out-of-date copy of the contrib module. You should have\neither merged the changes or bounced it back to Oleg for him to do so.\n\nInsisting on CVS $Header$ or $Id$ markers in all code files might help\nto detect this kind of error --- but nothing will help if you are\nwilling to overwrite other people's changes simply because you didn't\nrecall the reason for them at the moment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 14:42:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch application "
},
{
"msg_contents": "\nThe below basically summarizes my opinion quite well ...\n\nOn Mon, 19 Mar 2001, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think it is time to start giving people official responsibility for\n> > certain areas of the code.\n>\n> This strikes me as overly formalistic, and more likely to lead to\n> arteriosclerosis than any improvement in code quality. Particularly\n> with a breakdown such as you have proposed, which would likely mean\n> asking multiple people to approve any given patch.\n>\n> I think the procedural error in this past weekend's contrib mess was\n> simply that you didn't pay attention to the fact that Oleg's patch was\n> based on an out-of-date copy of the contrib module. You should have\n> either merged the changes or bounced it back to Oleg for him to do so.\n>\n> Insisting on CVS $Header$ or $Id$ markers in all code files might help\n> to detect this kind of error --- but nothing will help if you are\n> willing to overwrite other people's changes simply because you didn't\n> recall the reason for them at the moment.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Mon, 19 Mar 2001 15:50:56 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Patch application "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think it is time to start giving people official responsibility for\n> > certain areas of the code. \n> \n> This strikes me as overly formalistic, and more likely to lead to\n> arteriosclerosis than any improvement in code quality. Particularly\n> with a breakdown such as you have proposed, which would likely mean\n> asking multiple people to approve any given patch.\n> \n> I think the procedural error in this past weekend's contrib mess was\n> simply that you didn't pay attention to the fact that Oleg's patch was\n> based on an out-of-date copy of the contrib module. You should have\n> either merged the changes or bounced it back to Oleg for him to do so.\n> \n> Insisting on CVS $Header$ or $Id$ markers in all code files might help\n> to detect this kind of error --- but nothing will help if you are\n> willing to overwrite other people's changes simply because you didn't\n> recall the reason for them at the moment.\n\nI understand the formalistic problem, and maybe I overstated its\nformality, but it seems it would be good to maintain a list for two\nreasons:\n\n\t1) With formalized experts in various areas, if someone replies\nto an email, the recipient can clearly know this person is an expert in\nthat area. It also helps focus attention on certain people for\ndevelopment assistance.\n\n\t2) The number of patches that I apply that need fixing by\nsomeone else is getting more frequent. The most recent patch is just\none of many that had to be cleaned up for various reasons. I reviewed\nthe patch and still didn't see the intent of the Makefile change. In\nthis case, the CVS logs would have helped, but in others there is a\ndesign goal that I just can not comprehend. \n\nLooking at the list, I feel I would have to contact someone before\nmaking any changes to these areas. Even if I can get the patch applied\nproperly, I doubt I would do it the _right_ way. 
Sometimes it is just\nthat the style of the patcher doesn't match the style in our sources.\n\nMaybe we don't have to make it required, but plain patches from people I\ndon't know really need some review. Perhaps I can attach the patch to\nthe PATCHES list when I apply it so people can see exactly what was\nchanged.\n\nAren't people upset about the minor fixes they have to make to patches I\napply? Is it easier to just clean up things rather than find/apply the\npatches?\n\nFor example, almost any change to an SGML file seems to require Peter E\nto fix some part of it, usually the markup. Is that OK, Peter? Most of\nthe interfaces require an interface expert's comment I would think.\n\n---------------------------------------------------------------------------\n\n Makefiles/configure Peter E.\n psql Peter E.\n Jdbc Peter M.\n Odbc Hiroshi?\n Ecpg Michael\n Python D'Arcy\n Optimizer Tom Lane\n Rewrite Jan\n Locking Tom\n Cache Tom\n Date/Time Thomas\n Pl/PgSQL Jan\n SGML Peter E, Thomas\n WAL Vadim, Tom\n FAQ/TODO Bruce\n Regression Peter E?\n Multibyte Tatsuo \n GIST Oleg\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Mar 2001 15:39:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Patch application"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I understand the formalistic problem, and maybe I overstated its\n> formality, but it seems it would be good to maintain a list for two\n> reasons:\n\nI don't have a problem with keeping an informal list of area experts.\nI was just objecting to the notion of formal signoffs (I doubt Peter E.\nwants to look at every single Makefile change, for example).\n\nAlso, there's a point that keeps coming up in these discussions: the\nstandards need to be different depending on what time of the release\ncycle it is. Perhaps formal signoffs *are* appropriate when we're\nthis late in beta.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 15:56:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch application "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> I understand the formalistic problem, and maybe I overstated its\n> formality, but it seems it would be good to maintain a list for two\n> reasons:\n\nIn projects like gcc and the GNU binutils, we use a MAINTAINERS file.\nSome people have blanket write privileges. Some people have write\npriviliges to certain areas of the code. Anybody else needs a patch\nto be approved before they can check it in. Patches which are\n``obviously correct'' are always OK.\n\nThe MAINTAINERS file can be used as a guide for who to ask in certain\nareas of the code.\n\nThis may be overly complex for Postgres now. But I believe that you\nwill need something of this nature as the project continues to grow.\nThis permits you to scale to more developers.\n\nNote that the MAINTAINERS file is not enforced by a program. It is\nonly enforced by people noticing an unapproved checkin message, and\ntheoreticalliy removing write privileges.\n\nFor example, I have appended the gcc MAINTAINERS file.\n\nIan\n\n\t\t\tBlanket Write Privs.\n\nCraig Burley\t\t\t\t\tcraig@jcb-sc.com\nJohn Carr\t\t\t\t\tjfc@mit.edu\nRichard Earnshaw\t\t\t\trearnsha@arm.com\nRichard Henderson rth@redhat.com\nGeoffrey Keating\t\t\t\tgeoffk@redhat.com\nRichard Kenner\t\t\t\t\tkenner@nyu.edu\nJeff Law\t\t\t\t\tlaw@redhat.com\nJason Merrill\t\t\t\t\tjason@redhat.com\nMichael Meissner\t\t\t\tmeissner@redhat.com\nDavid S. 
Miller\t\t\t\t\tdavem@redhat.com\nMark Mitchell\t\t\t\t\tmark@codesourcery.com\nBernd Schmidt\t\t\t\t\tbernds@redhat.com\nJim Wilson\t\t\t\t\twilson@redhat.com\n\n\n\t\t\tVarious Maintainers\n\nsh port\t\t\tJoern Rennecke\t\tamylaar@redhat.com\n\t\t\tAlexandre Oliva\t\taoliva@redhat.com\nv850 port\t\tNick Clifton\t\tnickc@redhat.com\nv850 port\t\tMichael Meissner\tmeissner@redhat.com\narm port\t\tNick Clifton\t\tnickc@redhat.com\narm port\t\tRichard Earnshaw\trearnsha@arm.com\nm32r port\t\tNick Clifton\t\tnickc@redhat.com\n\t\t\tMichael Meissner\tmeissner@redhat.com\nh8 port\t\t\tJeff Law\t\tlaw@redhat.com\nmcore\t\t\tNick Clifton\t\tnickc@redhat.com\n\t\t\tJim Dein\t\tjdein@windriver.com\nmn10200 port\t\tJeff Law\t\tlaw@redhat.com\nmn10300 port\t\tJeff Law\t\tlaw@redhat.com\n\t\t\tAlexandre Oliva\t\taoliva@redhat.com\nhppa port\t\tJeff Law\t\tlaw@redhat.com\nm68hc11 port\t\tStephane Carrez\t\tStephane.Carrez@worldnet.fr\nm68k port (?)\t\tJeff Law\t\tlaw@redhat.com\nm68k-motorola-sysv port\tPhilippe De Muyter\tphdm@macqel.be\nrs6000 port\t\tGeoff Keating\t\tgeoffk@redhat.com\nrs6000 port\t\tDavid Edelsohn\t\tdje@watson.ibm.com\nmips port\t\tGavin Romig-Koch\tgavin@redhat.com\nia64 port\t\tJim Wilson\t\twilson@redhat.com\ni860 port\t\tJason Eckhardt\t\tjle@redhat.com\ni960 port\t\tJim Wilson\t\twilson@redhat.com\na29k port\t\tJim Wilson\t\twilson@redhat.com\nalpha port\t\tRichard Henderson\trth@redhat.com\nsparc port\t\tRichard Henderson\trth@redhat.com\nsparc port\t\tDavid S. 
Miller\t\tdavem@redhat.com\nsparc port\t\tJakub Jelinek\t\tjakub@redhat.com\nx86 ports\t\tStan Cox\t\tscox@redhat.com\nc4x port\t\tMichael Hayes\t\tm.hayes@elec.canterbury.ac.nz\narc port\t\tRichard Kenner\t\tkenner@nyu.edu\nfr30 port\t\tNick Clifton\t\tniclc@redhat.com\nvax port\t\tDave Anglin\t\tdave.anglin@nrc.ca\nfortran\t\t\tRichard Henderson\trth@redhat.com\nfortran\t\t\tToon Moene\t\ttoon@moene.indiv.nluug.nl\nc++\t\t\tJason Merrill\t\tjason@redhat.com\nc++ Mark Mitchell\t\tmark@codesourcery.com\nchill\t\t\tDave Brolley\t\tbrolley@redhat.com\nchill\t\t\tPer Bothner\t\tper@bothner.com\njava\t\t\tPer Bothner\t\tper@bothner.com\njava\t\t\tAlexandre Petit-Bianco\tapbianco@redhat.com\nmercury\t\t\tFergus Henderson\tfjh@cs.mu.oz.au\nobjective-c\t\tStan Shebs\t\tshebs@apple.com\nobjective-c\t\tOvidiu Predescu\t\tovidiu@cup.hp.com\ncpplib\t\t\tDave Brolley\t\tbrolley@redhat.com\ncpplib\t\t\tPer Bothner\t\tper@bothner.com\ncpplib\t\t\tZack Weinberg\t\tzackw@stanford.edu\ncpplib\t\t\tNeil Booth\t\tneil@daikokuya.demon.co.uk\nalias analysis\t\tJohn Carr\t\tjfc@mit.edu\nloop unrolling\t\tJim Wilson\t\twilson@redhat.com\nloop discovery\t\tMichael Hayes\t\tm.hayes@elec.canterbury.ac.nz\nscheduler (+ haifa)\tJim Wilson\t\twilson@redhat.com\nscheduler (+ haifa)\tMichael Meissner\tmeissner@redhat.com\nscheduler (+ haifa)\tJeff Law\t\tlaw@redhat.com\nreorg\t\t\tJeff Law\t\tlaw@redhat.com\ncaller-save.c\t\tJeff Law\t\tlaw@redhat.com\ndebugging code\t\tJim Wilson\t\twilson@redhat.com\ndwarf debugging code\tJason Merrill\t\tjason@redhat.com\nc++ runtime libs Gabriel Dos Reis dosreis@cmla.ens-cachan.fr\nc++ runtime libs\tUlrich Drepper\t\tdrepper@redhat.com\nc++ runtime libs\tPhil Edwards\t\tpedwards@jaj.com\nc++ runtime libs\tBenjamin Kosnik\t\tbkoz@redhat.com\n*synthetic multiply\tTorbjorn Granlund\ttege@swox.com\n*c-torture\t\tTorbjorn Granlund\ttege@swox.com\n*f-torture\t\tKate Hedstrom\t\tkate@ahab.rutgers.edu\nsco5, unixware, sco udk\tRobert 
Lipe\t\trobertlipe@usa.net\nfixincludes\t\tBruce Korb\t\tbkorb@gnu.org\ngcse.c \t\t\tJeff Law\t\tlaw@redhat.com\nglobal opt framework\tJeff Law\t\tlaw@redhat.com\njump.c\t\t\tDavid S. Miller\t\tdavem@redhat.com\nweb pages\t\tGerald Pfeifer\t\tpfeifer@dbai.tuwien.ac.at\nC front end/ISO C99\tGavin Romig-Koch\tgavin@redhat.com\nconfig.sub/config.guess\tBen Elliston\t\tbje@redhat.com\navr port\t\tDenis Chertykov\t\tdenisc@overta.ru\n\t\t\tMarek Michalkiewicz\tmarekm@linux.org.pl\nbasic block reordering\tJason Eckhardt\t\tjle@redhat.com\ni18n\t\t\tPhilipp Thomas\t\tpthomas@suse.de\ndiagnostic messages\tGabriel Dos Reis\tgdr@codesourcery.com\nwindows, cygwin, mingw\tChristopher Faylor\tcgf@redhat.com\nwindows, cygwin, mingw\tDJ Delorie\t\tdj@redhat.com\nDJGPP\t\t\tDJ Delorie\t\tdj@delorie.com\nlibiberty\t\tDJ Delorie\t\tdj@redhat.com\nbuild machinery (*.in)\tDJ Delorie\t\tdj@redhat.com\nbuild machinery (*.in)\tAlexandre Oliva\t\taoliva@redhat.com\n\nNote individuals who maintain parts of the compiler need approval to check\nin changes outside of the parts of the compiler they maintain.\n\n\n\t\t\tWrite After Approval\nScott Bambrough\t\t\t\t\tscottb@netwinder.org\nLaurynas Biveinis\t\t\t\tlauras@softhome.net\nPhil Blundell\t\t\t\t\tpb@futuretv.com\nHans Boehm\t\t\t\t\thboehm@gcc.gnu.org\nAndrew cagney\t\t\t\t\tcagney@redhat.com\nEric Christopher\t\t\t\techristo@redhat.com\nWilliam Cohen\t\t\t\t\twcohen@redhat.com\n*Paul Eggert\t\t\t\t\teggert@twinsun.com\nBen Elliston\t\t\t\t\tbje@redhat.com\nMarc Espie\t\t\t\t\tespie@cvs.openbsd.org\nKaveh Ghazi\t\t\t\t\tghazi@caip.rutgers.edu\nAnthony Green\t\t\t\t\tgreen@redhat.com\nStu Grossman\t\t\t\t\tgrossman@redhat.com\nAndrew Haley\t\t\t\t\taph@redhat.com\nAldy Hernandez\t\t\t\t\taldyh@redhat.com\nKazu Hirata\t\t\t\t\tkazu@hxi.com\nManfred Hollstein\t\t\t\tmhollstein@redhat.com\nJan Hubicka\t\t\t\t\thubicka@freesoft.cz\nAndreas Jaeger\t\t\t\t\taj@suse.de\nJakub Jelinek\t\t\t\t\tjakub@redhat.com\nKlaus 
Kaempf\t\t\t\t\tkkaempf@progis.de\nBrendan Kehoe\t\t\t\t\tbrendan@redhat.com\nMumit Khan\t\t\t\t\tkhan@xraylith.wisc.edu\nMarc Lehmann\t\t\t\t\tpcg@goof.com\nAlan Lehotsky\t\t\t\t\tapl@alum.mit.edu\nWarren Levy\t\t\t\t\twarrenl@redhat.com\nKriang Lerdsuwanakij\t\t\t\tlerdsuwa@users.sourceforge.net\nDon Lindsay\t\t\t\t\tdlindsay@redhat.com\nDave Love\t\t\t\t\td.love@dl.ac.uk\nMartin v. L�wis\t\t\t\t\tloewis@informatik.hu-berlin.de\n*HJ Lu\t\t\t\t\t\thjl@lucon.org\nAndrew Macleod\t\t\t\t\tamacleod@redhat.com\nVladimir Makarov\t\t\t\tvmakarov@redhat.com\nGreg McGary\t\t\t\t\tgkm@gnu.org\nBryce McKinlay\t\t\t\t\tbryce@gcc.gnu.org\nAlan Modra\t\t\t\t\talan@linuxcare.com.au\nToon Moene\t\t\t\t\ttoon@moene.indiv.nluug.nl\nCatherine Moore\t\t\t\t\tclm@redhat.com\nJoseph Myers\t\t\t\t\tjsm28@cam.ac.uk\nHans-Peter Nilsson\t\t\t\thp@bitrange.com\nDiego Novillo\t\t\t\t\tdnovillo@redhat.com\nDavid O'Brien\t\t\t\t\tobrien@FreeBSD.org\nJeffrey D. Oldham\t\t\t\toldham@codesourcery.com\nAlexandre Petit-Bianco\t\t\t\tapbianco@redhat.com\nClinton Popetz\t\t\t\t\tcpopetz@cpopetz.com\nKen Raeburn\t\t\t\t\traeburn@redhat.com\nRolf Rasmussen\t\t\t\t\trolfwr@gcc.gnu.org\nGabriel Dos Reis dosreis@cmla.ens-cachan.fr\nAlex Samuel\t\t\t\t\tsamuel@codesourcery.com\nBernd Schmidt\t\t\t\t\tbernds@redhat.com\nAndreas Schwab\t\t\t\t\tschwab@suse.de\nStan Shebs\t\t\t\t\tshebs@apple.com\nNathan Sidwell\t\t\t\t\tnathan@acm.org\nFranz Sirl\t\t\t\t\tfranz.sirl-kernel@lauterbach.com\nMichael Sokolov\t\t\t\t\tmsokolov@ivan.Harhan.ORG\nMike Stump\t\t\t\t\tmrs@windriver.com\nIan Taylor\t\t\t\t\tian@zembu.com\nPhilipp Thomas\t\t\t\t\tpthomas@suse.de\nKresten Krab Thorup\t\t\t\tkrab@gcc.gnu.org\nTom Tromey\t\t\t\t\ttromey@redhat.com\nJohn Wehle\t\t\t\t\tjohn@feith.com\nMark Wielaard\t\t\t\t\tmark@gcc.gnu.org\n* Indicates folks we need to get Kerberos/ssh accounts ready so they\ncan write in the source tree\n",
"msg_date": "19 Mar 2001 12:57:19 -0800",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch application"
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I understand the formalistic problem, and maybe I overstated its\n> > formality, but it seems it would be good to maintain a list for two\n> > reasons:\n> \n> I don't have a problem with keeping an informal list of area experts.\n> I was just objecting to the notion of formal signoffs (I doubt Peter E.\n> wants to look at every single Makefile change, for example).\n\nOK, that is good. I think it will make a nice web page.\n\n\n> Also, there's a point that keeps coming up in these discussions: the\n> standards need to be different depending on what time of the release\n> cycle it is. Perhaps formal signoffs *are* appropriate when we're\n> this late in beta.\n\nOh, yes, agreed, beta requires signoffs.\n\nBut I can't seem to get patches in that don't require some _expert_ to\ncome along and improve it. If the _experts_ are OK with that, then that\nis fine, but if they want things done differently somehow, I want to\nmeet their needs.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Mar 2001 16:04:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Patch application"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> In projects like gcc and the GNU binutils, we use a MAINTAINERS file.\n> Some people have blanket write privileges. Some people have write\n> priviliges to certain areas of the code. Anybody else needs a patch\n> to be approved before they can check it in. Patches which are\n> ``obviously correct'' are always OK.\n\nWould you enlarge on what that fourth sentence means in practice?\n\nSeems like the sticky issue here is what constitutes \"approval\".\nWe already have a policy that changes originating from non-committers\nare supposed to be reviewed before they get applied, but what Bruce\nis worried about is the quality of the review process.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 16:08:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch application "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ian Lance Taylor <ian@airs.com> writes:\n> > In projects like gcc and the GNU binutils, we use a MAINTAINERS file.\n> > Some people have blanket write privileges. Some people have write\n> > priviliges to certain areas of the code. Anybody else needs a patch\n> > to be approved before they can check it in. Patches which are\n> > ``obviously correct'' are always OK.\n> \n> Would you enlarge on what that fourth sentence means in practice?\n> \n> Seems like the sticky issue here is what constitutes \"approval\".\n> We already have a policy that changes originating from non-committers\n> are supposed to be reviewed before they get applied, but what Bruce\n> is worried about is the quality of the review process.\n\nIn practice, what it means is that somebody who has a patch, and does\nnot have the appropriate privileges, sends it to the mailing list.\nMost patches apply to a single part of the code. The person\nresponsible for that part of the code, or a person with blanket write\nprivileges, reviews the patch, and approves it, or denies it, or\napproves it with changes.\n\nIf approved, and the original submitter has write privileges, he or\nshe checks it in. Otherwise, the maintainer who did the review checks\nit in.\n\nIf approved with changes, and the original submitter has write\nprivileges, he or she makes the changes and checks it in. Otherwise\nhe or she makes the changes, sends them back, and the maintainer who\ndid the review checks it in.\n\nOne advantage of the MAINTAINERS file with respect to the review\nprocess is that it tells you who knows most about particular areas of\nthe code. For example, in the gcc MAINTAINERS file I sent earlier,\nthere are people with blanket write privileges who are also listed as\nresponsible for particular areas of gcc.\n\nAnother advantage is that it reduces load on the maintainers, since\nmany people can check in their own patches. 
Since there are many\npeople, some sort of control is needed.\n\nThe goal is not excessive formalism; as I said, there is nothing which\nactually prevents anybody with write privileges from checking in a\npatch to any part of the code. The goal is to guide people to the\nright person to approve a particular patch.\n\nIan\n",
"msg_date": "19 Mar 2001 13:34:14 -0800",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch application"
},
{
"msg_contents": "Bruce, what is the point of even an informal list of \"experts\"? There\nhave always been areas that folks have \"adopted\" or have developed an\ninterest and expertise in. And we've gotten lots of really great\ncontributions both large and small from people we barely knew existed\nuntil the patch arrived.\n\nUnless there is a \"process improvement\" which comes with developing this\nlist, I don't really see what the benefit will be wrt existence of the\nlist, developing the list, or deciding who should be on a list, etc etc.\n\nistm that the list of active developers serves the function of\nacknowledging contributors. Not sure what another list will do for us,\nand it may have the effect of being an artificial barrier or distinction\nbetween folks.\n\nAll imho of course ;)\n\n - Thomas\n",
"msg_date": "Tue, 20 Mar 2001 05:36:04 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Patch application"
},
{
"msg_contents": "At 05:36 20/03/01 +0000, Thomas Lockhart wrote:\n>\n>Unless there is a \"process improvement\" which comes with developing this\n>list, I don't really see what the benefit will be wrt existance of the\n>list, developing the list, of deciding who should be on a list, etc etc.\n>\n\nTotally agree; such formality will be a barrier and we will gain almost\nnothing. It will also further disenfranchise the wider developer community.\nISTM that the motivation is based on one or two isolated incidents where\npatches were applied when they should not have been. Tom's suggestion of\ninsisting on CVS headers will deal with the specific case in point, at far\nlesser social and labour overhead.\n\nThe last thing we want to do to people who contribute heavily to the\nproject is say 'Gee, thanks. And you are now responsible for all approvals\non this area of code', especially in late beta, when they are likely to be\nquite busy. Similarly, when we are outside the beta phase, we don't need\nthe process.\n\nI suspect that anybody who has worked on a chunk of code in a release cycle\nwill keep an eye on what is happening to the code.\n\nMy suggestion for handling this process would be to allow an opt-in 'watch'\nto be placed on files/modules in CVS (CVS supports this, from memory).\nThen, eg, I can say 'send me info when someone makes a patch to pg_dump'.\nSimilarly, when I start working on a module, I can add myself to the list\nto be informed when someone changes it underneath me. This would be\ngenuinely useful to me as a part-time developer.\n\nAs a further development from this, it would be good to see a way for bug\nreports to be redirected to 'subsystems' (or something like that), which\ndevelopers could opt into. So, using myself as an example, I would like\nspecial notification when a pg_dump bug is reported. 
Similarly, I might\nlike to be notified when changes are made to the WAL code...\n\nIf you want to make cute web pages, and define domain experts, make it\nuseful, make it opt-in, and make it dynamic.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 20 Mar 2001 17:09:29 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: Patch application"
},
{
"msg_contents": "\nOK, seems there have been enough objections that I will not implement a\n\"experts\" page, nor change the way patches are applied.\n\nI will be posting a diff -c of any patches I have to munge into place,\nso people can see how stuff was merged into the code.\n\nIt seems the problem of people having to massage patches after they are\napplied is either not a big deal, or the other ways of applying patches\nare considered worse.\n\n\n\n> At 05:36 20/03/01 +0000, Thomas Lockhart wrote:\n> >\n> >Unless there is a \"process improvement\" which comes with developing this\n> >list, I don't really see what the benefit will be wrt existance of the\n> >list, developing the list, of deciding who should be on a list, etc etc.\n> >\n> \n> Totally agree; such formality will be barrier and we will gain almost\n> nothing. It will also further disenfranchise the wider developer community.\n> ISTM that the motivation is based on one or two isolated incidents where\n> patches were applied when they should not have been. Tom's suggestion of\n> insisting on CVS headers will deal with the specific case in point, at far\n> lesser social and labour overhead.\n> \n> The last thing we want to do to people who contribute heavily to the\n> project is say 'Gee, thanks. And you are now responsible for all approvals\n> on this area of code', especially in late beta, when they are likely to be\n> quite busy. 
Similarly, when we are outside the beta phase, we don't need\n> the process.\n> \n> I suspect that anybody who has worked on a chunk of code in a release cycle\n> will keep an eye on what is happening to the code.\n> \n> My suggestion for handling this process would be to allow an opt-in 'watch'\n> to be placed on files/modules in CVS (CVS supports this, from memory).\n> Then, eg, I can say 'send me info when someone makes a match to pg_dump'.\n> Similarly, when I start working on a module, I can add myself to the list\n> to be informed when someone changes it underneath me. This would be\n> genuinely useful to me as a part-time developer.\n> \n> As a further development from this, it would be good to see a way for bug\n> reports to be redirected to 'subsystems' (or something like that), which\n> developers could opt into. So, using myself as an example, I would like\n> special notification when a pg_dump bug is reported. Similarly, I might\n> like to be notified when changes are made to the WAL code...\n> \n> If you want to make cute web pages, and define domain experts, make it\n> useful, make it opt-in, and make it dynamic.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 17:14:07 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: Patch application"
}
] |
[
{
"msg_contents": "\nIs there any way to get just the ODBC RPM to install with OUT\ninstalling the whole DB? \n\nI have a strange situation:\n\nStarOffice 5.2 (Linux) Running under FreeBSD Linux Emulation\nPG running NATIVE.\n\nI want the two to talk, using ODBC.\n\nHow do I make this happen?\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 19 Mar 2001 12:35:13 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "ODBC/FreeBSD/LinuxEmulation/RPM?"
},
{
"msg_contents": "* Larry Rosenman <ler@lerctr.org> [010319 10:35] wrote:\n> \n> Is there any way to get just the ODBC RPM to install with OUT\n> installing the whole DB? \n> \n> I have a strange situation:\n> \n> StarOffice 5.2 (Linux) Running under FreeBSD Linux Emulation\n> PG running NATIVE.\n> \n> I want the two to talk, using ODBC.\n> \n> How do I make this happen?\n\nrpm2cpio <pg_rpmfile.rpm> > pg_rpmfile.cpio\ncpio -i < pg_rpmfile.cpio\ntar xzvf pg_rpmfile.tgz\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n",
"msg_date": "Mon, 19 Mar 2001 11:13:23 -0800",
"msg_from": "Alfred Perlstein <bright@wintelcom.net>",
"msg_from_op": false,
"msg_subject": "Re: ODBC/FreeBSD/LinuxEmulation/RPM?"
},
{
"msg_contents": "* Alfred Perlstein <bright@wintelcom.net> [010319 11:27] wrote:\n> * Larry Rosenman <ler@lerctr.org> [010319 10:35] wrote:\n> > \n> > Is there any way to get just the ODBC RPM to install with OUT\n> > installing the whole DB? \n> > \n> > I have a strange situation:\n> > \n> > StarOffice 5.2 (Linux) Running under FreeBSD Linux Emulation\n> > PG running NATIVE.\n> > \n> > I want the two to talk, using ODBC.\n> > \n> > How do I make this happen?\n> \n> rpm2cpio <pg_rpmfile.rpm> > pg_rpmfile.cpio\n> cpio -i < pg_rpmfile.cpio\n> tar xzvf pg_rpmfile.tgz\n\nSorry, I was just waking up when I wrote this... the idea is to\nextract the RPM then just grab the required ODBC files.\n\nbest of luck,\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n",
"msg_date": "Mon, 19 Mar 2001 11:44:02 -0800",
"msg_from": "Alfred Perlstein <bright@wintelcom.net>",
"msg_from_op": false,
"msg_subject": "Re: ODBC/FreeBSD/LinuxEmulation/RPM?"
},
{
"msg_contents": "I figured that out, now to get the ODBC stuff totally right on the LINUX \nside\nof the box.\n\nDo we work with unixODBC or the other one? \n\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/19/01, 1:44:02 PM, Alfred Perlstein <bright@wintelcom.net> wrote \nregarding Re: [HACKERS] ODBC/FreeBSD/LinuxEmulation/RPM?:\n\n\n> * Alfred Perlstein <bright@wintelcom.net> [010319 11:27] wrote:\n> > * Larry Rosenman <ler@lerctr.org> [010319 10:35] wrote:\n> > >\n> > > Is there any way to get just the ODBC RPM to install with OUT\n> > > installing the whole DB?\n> > >\n> > > I have a strange situation:\n> > >\n> > > StarOffice 5.2 (Linux) Running under FreeBSD Linux Emulation\n> > > PG running NATIVE.\n> > >\n> > > I want the two to talk, using ODBC.\n> > >\n> > > How do I make this happen?\n> >\n> > rpm2cpio <pg_rpmfile.rpm> > pg_rpmfile.cpio\n> > cpio -i < pg_rpmfile.cpio\n> > tar xzvf pg_rpmfile.tgz\n\n> Sorry, i was just waking up when I wrote this... the idea is to\n> extract the rpm then just grab the required ODBC files.\n\n> best of luck,\n> --\n> -Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n",
"msg_date": "Mon, 19 Mar 2001 19:51:21 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: ODBC/FreeBSD/LinuxEmulation/RPM?"
}
] |
[
{
"msg_contents": "I've looked at the elog calls in the source, about 1700 in total (only\nelog(ERROR)). If we mapped these to the SQL error codes then we'd have\nabout two dozen calls with an assigned code and the rest being \"other\".\nThe way I estimate it (I didn't really look at *each* call, of course) is\nthat about 2/3 of the calls are internal panic calls (\"cache lookup of %s\nfailed\"), 1/6 are SQL-level problems, and the rest are operating system,\nstorage problems, \"not implemented\", misconfigurations, etc.\n\nA problem that makes this quite hard to manage is that many errors can be\nreported from several places, e.g., the parser, the executor, the access\nmethod. Some of these messages are probably not readily reproducible\nbecause they are caught elsewhere.\n\nConsequently, the most pragmatic approach to assigning error codes\nmight be to just pick some numbers and give them out gradually. A\nhierarchical subsystem+code might be useful, beyond that it really depends\non what we expect from error codes in the first place. Does anyone have\ngood experiences from other products?\n\nEssentially, I envision making up a new function, say \"elogc\", which has\n\n elogc(<level>, [<subsys>,?] <code>, message...)\n\nwhere the code is some macro, the expansion of which is to be determined.\nA call to \"elogc\" would also require a formalized message wording, adding\nthe error code to the documentation, which also requires having a fairly\ngood idea how the error can happen and how to handle it. 
This could\nperhaps even be automated to some extent.\n\nAll the calls that are not converted yet will be assigned to the generic\n\"internal error\" class; most of them will stay this way.\n\n\nAs for translations, I don't think we have to worry about this right now.\nAssuming that we would use gettext or something similar, we can tell it\nthat all calls to elog (or \"elogc\" or whatever) contain translatable\nstrings, so we don't have to uglify it with gettext(...) or _(...) calls\nor what else.\n\n\nSo we need some good error numbering scheme. Any ideas?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 19 Mar 2001 23:56:32 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "More on elog and error codes"
},
{
"msg_contents": "At 23:56 19/03/01 +0100, Peter Eisentraut wrote:\n>\n>Essentially, I envision making up a new function, say \"elogc\", which has\n>\n> elogc(<level>, [<subsys>,?] <code>, message...)\n>\n>where the code is some macro, the expansion of which is to be determined.\n>A call to \"elogc\" would also require a formalized message wording, adding\n>the error code to the documentation, which also requires having a fairly\n>good idea how the error can happen and how to handle it. This could\n>perhaps even be automated to some extent.\n>\n>All the calls that are not converted yet will be assigned a to the generic\n>\"internal error\" class; most of them will stay this way.\n>\n...\n>\n>So we need some good error numbering scheme. Any ideas?\n>\n\nFWIW, the VMS scheme has error numbers broken down to include system,\nsubsystem, error number & severity. These are maintained in an error\nmessage source file. eg. the file system's 'file not found' error message\nis something like:\n\nFACILITY RMS (the file system)\n...\nSEVERITY WARNING\n...\nFILNFND \"File %AS not found\"\n...\n\nIt's a while since I used VMS messages files regularly, this is at least\nrepresentative. It has the drawback that severity is often tied to the\nmessage, not the circumstance, but this is a problem only rarely.\n\nIn code, the messages are used as external symbols (probably in our case\nrepresenting pointers to C format strings). In making extensive use of such\na mnemonics, I never really needed to have full text messages. Once a set\nof standards is in place for message abbreviations, the most people can\nread the message codes. This would mean that:\n\n elogc(<level>, [<subsys>,?] 
<code>, message...)\n\nbecomes:\n\n elogc(<code> [, parameter...])\n\neg.\n\n \"cache lookup of %s failed\"\n\nmight be replaced by:\n\n elog(CACHELOOKUPFAIL, cacheItemThatFailed);\n\nand \n \"internal error: %s\"\n\nbecomes\n\n elog(INTERNAL, \"could not find the VeryImportantThing\");\n\nUnlike VMS, it's probably a good idea to separate the severity from the\nerror code, since a CACHELOOKUPFAIL in one place may be less significant\nthan another (eg. severity=debug).\n\nI also think it's important that we get the source file and line number\nsomewhere in the message, and if we have these, we may not need the subsystem.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 20 Mar 2001 10:48:55 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> I also think it's important that we get the source file and line number\n> somewhere in the message, and if we have these, we may not need the\n> subsystem.\n\nI agree that the subsystem concept is not necessary, except possibly as\na means of avoiding collisions in the error-symbol namespace, and for\nthat it would only be a naming convention (PGERR_subsys_IDENTIFIER).\nWe probably do not need it considering that we have much less than 1000\ndistinct error identifiers to assign, judging from Peter's survey.\n\nWe do need severity to be distinct from the error code (\"internal\nerrors\" are surely not all the same severity, even if we don't bother\nto assign formal error codes to each one).\n\nBTW, the symbols used in the source code do need to have a common prefix\n(PGERR_CACHELOOKUPFAIL not CACHELOOKUPFAIL) to avoid namespace pollution\nproblems. We blew this before with \"DEBUG\" and friends, let's learn\nfrom that mistake.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 19:35:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes "
},
{
"msg_contents": "> So we need some good error numbering scheme. Any ideas?\n\nSQL9x specifies some error codes, with no particular numbering scheme\nother than negative numbers indicate a problem afaicr.\n\nShouldn't we map to those where possible?\n\n - Thomas\n",
"msg_date": "Tue, 20 Mar 2001 06:01:19 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n\n> > So we need some good error numbering scheme. Any ideas?\n> \n> SQL9x specifies some error codes, with no particular numbering scheme\n> other than negative numbers indicate a problem afaicr.\n> \n> Shouldn't we map to those where possible?\n> \n\nGood point, but I guess most of the errors produced are pgsql\nspecific. If I remember right Sybase had several different SQL types of error\nmapped to one of the standard error codes. \n\nAlso the JDBC API provides methods to look at the database dependent error\ncode and standard error code. I've found both useful when working with\nSybase. \n\ncheers, \n\n\tGunnar\n",
"msg_date": "20 Mar 2001 14:39:55 +0100",
"msg_from": "Gunnar R|nning <gunnar@candleweb.no>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "Philip Warner writes:\n\n> elog(CACHELOOKUPFAIL, cacheItemThatFailed);\n\nThe disadvantage of this approach, which I tried to explain in a previous\nmessage, is that we might want to have different wordings for different\noccurences of the same class of error.\n\nAdditionally, the whole idea behind having error *codes* is that the\nclient program can easily distinguish errors that it can handle specially.\nThus the codes should be numeric or some other short, fixed scheme. In\nthe backend they could be replaced by macros.\n\nExample:\n\n#define PGERR_TYPE 1854\n\n/* somewhere... */\n\nelogc(ERROR, PGERR_TYPE, \"type %s cannot be created because it already exists\", ...)\n\n/* elsewhere... */\n\nelogc(ERROR, PGERR_TYPE, \"type %s used as argument %d of function %s doesn't exist\", ...)\n\n\nIn fact, this is my proposal. The \"1854\" can be argued, but I like the\nrest.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 20 Mar 2001 17:35:42 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "On Tue, 20 Mar 2001 10:56, you wrote:\n> I've looked at the elog calls in the source, about 1700 in total (only\n\n[ ... ]\n\n> So we need some good error numbering scheme. Any ideas?\n\nJust that it might be a good idea to incorporate the version / release \ndetails in some way so that when somebody on the list is squeaking about \nan error message it is obvious to the helper that the advice needed is to \nupgrade from the Cretatious Period version to a modern release, and have \nanother go.\n\n-- \nSincerely etc.,\n\n NAME Christopher Sawtell\n CELL PHONE 021 257 4451\n ICQ UIN 45863470\n EMAIL csawtell @ xtra . co . nz\n CNOTES ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz\n\n -->> Please refrain from using HTML or WORD attachments in e-mails to me \n<<--\n\n",
"msg_date": "Wed, 21 Mar 2001 09:41:44 +1200",
"msg_from": "Christopher Sawtell <csawtell@xtra.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "On Wed, Mar 21, 2001 at 09:41:44AM +1200, Christopher Sawtell wrote:\n> On Tue, 20 Mar 2001 10:56, you wrote:\n> \n> Just that it might be a good idea to incorporate the version / release \n> details in some way so that when somebody on the list is squeaking about \n> an error message it is obvious to the helper that the advice needed is to \n> upgrade from the Cretatious Period version to a modern release, and have \n\nROFL - parsed this as Cretinous period on the first pass.\n\nRoss\n",
"msg_date": "Tue, 20 Mar 2001 16:10:57 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "At 17:35 20/03/01 +0100, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> elog(CACHELOOKUPFAIL, cacheItemThatFailed);\n>\n>The disadvantage of this approach, which I tried to explain in a previous\n>message, is that we might want to have different wordings for different\n>occurences of the same class of error.\n>\n>Additionally, the whole idea behind having error *codes* is that the\n>client program can easily distinguish errors that it can handle specially.\n>Thus the codes should be numeric or some other short, fixed scheme. In\n>the backend they could be replaced by macros.\n\nThis seems to be just an argument for constructing the value of\nPGERR_CACHELOOKUPFAIL carefully (which is what the VMS message source files\ndid). The point is that when they are used by a developer, they are simple.\n\n\n\n>#define PGERR_TYPE 1854\n>\n>/* somewhere... */\n>\n>elogc(ERROR, PGERR_TYPE, \"type %s cannot be created because it already\nexists\", ...)\n>\n>/* elsewhere... */\n>\n>elogc(ERROR, PGERR_TYPE, \"type %s used as argument %d of function %s\ndoesn't exist\", ...)\n>\n\nI can appreciate that there may be cases where the same message is reused,\nbut that is where parameter substitution comes in. \n\nIn the specific example above, returning the same error code is not going\nto help the client. What if they want to handle \"type %s used as argument\n%d of function %s doesn't exist\" by creating the type, and silently ignore\n\"type %s cannot be created because it already exists\"?\n\nHow do you handle \"type %s can not be used as a function return type\"? 
Is\nthis PGERR_FUNC or PGERR_TYPE?\n\nIf the motivation behind this is to allow easy translation to SQL error\ncodes, then I suggest we have an error definition file with explicit\ntranslation:\n\nCode SQL Text\nPGERR_TYPALREXI 02xxx \"type %s cannot be created because it already exists\"\nPGERR_FUNCNOTYPE 02xxx \"type %s used as argument %d of function %s doesn't\nexist\"\n\nand if we want a generic 'type does not exist', then:\n\nPGERR_NOSUCHTYPE 02xxx \"type %s does not exist - %s\"\n\nwhere the %s might contain 'it can't be used as a function argument'.\n\nthen we just have\n\nelogc(ERROR, PGERR_TYPALREXI, ...)\n\n/* elsewhere... */\n\nelogc(ERROR, PGERR_FUNCNOTYPE, ...)\n\n\nCreating central message files/objects has the added advantage of a much\nsimpler locale support - they're just resource files, and they're NOT\nembedded throughout the code.\n\nFinally, if you do want to have some kind of error classification beyond\nthe SQL code, it could be encoded in the error message file.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Mar 2001 09:43:52 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "At 09:41 21/03/01 +1200, Christopher Sawtell wrote:\n>Just that it might be a good idea to incorporate the version / release \n>details in some way so that when somebody on the list is squeaking about \n>an error message it is obvious to the helper that the advice needed is to \n>upgrade from the Cretatious Period version to a modern release, and have \n>another go.\n\nThis is better handled by the bug *reporting* system; the users can easily\nget the current version number from PG and send it with their reports. We\ndon't really want all the error codes changing between releases.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Mar 2001 09:46:55 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "At 09:43 21/03/01 +1100, Philip Warner wrote:\n>\n>Code SQL Text\n>PGERR_TYPALREXI 02xxx \"type %s cannot be created because it already exists\"\n>PGERR_FUNCNOTYPE 02xxx \"type %s used as argument %d of function %s doesn't\n>exist\"\n>\n\nPeter,\n\nJust to clarify, because in a previous email you seemed to believe that I\nwanted 'PGERR_TYPALREXI' to resolve to a string. I have no such desire; a\nmeaningful number is fine, but we should never have to type it. One\npossibility is that it is the address of an error-info function (built by\n'compiling' the message file). Another possibility is that it could be a\nprefix to several external symbols, PGERR_TYPALREXI_msg,\nPGERR_TYPALREXI_code, PGERR_TYPALREXI_num, PGERR_TYPALREXI_sqlcode etc,\nwhich are again built by compiling the message file. We can then encode\nwhatever we like into the message, have flexible text, and ease of use for\ndevelopers.\n\nHope this clarifies things...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Mar 2001 13:43:25 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "> Creating central message files/objects has the added advantage of a much\n> simpler locale support - they're just resource files, and they're NOT\n> embedded throughout the code.\n> Finally, if you do want to have some kind of error classification beyond\n> the SQL code, it could be encoded in the error message file.\n\nWe could also (automatically) build a DBMS reference table *from* this\nmessage file (or files), which would allow lookup of messages from codes\nfor applications which are not \"message-aware\".\n\nNot a requirement, and it does not meet all needs (e.g. you would have\nto be connected to get the messages in that case) but it would be\nhelpful for some use cases...\n\n - Thomas\n",
"msg_date": "Wed, 21 Mar 2001 03:28:24 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "At 03:28 21/03/01 +0000, Thomas Lockhart wrote:\n>> Creating central message files/objects has the added advantage of a much\n>> simpler locale support - they're just resource files, and they're NOT\n>> embedded throughout the code.\n>> Finally, if you do want to have some kind of error classification beyond\n>> the SQL code, it could be encoded in the error message file.\n>\n>We could also (automatically) build a DBMS reference table *from* this\n>message file (or files), which would allow lookup of messages from codes\n>for applications which are not \"message-aware\".\n>\n>Not a requirement, and it does not meet all needs (e.g. you would have\n>to be connected to get the messages in that case) but it would be\n>helpful for some use cases...\n\nIf we extended the message definitions to have (optional) description &\nuser-resolution sections, then we have the possibilty of asking psql to\nexplain the last error, and (broadly) how to fix it. Of course, in the\nfirst pass, these would all be empty.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 21 Mar 2001 14:38:21 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "Philip Warner writes:\n\n> If the motivation behind this is to alloy easy translation to SQL error\n> codes, then I suggest we have an error definition file with explicit\n> translation:\n>\n> Code SQL Text\n> PGERR_TYPALREXI 02xxx \"type %s cannot be created because it already exists\"\n> PGERR_FUNCNOTYPE 02xxx \"type %s used as argument %d of function %s doesn't\n> exist\"\n>\n> and if we want a generic 'type does not exist', then:\n>\n> PGERR_NOSUCHTYPE 02xxx \"type %s does not exist - %s\"\n>\n> where the %s might contain 'it can't be used as a function argument'.\n>\n> the we just have\n>\n> elogc(ERROR, PGERR_TYPALEXI, ...)\n>\n> /* elsewhere... */\n>\n> elogc(ERROR, PGERR_FUNCNOTYPE, ...)\n\nThis is going to be a disaster for the coder. Every time you look at an\nelog you don't know what it does? Is the first arg a %s or a %d? What's\nthe first %s, what the second? How can this be checked against bugs? (I\nknow GCC can be pretty helpful here, but does it catch all problems?)\n\nConversely, when you look at the error message you don't know from what\ncontexts it's called. The error messages will degrade rapidly in quality\nbecause changing one will become a major project.\n\n> Creating central message files/objects has the added advantage of a much\n> simpler locale support - they're just resource files, and they're NOT\n> embedded throughout the code.\n\nActually, the fact that the messages are in the code, where they're used,\nand not in a catalog file is a reason why gettext is so popular and\ncatgets gets laughed at.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 21 Mar 2001 22:03:09 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "At 22:03 21/03/01 +0100, Peter Eisentraut wrote:\n>Philip Warner writes:\n>\n>> If the motivation behind this is to alloy easy translation to SQL error\n>> codes, then I suggest we have an error definition file with explicit\n>> translation:\n>>\n>> Code SQL Text\n>> PGERR_TYPALREXI 02xxx \"type %s cannot be created because it already\nexists\"\n>> PGERR_FUNCNOTYPE 02xxx \"type %s used as argument %d of function %s doesn't\n>> exist\"\n>>\n>> and if we want a generic 'type does not exist', then:\n>>\n>> PGERR_NOSUCHTYPE 02xxx \"type %s does not exist - %s\"\n>>\n>> where the %s might contain 'it can't be used as a function argument'.\n>>\n>> the we just have\n>>\n>> elogc(ERROR, PGERR_TYPALEXI, ...)\n>>\n>> /* elsewhere... */\n>>\n>> elogc(ERROR, PGERR_FUNCNOTYPE, ...)\n>\n>This is going to be a disaster for the coder. Every time you look at an\n>elog you don't know what it does? Is the first arg a %s or a %d? What's\n>the first %s, what the second?\n\n From experience using this sort of system, probably 80% of errors in new\ncode are new; if you don't know the format of your own errors, then you\nhave a larger problem. Secondly, most errors have obvious parameters, and\nit only ever gets confusing when they have more than one parameter, and\neven then it's pretty obvious. This concern was often raised by people new\nto the system, but generally turned out to be more FUD than fact.\n\n\n>How can this be checked against bugs? \n>Conversely, when you look at the error message you don't know from what\n>contexts it's called.\n\nAm I missing something here? The user gets a message like: \n\n TYPALREXI: Specified type 'fred' already exists.\n\nthen we do \n\n glimpse TYPALREXI\n\nIt is actually a lot easier than the plain text search we already have to\ndo, when we have to guess at the words that have been substituted into the\nmessage. 
Besides, in *both* proposed systems, if we have done things\nproperly, then the postgres log also contains the module name & line #.\n\n\n>The error messages will degrade rapidly in quality\n>because changing one will become a major project.\n\nChanging one will be a major project only if it is used everywhere. Most\nwill be relatively localized. And, with glimpse 'XYZ', it's not really that\nbig a task. Finally, you would need to ask why it was being changed - would\na new message work better? Tell me where the degradation in quality is in\ncomparison with text-in-the-source versions, with umpteen dozen slightly\ndifferent versions of essentially the same error messages?\n\n\n>> Creating central message files/objects has the added advantage of a much\n>> simpler locale support - they're just resource files, and they're NOT\n>> embedded throughout the code.\n>\n>Actually, the fact that the messages are in the code, where they're used,\n>and not in a catalog file is a reason why gettext is so popular and\n>catgets gets laughed at.\n\nIs there a URL for a catgets vs. gettext debate that would help me understand\nthe reason for the laughter? I can understand laughing at code that looks\nlike:\n\n elog(ERROR, 123456, typename);\n\nbut\n\n elog(ERROR, TYPALREXI, typename);\n\nis a whole lot more readable.\n\n\nAlso, you failed to address the two points below:\n\n>#define PGERR_TYPE 1854\n>\n>/* somewhere... */\n>\n>elogc(ERROR, PGERR_TYPE, \"type %s cannot be created because it already\nexists\", ...)\n>\n>/* elsewhere... */\n>\n>elogc(ERROR, PGERR_TYPE, \"type %s used as argument %d of function %s\ndoesn't exist\", ...)\n>\n\nIn the specific example above, returning the same error code is not going\nto help the client. 
What if they want to handle \"type %s used as argument\n%d of function %s doesn't exist\" by creating the type, and silently ignore\n\"type %s cannot be created because it already exists\"?\n\nHow do you handle \"type %s can not be used as a function return type\"? Is\nthis PGERR_FUNC or PGERR_TYPE?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 22 Mar 2001 12:30:19 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes"
},
{
"msg_contents": "I've pretty much got to agree with Peter on both of these points.\n\nPhilip Warner <pjw@rhyme.com.au> writes:\n> At 22:03 21/03/01 +0100, Peter Eisentraut wrote:\n>>>> elogc(ERROR, PGERR_FUNCNOTYPE, ...)\n>> \n>> This is going to be a disaster for the coder. Every time you look at an\n>> elog you don't know what it does? Is the first arg a %s or a %d? What's\n>> the first %s, what the second?\n\n>> From experience using this sort of system, probably 80% of errors in new\n> code are new; if you don't know the format of your own errors, then you\n> have a larger problem. Secondly, most errors have obvious parameters, and\n> it only ever gets confusing when they have more than one parameter, and\n> even then it's pretty obvious.\n\nThe general set of parameters might be pretty obvious, but the exact\ntype that the format string expects them to be is not so obvious. We\nhave enough ints, longs, unsigned longs, etc etc running around the\nsystem that care is required. If you look at the existing elog calls\nyou'll find quite a lot of explicit casts to make certain that the right\nthing will happen. If the format strings are not directly visible to\nthe guy writing an elog call, then errors of that kind will creep in\nmore easily.\n\n>> The error messages will degrade rapidly in quality\n>> because changing one will become a major project.\n\n> Changing one will be a major project only if it is used everywhere.\n\nI agree with Peter on this one too. Even having to edit a separate\nfile will create enough friction that people will tend to use an\nexisting string if it's even marginally appropriate. 
What I fear even\nmore is that people will simply not code error checks, especially for\n\"can't happen\" cases, because it's too much of a pain in the neck to\nregister the appropriate message.\n\nWe must not raise the cost of adding error checks significantly, or we\nwill lose the marginal checks that sometimes save our bacon by revealing\nbugs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 23:24:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes "
},
{
"msg_contents": "At 23:24 21/03/01 -0500, Tom Lane wrote:\n>I've pretty much got to agree with Peter on both of these points.\n\nDamn.\n\n\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> At 22:03 21/03/01 +0100, Peter Eisentraut wrote:\n>>>>> elogc(ERROR, PGERR_FUNCNOTYPE, ...)\n>>> \n>>> This is going to be a disaster for the coder. Every time you look at an\n>>> elog you don't know what it does? Is the first arg a %s or a %d? What's\n>>> the first %s, what the second?\n>\n>>> From experience using this sort of system, probably 80% of errors in new\n>> code are new; if you don't know the format of your own errors, then you\n>> have a larger problem. Secondly, most errors have obvious parameters, and\n>> it only ever gets confusing when they have more than one parameter, and\n>> even then it's pretty obvious.\n>\n>The general set of parameters might be pretty obvious, but the exact\n>type that the format string expects them to be is not so obvious. We\n>have enough ints, longs, unsigned longs, etc etc running around the\n>system that care is required. If you look at the existing elog calls\n>you'll find quite a lot of explicit casts to make certain that the right\n>thing will happen. If the format strings are not directly visible to\n>the guy writing an elog call, then errors of that kind will creep in\n>more easily.\n\nI agree it's more likely, but most (all?) cases can be caught by the\ncompiler. It's not ideal, but neither is having eight different versions of\nthe same message.\n\n\n>>> The error messages will degrade rapidly in quality\n>>> because changing one will become a major project.\n>\n>> Changing one will be a major project only if it is used everywhere.\n>\n>I agree with Peter on this one too. Even having to edit a separate\n>file will create enough friction that people will tend to use an\n>existing string if it's even marginally appropriate. 
What I fear even\n>more is that people will simply not code error checks, especially for\n>\"can't happen\" cases, because it's too much of a pain in the neck to\n>register the appropriate message.\n>\n>We must not raise the cost of adding error checks significantly, or we\n>will lose the marginal checks that sometimes save our bacon by revealing\n>bugs.\n\nThis is a problem, I agree - but a procedural one. We need to make\nregistering messages easy. To do this, rather than having a central message\nfile, perhaps do the following:\n\n- allow multiple message files (which can be processed to produce .h\nfiles). eg. pg_dump would have its own pg_dump_messages.xxx file.\n\n- define a message that will assume its first arg is really a format\nstring for use in the \"can't happen\" classes, and which has the SQLCODE for\n'internal error'.\n\nWe do need some central control, but by creating module-based message files\nwe can allocate number ranges easily, and we at least take a step down the\npath towards both easy locale handling and a 'big book of error codes'.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 22 Mar 2001 16:19:38 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> This is a problem, I agree - but a procedural one. We need to make\n> registering messages easy. To do this, rather than having a central message\n> file, perhaps do the following:\n\n> - allow multiple message files (which can be processed to produce .h\n> files). eg. pg_dump would have it's own pg_dump_messages.xxx file.\n\nI guess I fail to see why that's better than processing the .c files\nto extract the message strings from them.\n\nI agree that the sort of system Peter proposes doesn't have any direct\nforcing function to discourage gratuitous variations of what's basically\nthe same message. The forcing function would have to come from the\ntranslators, who will look at the extracted list of messages and\ncomplain that there are near-duplicates. Then we fix the\nnear-duplicates. Seems like no big deal.\n\nHowever, a system that uses multiple message files is also not going to\ndiscourage near-duplicates very effectively. I don't think you can have\nit both ways: if you are discouraging near-duplicates, then you are\nmaking it harder to for people to create new messages, whether\nduplicates or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 00:35:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes "
},
{
"msg_contents": "At 00:35 22/03/01 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> This is a problem, I agree - but a procedural one. We need to make\n>> registering messages easy. To do this, rather than having a central message\n>> file, perhaps do the following:\n>\n>> - allow multiple message files (which can be processed to produce .h\n>> files). eg. pg_dump would have it's own pg_dump_messages.xxx file.\n>\n>However, a system that uses multiple message files is also not going to\n>discourage near-duplicates very effectively. I don't think you can have\n>it both ways: if you are discouraging near-duplicates, then you are\n>making it harder to for people to create new messages, whether\n>duplicates or not.\n\nMany of the near duplicates are in the same, or related, code so with local\nmessage files there should be a good chance of reduced duplicates.\n\nOther advantages of a separate definition include:\n\n- Extra fields (eg. description, resolution) which could be used by client\nprograms.\n- Message IDs which can be checked by clients to detect specific errors,\nindependent of locale.\n- SQLCODE set in one place, rather than developers having to code it in\nmultiple places.\n\nThe original proposal also included a 'class' field:\n\n elogc(ERROR, PGERR_TYPE, \"type %s cannot be created because it already \n\nISTM that we will have a similar allocation problem with these. But, more\nrecent example have exluded them, so I am not sure about their status is\nPeter's plans.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 22 Mar 2001 17:40:22 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: More on elog and error codes "
}
] |
[
{
"msg_contents": "It has been brought up that elog should be able to automatically fill in\nthe file, line, and perhaps the function name where it's called, to avoid\nhaving to prefix each message with the function name by hand, which is\nquite ugly.\n\nThis is doable, but it requires a C preprocessor that can handle varargs\nmacros. Since this is required by C99 and has been available in GCC for a\nwhile, it *might* be okay to rely on this.\n\nAdditionally, C99 (and GCC for a while) would allow filling in the\nfunction name automatically.\n\nSince these would be mostly developer features, how do people feel about\nrelying on modern tools for implementing these? The bottom line seems to\nbe that without these tools it would simply not be possible.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 20 Mar 2001 00:10:43 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "elog with automatic file, line, and function"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> It has been brought up that elog should be able to automatically fill in\n> the file, line, and perhaps the function name where it's called, to avoid\n> having to prefix each message with the function name by hand, which is\n> quite ugly.\n\n> Since these would be mostly developer features, how do people feel about\n> relying on modern tools for implementing these?\n\nNot happy. A primary reason for wanting the exact location is to make\nbug reports more specific. If Joe User's copy of Postgres doesn't\nreport error location then it doesn't help me much that my copy does\n(if I could reproduce the reported failure, then gdb will tell me where\nthe elog call is...). In particular, we *cannot* remove the habit of\nmentioning the reporting routine name in the message text unless there\nis an adequate substitute in all builds.\n\n> The bottom line seems to be that without these tools it would simply\n> not be possible.\n\nSure it is, it just requires a marginal increase in ugliness, namely\ndouble parentheses:\n\n\tELOG((level, format, arg1, arg2, ...))\n\nwhich might work like\n\n#define ELOG(ARGS) (elog_setloc(__FILE__, __LINE__), elog ARGS)\n\n\n> Additionally, C99 (and GCC for a while) would allow filling in the\n> function name automatically.\n\nWe could probably treat the function name as something that's optionally\nadded to the file/line error report info if the compiler supports it.\n\nBTW, how does that work exactly? I assume it can't be a macro ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 18:23:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog with automatic file, line, and function "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> > Additionally, C99 (and GCC for a while) would allow filling in the\n> > function name automatically.\n> \n> We could probably treat the function name as something that's optionally\n> added to the file/line error report info if the compiler supports it.\n> \n> BTW, how does that work exactly? I assume it can't be a macro ...\n\nIt's a macro just like __FILE__ and __LINE__ are macros.\n\ngcc has supported __FUNCTION__ and __PRETTY_FUNCTION__ for a long time\n(the latter is the demangled version of the function name when using\nC++).\n\nIan\n",
"msg_date": "19 Mar 2001 16:33:28 -0800",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: elog with automatic file, line, and function"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> BTW, how does that work exactly? I assume it can't be a macro ...\n\n> It's a macro just like __FILE__ and __LINE__ are macros.\n\n> gcc has supported __FUNCTION__ and __PRETTY_FUNCTION__ for a long time\n> (the latter is the demangled version of the function name when using\n> C++).\n\nNow that I know the name, I can find it in the gcc docs, which clearly\nexplain that these names are not macros ;-). The preprocessor would\nhave a tough time making such a substitution.\n\nHowever, if the C99 spec has such a concept, they didn't use that name\nfor it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 19:38:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog with automatic file, line, and function "
},
{
"msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010319 18:58]:\n> Ian Lance Taylor <ian@airs.com> writes:\n> > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> >> BTW, how does that work exactly? I assume it can't be a macro ...\n> \n> > It's a macro just like __FILE__ and __LINE__ are macros.\n> \n> > gcc has supported __FUNCTION__ and __PRETTY_FUNCTION__ for a long time\n> > (the latter is the demangled version of the function name when using\n> > C++).\n> \n> Now that I know the name, I can find it in the gcc docs, which clearly\n> explain that these names are not macros ;-). The preprocessor would\n> have a tough time making such a substitution.\n> \n> However, if the C99 spec has such a concept, they didn't use that name\n> for it ...\nMy C99 compiler (SCO, UDK FS 7.1.1b), defines the following:\nPredefined names\n\nThe following identifiers are predefined as object-like macros: \n\n\n__LINE__\n The current line number as a decimal constant. \n\n__FILE__\n A string literal representing the name of the file being compiled. \n\n__DATE__\n The date of compilation as a string literal in the form ``Mmm dd\nyyyy.'' \n\n__TIME__\n The time of compilation, as a string literal in the form\n``hh:mm:ss.'' \n\n__STDC__\n The constant 1 under compilation mode -Xc, otherwise 0. \n\n__USLC__\n A positive integer constant; its definition signifies a USL C\ncompilation system. \n\nNothing for function that I can find. \n\nLER\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 19 Mar 2001 19:25:48 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: elog with automatic file, line, and function"
},
{
"msg_contents": "Larry Rosenman writes:\n > * Tom Lane <tgl@sss.pgh.pa.us> [010319 18:58]:\n > > However, if the C99 spec has such a concept, they didn't use that name\n > > for it ...\n > My C99 compiler (SCO, UDK FS 7.1.1b), defines the following:\n > Predefined names\n > \n > The following identifiers are predefined as object-like macros: \n > \n > \n > __LINE__\n > The current line number as a decimal constant. \n > \n > __FILE__\n > A string literal representing the name of the file being compiled. \n > \n > __DATE__\n > The date of compilation as a string literal in the form ``Mmm dd\n > yyyy.'' \n > \n > __TIME__\n > The time of compilation, as a string literal in the form\n > ``hh:mm:ss.'' \n > \n > __STDC__\n > The constant 1 under compilation mode -Xc, otherwise 0. \n > \n > __USLC__\n > A positive integer constant; its definition signifies a USL C\n > compilation system. \n > \n > Nothing for function that I can find.\n\nIt is called __func__ in C99 but it is not an object-like macro. The\ndifference is that it behaves as if it were declared thus.\n\n static const char __func__[] = \"function-name\";\n\nThose other identifiers can be used in this sort of way.\n\n printf(\"Error in \" __FILE__ \" at line \" __LINE__ \"\\n\");\n\nBut you've got to do something like this for __func__.\n\n printf(\"Error in %s\\n\", __func__);\n\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\npete.forman@westerngeco.com -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n",
"msg_date": "Tue, 20 Mar 2001 09:51:06 +0000",
"msg_from": "Pete Forman <pete.forman@westerngeco.com>",
"msg_from_op": false,
"msg_subject": "Re: elog with automatic file, line, and function"
},
{
"msg_contents": "* Pete Forman <pete.forman@westerngeco.com> [010320 04:22]:\n> Larry Rosenman writes:\n> > * Tom Lane <tgl@sss.pgh.pa.us> [010319 18:58]:\n> > > However, if the C99 spec has such a concept, they didn't use that name\n> > > for it ...\n> > My C99 compiler (SCO, UDK FS 7.1.1b), defines the following:\n> > Predefined names\n> > \n> > The following identifiers are predefined as object-like macros: \n> > \n> > \n> > __LINE__\n> > The current line number as a decimal constant. \n> > \n> > __FILE__\n> > A string literal representing the name of the file being compiled. \n> > \n> > __DATE__\n> > The date of compilation as a string literal in the form ``Mmm dd\n> > yyyy.'' \n> > \n> > __TIME__\n> > The time of compilation, as a string literal in the form\n> > ``hh:mm:ss.'' \n> > \n> > __STDC__\n> > The constant 1 under compilation mode -Xc, otherwise 0. \n> > \n> > __USLC__\n> > A positive integer constant; its definition signifies a USL C\n> > compilation system. \n> > \n> > Nothing for function that I can find.\n> \n> It is called __func__ in C99 but it is not an object-like macro. The\n> difference is that it behaves as if it were declared thus.\n> \n> static const char __func__[] = \"function-name\";\n> \n> Those other identifiers can be used in this sort of way.\n> \n> printf(\"Error in \" __FILE__ \" at line \" __LINE__ \"\\n\");\n> \n> But you've got to do something like this for __func__.\n> \n> printf(\"Error in %s\\n\", __func__);\n> \nI couldn't find it in the docs, but it is in the compiler. 
\n\nWierd.\n\nI'll look more.\n\nLER\n\n> -- \n> Pete Forman -./\\.- Disclaimer: This post is originated\n> WesternGeco -./\\.- by myself and does not represent\n> pete.forman@westerngeco.com -./\\.- opinion of Schlumberger, Baker\n> http://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 20 Mar 2001 07:31:30 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: elog with automatic file, line, and function"
},
{
"msg_contents": "Tom Lane writes:\n\n> Sure it is, it just requires a marginal increase in ugliness, namely\n> double parentheses:\n>\n> \tELOG((level, format, arg1, arg2, ...))\n>\n> which might work like\n>\n> #define ELOG(ARGS) (elog_setloc(__FILE__, __LINE__), elog ARGS)\n\nWould the first function save the data in global variables?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 21 Mar 2001 21:57:04 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: elog with automatic file, line, and function "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> #define ELOG(ARGS) (elog_setloc(__FILE__, __LINE__), elog ARGS)\n\n> Would the first function save the data in global variables?\n\nYes, that's what I was envisioning. Not a super clean solution,\nbut workable, and better than requiring varargs macros.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 21:54:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: elog with automatic file, line, and function "
}
] |
[
{
"msg_contents": "> \"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> > Anyway, deadlock in my tests are very correlated with new log file\n> > creation - something probably is still wrong...\n> \n> Well, if you can reproduce it easily, seems like you could \n> get in there and verify or disprove my theory about where\n> the deadlock is.\n\nYou were right - deadlock disappeared.\n\nBTW, I've got ~320tps with 50 clients inserting (int4, text[1-256])\nrecords into 50 tables (-B 16384, wal_buffers = 256) on Ultra10\nwith 512Mb RAM, IDE (clients run on the same host as server).\n\nVadim\n",
"msg_date": "Mon, 19 Mar 2001 16:59:57 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Stuck spins in current "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> \"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> Anyway, deadlock in my tests are very correlated with new log file\n> creation - something probably is still wrong...\n>> \n>> Well, if you can reproduce it easily, seems like you could \n>> get in there and verify or disprove my theory about where\n>> the deadlock is.\n\n> You were right - deadlock disappeared.\n\nOkay, good. I'll bet the correlation to new-log-file was just because\nthe WAL insert_lck gets held for a longer time than usual if XLogInsert\nis forced to call XLogWrite and that in turn is forced to make a new\nlog file. Were you running with wal_files = 0? The problem would\nlikely not have shown up at all if logfiles were created in advance...\n\n> BTW, I've got ~320tps with 50 clients inserting (int4, text[1-256])\n> records into 50 tables (-B 16384, wal_buffers = 256) on Ultra10\n> with 512Mb RAM, IDE (clients run on the same host as server).\n\nNot bad. What were you getting before these recent changes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Mar 2001 20:08:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
},
{
"msg_contents": "> > BTW, I've got ~320tps with 50 clients inserting (int4, text[1-256])\n> > records into 50 tables (-B 16384, wal_buffers = 256) on Ultra10\n> > with 512Mb RAM, IDE (clients run on the same host as server).\n> \n> Not bad. What were you getting before these recent changes?\n\nAs I already reported - with O_DSYNC this test shows 30% better\nperformance than with fsync.\n\n(BTW, seems in all my tests I was using -O0 flag...)\n\nVadim\n\n\n",
"msg_date": "Wed, 21 Mar 2001 01:46:23 -0800",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current "
},
{
"msg_contents": "> > > BTW, I've got ~320tps with 50 clients inserting (int4, text[1-256])\n> > > records into 50 tables (-B 16384, wal_buffers = 256) on Ultra10\n> > > with 512Mb RAM, IDE (clients run on the same host as server).\n> > \n> > Not bad. What were you getting before these recent changes?\n> \n> As I already reported - with O_DSYNC this test shows 30% better\n> performance than with fsync.\n> \n> (BTW, seems in all my tests I was using -O0 flag...)\n\nGood data point. I could never understand why we would ever use the\nnormal sync if we had a data-only sync option available. I can imagine\nthe data-only being the same, but never slower.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 08:25:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stuck spins in current"
}
] |
[
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> It has been brought up that elog should be able to automatically fill in\n> the file, line, and perhaps the function name where it's called, to avoid\n> having to prefix each message with the function name by hand, which is\n> quite ugly.\n> \n> This is doable, but it requires a C preprocessor that can handle varargs\n> macros. Since this is required by C99 and has been available in GCC for a\n> while, it *might* be okay to rely on this.\n>\n> Additionally, C99 (and GCC for a while) would allow filling in the\n> function name automatically.\n> \n> Since these would be mostly developer features, how do people feel about\n> relying on modern tools for implementing these? The bottom line seems to\n> be that without these tools it would simply not be possible.\n\nIt is possible, however, the macros require an extra set of parentheses:\n\nvoid elog_internal(const char* file, unsigned long line, ... );\n#define ELOG(args) elog_internal(__FILE__, __LINE__, args)\n\nELOG((\"%s error\", string))\n\nFor portability to older compilers, you should probably not require C99.\n\nAlso, I'm not positive, but I think that varargs are not part of C++\nyet. \nHowever, they will likely be added (if not already in draft form).\n\nNeal\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 19 Mar 2001 20:44:11 -0500",
"msg_from": "Neal Norwitz <nnorwitz@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: elog with automatic file, line, and function"
}
] |
[
{
"msg_contents": "Stephen van Egmond (svanegmond@home.com) reports a bug with a severity of 3\nThe lower the number the more severe it is.\n\nShort Description\ncomments on columns aren't displayed by \\dd\n\nLong Description\nComments on columns appear to be broken. CREATE COMMENT ON COLUMN inserts stuff into the backend pg_* tables correctly. However \\dd from the psql interactive monitor doesn't send a query to retrieve it (i.e. no join on pg_attribute).\n\nI am working from the Debian package postgresql-client 7.0.3-2.\n\nSample Code\ncreate table foo (a text);\ncmment on column foo.a is 'hello!';\n\\dd foo.a\n\nNo file was uploaded with this report\n\n",
"msg_date": "Tue, 20 Mar 2001 00:15:28 -0500 (EST)",
"msg_from": "pgsql-bugs@postgresql.org",
"msg_from_op": true,
"msg_subject": "comments on columns aren't displayed by \\dd"
},
{
"msg_contents": "Psql manual pages says:\n\n The command form \\d+ is identical, but any comments\n associated with the table columns are shown as\n well.\n\n> Stephen van Egmond (svanegmond@home.com) reports a bug with a severity of 3\n> The lower the number the more severe it is.\n> \n> Short Description\n> comments on columns aren't displayed by \\dd\n> \n> Long Description\n> Comments on columns appear to be broken. CREATE COMMENT ON COLUMN inserts stuff into the backend pg_* tables correctly. However \\dd from the psql interactive monitor doesn't send a query to retrieve it (i.e. no join on pg_attribute).\n> \n> I am working from the Debian package postgresql-client 7.0.3-2.\n> \n> Sample Code\n> create table foo (a text);\n> cmment on column foo.a is 'hello!';\n> \\dd foo.a\n> \n> No file was uploaded with this report\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 00:23:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: comments on columns aren't displayed by \\dd"
},
{
"msg_contents": "Bruce Momjian (pgman@candle.pha.pa.us) wrote:\n> Psql manual pages says:\n> \n> The command form \\d+ is identical, but any comments\n> associated with the table columns are shown as\n> well.\n\nThank you for the response, Bruce. I was working from the HTML docs\nfor COMMENT ON [1] where only \\dd is mentioned. \\d+ doesn't occur in the \n\\? psql help either.\n\n-Steve\n [1] http://www.postgresql.org/users-lounge/docs/7.0/postgres/sql-comment.htm\n",
"msg_date": "Tue, 20 Mar 2001 00:56:25 -0500",
"msg_from": "Stephen van Egmond <svanegmond@bang.dhs.org>",
"msg_from_op": false,
"msg_subject": "Re: comments on columns aren't displayed by \\dd"
},
{
"msg_contents": "> Bruce Momjian (pgman@candle.pha.pa.us) wrote:\n> > Psql manual pages says:\n> > \n> > The command form \\d+ is identical, but any comments\n> > associated with the table columns are shown as\n> > well.\n> \n> Thank you for the response, Bruce. I was working from the HTML docs\n> for COMMENT ON [1] where only \\dd is mentioned. \\d+ doesn't occur in the \n> \\? psql help either.\n> \n> -Steve\n> [1] http://www.postgresql.org/users-lounge/docs/7.0/postgres/sql-comment.htm\n\nThe + options to psql are sort of COMMENT additions to the backslash\ncommands.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 00:56:48 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: comments on columns aren't displayed by \\dd"
},
{
"msg_contents": "Hi all,\n\nI'm just wondering if this is an error on my part, or a bug. I have the\nsame trouble with PG 7.1beta6 and PG7.1 snapshot (March 8th) on Solaris\n8 INTEL, Solaris 8 SPARC and Linux Mandrake 7.2.\n\nWhen using the libpqeasy library in a C function, I have the following\nsection of code :\n\n // Get the sequence number for the next directory entry (PostgreSQL\ncommands)\n doquery(\"BEGIN WORK\");\n doquery(\"DECLARE c_getdirid BINARY CURSOR FOR SELECT\nnextval('prescan_directories_idnum_seq'::text)\");\n doquery(\"FETCH ALL IN c_getdirid\");\n fetch(&enumdirstruc_p->presentdirid);\n doquery(\"CLOSE c_getdirid\");\n doquery(\"COMMIT WORK\");\n\nThis is called once per entry in a filesystem (this is a filesystem\nscanning utility) but after about 1000 or so calls, it errors out and\nwon't work again. I have to actually DROP the database and re-create it\nagain before the code will work again at all. Just vacumming doesn't\nhelp, nor does just shutting down the database and starting it again\n(doing both and vacuum and restarting the database doesn't help either).\n\nThe error message is :\n\n<list of files correctly inserted so far, then>\n/archive/install/kde/kdeadmin-2.1/ksysctrl/.cvsignore\nNOTICE: PerformPortalFetch: portal \"c_getdirid\" not found\nNOTICE: PerformPortalClose: portal \"c_getdirid\" not found\nDirectory query failed, trying again...New directory idnum = -2147483648\n(This is my error message from the program)\nquery error:\nfailed request: insert into prescan_files(filename, dirent, ownername,\nowenerid, groupname, groupid, filesize, os, os_version, package_id)\nvalues ('/archive/install/kde/kdeadmin-2.1/add-on/.cvsignore',\n2147483648, 'jclift', 100, 'staff', 10, 21, 1, '8 INTEL', 16777216)\n$\n\nI can include the database schema and complete source code if needed,\nbut I'm just not sure where to start debugging... is it my app or is it\nPostgreSQL?\n\nRegards and best wishes,\n\nJustin Clift\n",
"msg_date": "Tue, 20 Mar 2001 20:54:21 +1100",
"msg_from": "Justin Clift <jclift@iprimus.com.au>",
"msg_from_op": false,
"msg_subject": "libpqeasy cursor error after multiple calls"
},
{
"msg_contents": "\nI am kind of stumped. Glad to see _someone_ is using libpgeasy. :-)\n\nI would be glad to run tests here if you can shoot over the code.\n\n\n> Hi all,\n> \n> I'm just wondering if this is an error on my part, or a bug. I have the\n> same trouble with PG 7.1beta6 and PG7.1 snapshot (March 8th) on Solaris\n> 8 INTEL, Solaris 8 SPARC and Linux Mandrake 7.2.\n> \n> When using the libpqeasy library in a C function, I have the following\n> section of code :\n> \n> // Get the sequence number for the next directory entry (PostgreSQL\n> commands)\n> doquery(\"BEGIN WORK\");\n> doquery(\"DECLARE c_getdirid BINARY CURSOR FOR SELECT\n> nextval('prescan_directories_idnum_seq'::text)\");\n> doquery(\"FETCH ALL IN c_getdirid\");\n> fetch(&enumdirstruc_p->presentdirid);\n> doquery(\"CLOSE c_getdirid\");\n> doquery(\"COMMIT WORK\");\n> \n> This is called once per entry in a filesystem (this is a filesystem\n> scanning utility) but after about 1000 or so calls, it errors out and\n> won't work again. I have to actually DROP the database and re-create it\n> again before the code will work again at all. 
Just vacumming doesn't\n> help, nor does just shutting down the database and starting it again\n> (doing both and vacuum and restarting the database doesn't help either).\n> \n> The error message is :\n> \n> <list of files correctly inserted so far, then>\n> /archive/install/kde/kdeadmin-2.1/ksysctrl/.cvsignore\n> NOTICE: PerformPortalFetch: portal \"c_getdirid\" not found\n> NOTICE: PerformPortalClose: portal \"c_getdirid\" not found\n> Directory query failed, trying again...New directory idnum = -2147483648\n> (This is my error message from the program)\n> query error:\n> failed request: insert into prescan_files(filename, dirent, ownername,\n> owenerid, groupname, groupid, filesize, os, os_version, package_id)\n> values ('/archive/install/kde/kdeadmin-2.1/add-on/.cvsignore',\n> 2147483648, 'jclift', 100, 'staff', 10, 21, 1, '8 INTEL', 16777216)\n> $\n> \n> I can include the database schema and complete source code if needed,\n> but I'm just not sure where to start debugging... is it my app or is it\n> PostgreSQL?\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 00:34:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpqeasy cursor error after multiple calls"
},
{
"msg_contents": "I don't know but it may be that you're trying to insert a number larger than\nmaxint?\n\nie: 2147483648\n\n???\n\nChris\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Justin Clift\nSent: Tuesday, 20 March 2001 5:54 PM\nTo: pgsql-hackers@postgresql.org\nSubject: [HACKERS] libpqeasy cursor error after multiple calls\n\n\nHi all,\n\nI'm just wondering if this is an error on my part, or a bug. I have the\nsame trouble with PG 7.1beta6 and PG7.1 snapshot (March 8th) on Solaris\n8 INTEL, Solaris 8 SPARC and Linux Mandrake 7.2.\n\nWhen using the libpqeasy library in a C function, I have the following\nsection of code :\n\n // Get the sequence number for the next directory entry (PostgreSQL\ncommands)\n doquery(\"BEGIN WORK\");\n doquery(\"DECLARE c_getdirid BINARY CURSOR FOR SELECT\nnextval('prescan_directories_idnum_seq'::text)\");\n doquery(\"FETCH ALL IN c_getdirid\");\n fetch(&enumdirstruc_p->presentdirid);\n doquery(\"CLOSE c_getdirid\");\n doquery(\"COMMIT WORK\");\n\nThis is called once per entry in a filesystem (this is a filesystem\nscanning utility) but after about 1000 or so calls, it errors out and\nwon't work again. I have to actually DROP the database and re-create it\nagain before the code will work again at all. 
Just vacumming doesn't\nhelp, nor does just shutting down the database and starting it again\n(doing both and vacuum and restarting the database doesn't help either).\n\nThe error message is :\n\n<list of files correctly inserted so far, then>\n/archive/install/kde/kdeadmin-2.1/ksysctrl/.cvsignore\nNOTICE: PerformPortalFetch: portal \"c_getdirid\" not found\nNOTICE: PerformPortalClose: portal \"c_getdirid\" not found\nDirectory query failed, trying again...New directory idnum = -2147483648\n(This is my error message from the program)\nquery error:\nfailed request: insert into prescan_files(filename, dirent, ownername,\nowenerid, groupname, groupid, filesize, os, os_version, package_id)\nvalues ('/archive/install/kde/kdeadmin-2.1/add-on/.cvsignore',\n2147483648, 'jclift', 100, 'staff', 10, 21, 1, '8 INTEL', 16777216)\n$\n\nI can include the database schema and complete source code if needed,\nbut I'm just not sure where to start debugging... is it my app or is it\nPostgreSQL?\n\nRegards and best wishes,\n\nJustin Clift\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n",
"msg_date": "Wed, 21 Mar 2001 14:54:00 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: libpqeasy cursor error after multiple calls"
}
] |
[
{
"msg_contents": "Hi,\n\nI want to ask question:\n\ncan i write my own concurrency control algorithm and\napply it using postgresql?\n\nThanks in advance\nManal\n\n__________________________________________________\nDo You Yahoo!?\nGet email at your own domain with Yahoo! Mail. \nhttp://personal.mail.yahoo.com/\n",
"msg_date": "Mon, 19 Mar 2001 23:57:22 -0800 (PST)",
"msg_from": "Manal S <manal1_s@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Question"
}
] |
[
{
"msg_contents": "\n> > Nothing for function that I can find.\n> \n> It is called __func__ in C99 but it is not an object-like macro. The\n> difference is that it behaves as if it were declared thus.\n\nAIX xlc has __FUNCTION__, but unfortunately no __func__ or __PRETTY...\nIt outputs the full demagled funcname with __FUNCTION__ (like __PRETTY...). \n\nI do not think it would be appropriate to send file, line and func infos to the \nclient though.\n\nAndreas\n",
"msg_date": "Tue, 20 Mar 2001 12:51:14 +0100",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: elog with automatic file, line, and function"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> I do not think it would be appropriate to send file, line and func\n> infos to the client though.\n\nWe still need to work out the details, but my first thought would be to\nmake this conditional on the value of some SET variable. Also, probably\nthe info should always be recorded in the postmaster log.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 10:21:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: elog with automatic file, line, and function "
}
] |
[
{
"msg_contents": "\n> > So we need some good error numbering scheme. Any ideas?\n> \n> SQL9x specifies some error codes, with no particular numbering scheme\n> other than negative numbers indicate a problem afaicr.\n> \n> Shouldn't we map to those where possible?\n\nYes, it defines at least a few dozen char(5) error codes. These are hierarchical, \ngrouped into Warnings and Errors, and have room for implementation specific \nmessage codes.\nImho there is no room for inventing something new here, or only in addition.\n\nAndreas\n",
"msg_date": "Tue, 20 Mar 2001 17:28:51 +0100",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: More on elog and error codes"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n\n> > SQL9x specifies some error codes, with no particular numbering scheme\n> > other than negative numbers indicate a problem afaicr.\n> >\n> > Shouldn't we map to those where possible?\n>\n> Yes, it defines at least a few dozen char(5) error codes. These are hierarchical,\n> grouped into Warnings and Errors, and have room for implementation specific\n> message codes.\n\nLet's use those then to start with.\n\nAnyone got a good idea for a client API to this? I think we could just\nprefix the actual message with the error code, at least as a start.\nSince they're all fixed width the client could take them apart easily. I\nrecall other RDBMS' (Oracle?) also having an error code before each\nmessage.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 20 Mar 2001 17:53:42 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: More on elog and error codes"
},
{
"msg_contents": "Coming from an IBM Mainframe background, I'm used to ALL OS/Product \nmessages having a message number, and a fat messages and codes book.\n\nI hope we can do that eventually. \n(maybe a database of the error numbers and codes?)\n\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/20/01, 10:53:42 AM, Peter Eisentraut <peter_e@gmx.net> wrote regarding \nRe: AW: [HACKERS] Re: More on elog and error codes:\n\n\n> Zeugswetter Andreas SB writes:\n\n> > > SQL9x specifies some error codes, with no particular numbering scheme\n> > > other than negative numbers indicate a problem afaicr.\n> > >\n> > > Shouldn't we map to those where possible?\n> >\n> > Yes, it defines at least a few dozen char(5) error codes. These are \nhierarchical,\n> > grouped into Warnings and Errors, and have room for implementation \nspecific\n> > message codes.\n\n> Let's use those then to start with.\n\n> Anyone got a good idea for a client API to this? I think we could just\n> prefix the actual message with the error code, at least as a start.\n> Since they're all fixed width the client could take them apart easily. I\n> recall other RDBMS' (Oracle?) also having an error code before each\n> message.\n\n> --\n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Tue, 20 Mar 2001 16:57:38 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: More on elog and error codes"
},
{
"msg_contents": "> So we need some good error numbering scheme. Any ideas?\n\nI'm a newbie, but have been following dev and have a few comments\nand these are thoughts not criticisms:\n\n1) I've seen a huge mixture of \"how to implement\" to support some\n desired feature without first knowing \"all\" of the features that\n are desired. Examination over all of the mailings reveals some\n but not all of possible features you may want to include.\n2) Define what you want to have without worrying about how to do it.\n3) Design something that can implement all of the features.\n4) Reconsider design if there are performance issues.\n\ne.g.\n\nFeatures desired\n* system\n* subsystem\n* function\n* file, line, etc\n* severity\n* user-ability-to-recover\n* standards conformance - e.g.. SQL std\n* default msg statement\n* locale msg statement lookup mech, os dep or indep (careful here)\n* success/warning/failure\n* semantic taxonomy\n* syntactic taxonomy\n* forced to user, available to api, logging or not, tracing\n* concept of level\n* reports filtering on some attribute\n* interoperation with existing system reports e.g. syslog, event log,...\n* system environment snapshot option\n (e.g. resource low/empty may/should trigger a log of conn cnt,\n sys resource counts, load, etc)\n* non-mnemonic internal numbers (mnemonic only to obey stds and then\n only as a function call, not by implementation)\n* ease of use (i.e. pgsql-dev-hacker use)\n* ease of use (i.e. api development use)\n* ease of use (i.e. rolling into an existing system, e.g. during\n transition both may need to be in use.)\n* ease of use (i.e. looking through existing errors to find one\n that may \"correctly\" fit the situation, instead of\n creating yet-another-error-message.)\n* ease of use (i.e. maybe having each \"sub-system\" having its own\n \"error domain\" but using the same error mechanism)\n* distinction btwn error report, debug report, tracing report, etc\n* separate the concepts of\n - report creation\n - report delivery\n - report reception\n - report interpretation\n* what do other's do, other's as in os, db, middleware, etc\n along with their strong and weak points\n... what else do you want... and lets flesh out the meaning of\neach of these. Then we can go on to a design...\n\nSorry if this sounds like a lecture.\n\nWith regards to mnemonic things - ugh - this is a database.\nI've worked with a LARGE electronics company that had\n10 and 12 digit mnemonic part numbers. The mnemonic-ness\nbegins to break down. (So you have a part number of an eprom,\nwhat is the part number when it is blown - still an eprom?\nhow about including the version of the sw on the eprom? is it\nnow an assembly? opps that tended to mean multiple parts attached\ntogether, humm still looks like an eprom?) They have gone through\na huge transition to move away, as has the industry from mnemonic\nnumbers to simply an id number. You look up the id number in a\n>database< :-) to find out what it is.\n\nSo why not drop the mnemonic concept and apply a function to a\nblackbox dataitem to determine its attribute? But again first\ndetermine what attributes you want, which are mandatory, optional,\nsystem supplied (e.g. __LINE__ etc), is it for erroring, tracing,\ndebugging, some combo; then the appropriate dataitem can be\ndesigned and functions defined. Functions (macros) for both the\nreport creation, report distribution, report reception, and\nreport interpretation. Some other email pointed out that\nthere are different people doing different things. Each of these\npeople-groups should identify what they need with regards to\nerror, debug, tracing reports. Each may have some nuances that\nare not needed elsewhere, but the reporting system should be able\nto support them all.\n\nOk, so I've got my flame suit on... but I am really trying to give\nan \"outsiders\" birdseye view of what I've been reading, hopefully\nwhich may be helpful.\n\nBest regards,\n\n.. Otto\n\nOtto Hirr\nOLAB Inc.\notto.hirr@olabinc.com\n503 / 617-6595\n\n",
"msg_date": "Tue, 20 Mar 2001 13:48:49 -0800",
"msg_from": "\"Otto A. Hirr, Jr.\" <otto.hirr@olabinc.com>",
"msg_from_op": false,
"msg_subject": "RE: Re: More on elog and error codes"
}
] |
[
{
"msg_contents": "> #define PGERR_TYPE 1854\n\n#define PGSQLSTATE_TYPE\t\"S0021\" // char(5) SQLSTATE \n\nThe standard calls this error variable SQLSTATE \n(look up in ESQL standard)\n\nfirst 2 chars are class next 3 are subclass\n\n\"00000\" is e.g. Success \n\"02000\" is Data not found\n\"U0xxx\" user defined routine error xxx is user defined\n\n> /* somewhere... */\n> \n> elogc(ERROR, PGERR_TYPE, \"type %s cannot be created because it already exists\", ...)\n\nPGELOG(ERROR, PGSQLSTATE_TYPE, (\"type %s cannot be created because it already exists\", ...))\n\nput varargs into parentheses to avoid need for ... macros see Tom's proposal\n\nI also agree, that we can group different text messages into the same SQLSTATE,\nif it seems appropriate for the client to handle them alike.\n\nAndreas\n",
"msg_date": "Tue, 20 Mar 2001 17:53:47 +0100",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: More on elog and error codes"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> PGELOG(ERROR, PGSQLSTATE_TYPE, (\"type %s cannot be created because it already exists\", ...))\n\n> put varargs into parentheses to avoid need for ... macros see Tom's proposal\n\nI'd be inclined to make it\n\nPGELOG((ERROR, PGSQLSTATE_TYPE, \"type %s cannot be created because it already exists\", ...))\n\nThe extra parens are ugly and annoying in any case, but they seem\nslightly less so if you just double the parens associated with the\nPGELOG call. Takes less thought than adding a paren somewhere in the\nmiddle of the call. IMHO anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 12:29:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: More on elog and error codes "
}
] |
[
{
"msg_contents": "I was configuring postmaster to allow more buffers to be used (1250 of them) and\nonce that change was made, postmaster would no longer allow connections. I have\nsince removed the option and it still does the same thing. I added -N 33 to it\nto see if she would recover to no avail...\n\nany ideas?\n\n Chris Bowlby, \n ----------------------------------------------------- \n Web Developer @ Hub.org.\n excalibur@hub.org\n www.hub.org\n 1-902-542-3657 \n -----------------------------------------------------\n",
"msg_date": "Tue, 20 Mar 2001 13:54:38 -0500 (EST)",
"msg_from": "Chris Bowlby <excalibur@hub.org>",
"msg_from_op": true,
"msg_subject": "Client can not connect to socket..."
}
] |
[
{
"msg_contents": "> Hmm ... so you think the people who have complained of this are all\n> working with databases that have suffered previous crash corruption?\n> I doubt it. There's too much consistency to the reports: in\n> particular, it's generally triggered by creation of lots of large\n> objects, and it's always the indexes on pg_attribute,\n> never any other table (even though large object creation inserts into\n> several system tables). I don't see how the unfinished-split hypothesis\n> explains that.\n\nI saw this error after PG crashes and power off in my employer's project\nwhere large objects were not used. As for pg_attribute - PG inserts\ninto this table more rows than into others => more splits => higher\nprobability of unfinished split in the event of crash.\n\n> My thought was that it is somehow related to the many-equal-keys issues\n> that we had in 7.0.* and before, and/or the poor behavior for purely\n\npg_attribute_relid_attnum_index is a unique index, so I doubt that\nthe \"many-equal-keys issue\" is related to subj.\n\n> sequential key insertion that we still have. But without a test case\n> it's hard to be sure.\n\nThis is a hypothesis and we don't know how to test it. But unfinished splits\nare not a hypothesis. They *obviously* may cause the \"my bits moved right off the\nend of the world\" error and we can test this very easily.\n\nVadim\n",
"msg_date": "Tue, 20 Mar 2001 10:59:16 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: PostgreSQL; Strange error "
}
] |
[
{
"msg_contents": "> \tWe'd like to wrap up an RC1 and get this release happening\n> this year sometime :) Tom mentioned to me that he has no\n> outstandings left on his plate ... does anyone else have any\n> *show stoppers* left that need to be addressed, or can I package\n> things up?\n\nI wonder if anybody tried to stop PG in the time of high write\nactivity with pg_ctl -m immediate stop or power off to see\nhow recovery works..?\n\nVadim\n",
"msg_date": "Tue, 20 Mar 2001 11:01:18 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Final Call: RC1 about to go out the door ..."
}
] |
[
{
"msg_contents": "> And still no LAZY vacuum. *sigh*\n\nPatch will be available in a few days after release.\nSorry, Alfred.\n\nVadim\n",
"msg_date": "Tue, 20 Mar 2001 11:02:23 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Final Call: RC1 about to go out the door ..."
}
] |
[
{
"msg_contents": "Added to TODO:\n\n\t* Make elog(LOG) in WAL its own output type, distinct from DEBUG\n\t* Delay fsync() when other backends are about to commit too [fsync]\n\t * Determine optimal commit_delay value\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 14:56:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Added to TODO"
}
] |
[
{
"msg_contents": "Cedar Cox <cedarc@visionforisrael.com> writes:\n> Added note: The trigger is a BEFORE trigger.\n\nAFAIK the \"triggered data change\" message comes out of the AFTER trigger\ncode. You sure you don't have any AFTER triggers on the table? Perhaps\nones added implicitly by a foreign-key constraint?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 15:39:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] triggered data change violation "
},
{
"msg_contents": "\nAdded note: The trigger is a BEFORE trigger.\n\n---------- Forwarded message ----------\nTo: pgsql-interfaces@postgresql.org\nDate: Tue, 20 Mar 2001 20:43:59 +0200 (IST)\nSubject: triggered data change violation\n\n\nERROR: triggered data change violation on relation \"tblstsc2options\"\n\nWhat is this? It doesn't happen unless I'm in a transaction. I'm\nINSERTing a record and then DELETEing it (in the same transaction) and on\ndelete I get this error. If I commit and begin a new transaction before\nthe delete everything is fine. Is it something my trigger is causing? I\ndon't have any UPDATE, INSERT, or DELETE statements in my trigger (and I\nam returning old on delete).\n\nThanks,\n-Cedar\n\n",
"msg_date": "Tue, 20 Mar 2001 23:13:14 +0200 (IST)",
"msg_from": "Cedar Cox <cedarc@visionforisrael.com>",
"msg_from_op": false,
"msg_subject": "triggered data change violation "
},
{
"msg_contents": "Tom Lane writes:\n\n> Cedar Cox <cedarc@visionforisrael.com> writes:\n> > Added note: The trigger is a BEFORE trigger.\n>\n> AFAIK the \"triggered data change\" message comes out of the AFTER trigger\n> code. You sure you don't have any AFTER triggers on the table? Perhaps\n> ones added implicitly by a foreign-key constraint?\n\nA \"triggered data change violation\" happens everytime you change twice\nwithin a transaction a value (column) that is part of a foreign key\nconstraint (don't recall exactly which part).\n\nThis error shouldn't really happen, but I recall there were some\nimplementation and definition problems with deferred constraints.\n\n...FAQ alert...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Tue, 20 Mar 2001 22:13:35 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] triggered data change violation "
},
{
"msg_contents": "Cedar Cox <cedarc@visionforisrael.com> writes:\n>> AFAIK the \"triggered data change\" message comes out of the AFTER trigger\n>> code. You sure you don't have any AFTER triggers on the table? Perhaps\n>> ones added implicitly by a foreign-key constraint?\n\n> Not any that I wrote. Ok, the table def is:\n\n> CREATE TABLE tblStSC2Options (\n> SC2OptionID int4 NOT NULL,\n> SC2OptionName character varying(50) NOT NULL CHECK (SC2OptionName<>''),\n> SC2OptionValue float4 CHECK (SC2OptionValue>0),\n> SurID character varying(50) NOT NULL REFERENCES tblStSC2 ON UPDATE \n> CASCADE ON DELETE CASCADE,\n ^^^^^^^^^^^^^^^^^^^\n\nSure looks like a foreign key to me. If you dump the table definition\nwith pg_dump you'll see some AFTER triggers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 16:14:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] triggered data change violation "
},
{
"msg_contents": "> A \"triggered data change violation\" happens everytime you change twice\n> within a transaction a value (column) that is part of a foreign key\n> constraint (don't recall exactly which part).\n> \n> This error shouldn't really happen, but I recall there were some\n> implementation and definition problems with deferred constraints.\n> \n> ...FAQ alert...\n\nYes, I just got it in the TODO list a few weeks ago:\n\n* INSERT & UPDATE/DELETE in transaction of primary key fails with \n deferredTriggerGetPreviousEvent or \"change violation\" [foreign]\n\nI personally think we could do better on the wording of that error\nmessage, at least until we get it fixed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 16:15:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] triggered data change violation"
},
{
"msg_contents": "\nOn Tue, 20 Mar 2001, Tom Lane wrote:\n> Cedar Cox <cedarc@visionforisrael.com> writes:\n> > Added note: The trigger is a BEFORE trigger.\n> \n> AFAIK the \"triggered data change\" message comes out of the AFTER trigger\n> code. You sure you don't have any AFTER triggers on the table? Perhaps\n> ones added implicitly by a foreign-key constraint?\n\nNot any that I wrote. Ok, the table def is:\n\nCREATE TABLE tblStSC2Options (\n SC2OptionID int4 NOT NULL,\n SC2OptionName character varying(50) NOT NULL CHECK (SC2OptionName<>''),\n SC2OptionValue float4 CHECK (SC2OptionValue>0),\n SurID character varying(50) NOT NULL REFERENCES tblStSC2 ON UPDATE \nCASCADE ON DELETE CASCADE,\n PRIMARY KEY (SC2OptionID)\n);\n\nAnd there is one other table, tblListRequestSentItems, which has a field:\n\n SC2OptionID int4 DEFAULT 0 NOT NULL REFERENCES tblStSC2Options,\n\nHave I answered your question? (I think so.)\n\n-Cedar\n\n",
"msg_date": "Tue, 20 Mar 2001 23:55:25 +0200 (IST)",
"msg_from": "Cedar Cox <cedarc@visionforisrael.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] triggered data change violation "
}
] |
[
{
"msg_contents": "\n(First of all, is this the right list?)\n\nWhen doing\n pg_dump testdb -u\nI get\n failed sanity check, type with oid 899762 was not found\n\nI searched my backend log for this oid and found something near the\n'tryme' function. As far as I can find I have two functions defined with\ndifferent args and one has a problem. These are an old unused functions I\nwrote in plpgsql. I'm guessing that if I remove them the problem will go\naway.\n\n Result | Function | Arguments\n-----------+---------------------------+----------\n bool | tryme | - \n bool | tryme | record \n\ntestdb=# select proargtypes from pg_proc where proname='tryme';\n proargtypes \n-------------\n 298035\n 899762\n(2 rows)\n\n\nAm I making sense? .. comments? What's going on?\n\nThanks\n-Cedar\n\n",
"msg_date": "Tue, 20 Mar 2001 23:41:54 +0200 (IST)",
"msg_from": "Cedar Cox <cedarc@visionforisrael.com>",
"msg_from_op": true,
"msg_subject": "pg_dump - failed sanity check, type"
}
] |
[
{
"msg_contents": "Ok, thanks to our snowstorm :-0 I have been working on the beta 6 RPM situation\non my _slow_ notebook today (power outages for ten minutes at a time happening\nat hour or so intervals due to 45mph+ winds and a foot of snow....).\n\nWell, I have preliminary RPM's built -- just need to work on the contrib tree\nsituation. I ran regression the usual RPM way (which I am fully aware is not\nthe normally approved method, but it _would_ be the method any RPM beta testers\nwould use), and got a different failure, one that is not locale related\n(LC_ALL=C both for the initdb and the postmaster startup in the newest\ninitscript). See attached regression.diffs for details of the temptest failure\nI experienced.\n\nRegression run with CWD=/usr/share/test/regress, user=postgres.\n./pg_regress --schedule=parallel_schedule\n\nThis is the only regression test failure I have found thus far. I have never\nseen this failure before, so I'm not sure where to proceed.\n\nNow to attack the contrib tree (looking forward to my new notebook, as this old\nP133 takes an hour and twenty minutes to slog through a full build....).\n\nSeeing that RC1 is in prep, is there a pressing need to upload and release beta\n6 RPM's, or will it be a day or two before RC1?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11",
"msg_date": "Tue, 20 Mar 2001 16:48:55 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> DROP TABLE temptest;\n> + NOTICE: FlushRelationBuffers(temptest, 0): block 0 is referenced (private 0, global 1)\n> + ERROR: heap_drop_with_catalog: FlushRelationBuffers returned -2\n> SELECT * FROM temptest;\n\nHoo, that's interesting ... Exactly what fileset were you using again?\n\n> Seeing that RC1 is in prep, is there a pressing need to upload and\n> release beta 6 RPM's, or will it be a day or two before RC1?\n\nI think you might as well wait for RC1 as far as actually making RPMs\ngoes. But do you want to let anyone else check out the RPM build\nprocess? For instance, I've been wondering what you did about the\nwhich-set-of-headers-to-install issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 17:21:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0. "
},
{
"msg_contents": "On Tue, 20 Mar 2001, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > DROP TABLE temptest;\n> > + NOTICE: FlushRelationBuffers(temptest, 0): block 0 is referenced (private 0, global 1)\n> > + ERROR: heap_drop_with_catalog: FlushRelationBuffers returned -2\n> > SELECT * FROM temptest;\n \n> Hoo, that's interesting ... Exactly what fileset were you using again?\n\nWhen you say 'fileset', I'm assuming you are referring to the --schedule\nparameter -- I am invoking the following command:\n./pg_regress --schedule=parallel_schedule \n\n7.1beta6 distribution tarball. LC_ALL=C. Compiled on RedHat 7 as shipped.\n\nI'm rerunning to see if it is intermittent. Second run -- no error. Running a\nthird time......no error. Now I'm confused. What would cause such an error,\nTom? I'm going to check on my desktop, once power gets more stable (and it\nquits lightning -- yes, a snowstorm with lightning :-0 I certainly got what I\nwanted.....). So, more to come later.\n\n> > Seeing that RC1 is in prep, is there a pressing need to upload and\n> > release beta 6 RPM's, or will it be a day or two before RC1?\n \n> I think you might as well wait for RC1 as far as actually making RPMs\n> goes. But do you want to let anyone else check out the RPM build\n> process? For instance, I've been wondering what you did about the\n> which-set-of-headers-to-install issue.\n\nOh, ok. Spec file attached. All other files needed are the beta6 tarball and\nthe contents of the beta4-1 source rpm, with names changed to match the beta6\nversion number. There are some other changes I have to merge in --\nparticularly a set from Karl for the optional PL/Perl build, as well as others,\nso this is a preliminary spec file.\n\nBut I was just getting the basic build done and tested.\n\nTo directly answer your question, I'm using 'make install-all-headers' and\nstuffing it into the devel rpm in one piece at this time.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11",
"msg_date": "Tue, 20 Mar 2001 17:28:08 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "On Tue, 20 Mar 2001, Lamar Owen wrote:\n\n> Ok, thanks to our snowstorm :-0 I have been working on the beta 6 RPM situation\n> on my _slow_ notebook today (power outages for ten minutes at a time happening\n> at hour or so intervals due to 45mph+ winds and a foot of snow....).\n>\n> Well, I have preliminary RPM's built -- just need to work on the contrib tree\n> situation. I ran regression the usual RPM way (which I am fully aware is not\n> the normally approved method, but it _would_ be the method any RPM beta testers\n> would use), and got a different failure, one that is not locale related\n> (LC_ALL=C both for the initdb and the postmaster startup in the newest\n> initscript). See attached regression.diffs for details of the temptest failure\n> I experienced.\n>\n> Regression run with CWD=/usr/share/test/regress, user=postgres.\n> ./pg_regress --schedule=parallel_schedule\n>\n> This is the only regression test failure I have found thus far. I have never\n> seen this failure before, so I'm not sure where to proceed.\n>\n> Now to attack the contrib tree (looking forward to my new notebook, as this old\n> P133 takes an hour and twenty minutes to slog through a full build....).\n>\n> Seeing that RC1 is in prep, is there a pressing need to upload and release beta\n> 6 RPM's, or will it be a day or two before RC1?\n\nI'm going to do RC1 tonight ... so no pressing need :)\n\n\n",
"msg_date": "Tue, 20 Mar 2001 18:43:45 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> DROP TABLE temptest;\n> + NOTICE: FlushRelationBuffers(temptest, 0): block 0 is referenced (private 0, global 1)\n> + ERROR: heap_drop_with_catalog: FlushRelationBuffers returned -2\n> SELECT * FROM temptest;\n \n>> Hoo, that's interesting ... Exactly what fileset were you using again?\n\n> When you say 'fileset', I'm assuming you are referring to the --schedule\n> parameter --\n\nNo, I was wondering about whether you had an inconsistent set of source\nfiles, or had managed to not do a complete rebuild, or something like\nthat. The above error should be entirely impossible considering that\nthe table in question is a temp table that's not been touched by any\nother backend. If you did manage to get this from a clean build then\nI think we have a serious problem to look at.\n\n>> I think you might as well wait for RC1 as far as actually making RPMs\n>> goes. But do you want to let anyone else check out the RPM build\n>> process? For instance, I've been wondering what you did about the\n>> which-set-of-headers-to-install issue.\n\n> Oh, ok. Spec file attached. All other files needed are the beta6 tarball and\n> the contents of the beta4-1 source rpm, with names changed to match the beta6\n> version number.\n\nOK, I will pull the files and try to replicate this on my own laptop.\nDoes anyone else have time to try to duplicate the problem tonight?\nIf it's replicatable at all, I think it's a release stopper.\n\n> To directly answer your question, I'm using 'make install-all-headers' and\n> stuffing it into the devel rpm in one piece at this time.\n\nWorks for me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 18:01:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0. "
},
{
"msg_contents": "On Tue, 20 Mar 2001, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > DROP TABLE temptest;\n> > + NOTICE: FlushRelationBuffers(temptest, 0): block 0 is referenced (private 0, global 1)\n> > + ERROR: heap_drop_with_catalog: FlushRelationBuffers returned -2\n> > SELECT * FROM temptest;\n\n> >> Hoo, that's interesting ... Exactly what fileset were you using again?\n \n> > When you say 'fileset', I'm assuming you are referring to the --schedule\n> > parameter --\n \n> No, I was wondering about whether you had an inconsistent set of source\n> files, or had managed to not do a complete rebuild, or something like\n> that. The above error should be entirely impossible considering that\n> the table in question is a temp table that's not been touched by any\n> other backend. If you did manage to get this from a clean build then\n> I think we have a serious problem to look at.\n\nStandard RPM rebuild -- always wipes the whole build tree out and re-expands\nfrom the tarball, reapplies patches, and rebuilds from scratch every time I\nchange even the smallest detail in the spec file -- which is why it takes so\nlong to get these things out. So, no, this is a scratch build from a fresh\ntarball.\n\n> Does anyone else have time to try to duplicate the problem tonight?\n> If it's replicatable at all, I think it's a release stopper.\n\nI have not yet been able to repeat the problem. I am running my fifth\nregression test run (which takes a long time on this P133) with a freshly\ninitdb'ed PGDATA -- the previous regression runs were done on the same PGDATA\ntree as the first run was done on. Took 12 minutes 40 seconds, but I can't\nrepeat the error. I'm hoping it was a problem on my machine -- educate me on\nwhat caused the error so I can see if something in my setup did something not\nso nice. So, the score is one error out of six test runs, thus far.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 20 Mar 2001 18:07:03 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> I'm hoping it was a problem on my machine -- educate me on\n> what caused the error\n\nWell, that's exactly what I'd like to know. The direct cause of the\nerror is that DROP TABLE is finding that some other backend has a\nreference-count hold on a page of the temp table it's trying to drop.\nSince no other backend should be trying to touch this temp table,\nthere's something pretty fishy here.\n\nGiven that this is a parallel test, you may be looking at a\nlow-probability timing-dependent failure. I'd say set up the machine\nand run repeat tests for an hour or three ... that's what I plan to do\nhere.\n\nBTW, what postmaster parameters are you using --- -B and so forth?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 18:22:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0. "
},
{
"msg_contents": "On Tue, 20 Mar 2001, Tom Lane wrote:\n> Since no other backend should be trying to touch this temp table,\n> there's something pretty fishy here.\n\nI see.\n \n> Given that this is a parallel test, you may be looking at a\n> low-probability timing-dependent failure. I'd say set up the machine\n> and run repeat tests for an hour or three ... that's what I plan to do\n> here.\n\nAs a broadcast engineer, I'm a little too familiar with such things. But this\nisn't an engineer list, so I'll spare you the war stories. :-)\n\n> BTW, what postmaster parameters are you using --- -B and so forth?\n\nDefault. To be changed before RPM release, but currently it is the default.\nThe only option that postmaster.opts records is -D, and I'm not passing\nanything else. \n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 20 Mar 2001 18:34:00 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "On Tue, 20 Mar 2001, Tom Lane wrote:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > I'm hoping it was a problem on my machine -- educate me on\n> > what caused the error\n>\n> Well, that's exactly what I'd like to know. The direct cause of the\n> error is that DROP TABLE is finding that some other backend has a\n> reference-count hold on a page of the temp table it's trying to drop.\n> Since no other backend should be trying to touch this temp table,\n> there's something pretty fishy here.\n>\n> Given that this is a parallel test, you may be looking at a\n> low-probability timing-dependent failure. I'd say set up the machine\n> and run repeat tests for an hour or three ... that's what I plan to do\n> here.\n\nOkay, I roll'd an RC1 but haven't put it up for FTP yet ... I'll wait for\na few hours to see if anyone can reproduce this, and, if not, put out what\nI've rolled ...\n\nsay, 00:00AST ...\n\n",
"msg_date": "Tue, 20 Mar 2001 19:46:02 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0. "
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Okay, I roll'd an RC1 but haven't put it up for FTP yet ... I'll wait for\n> a few hours to see if anyone can reproduce this, and, if not, put out what\n> I've rolled ...\n\nThis will not be RC1 :-(\n\nI've been running one backend doing repeated iterations of\n\nCREATE TABLE temptest(col int);\nINSERT INTO temptest VALUES (1);\n\nCREATE TEMP TABLE temptest(col int);\nINSERT INTO temptest VALUES (2);\nSELECT * FROM temptest;\nDROP TABLE temptest;\n\nSELECT * FROM temptest;\nDROP TABLE temptest;\n\nand another one doing repeated CHECKPOINTs. I've already gotten a\ncouple occurrences of Lamar's failure.\n\nI think the problem is that BufferSync unconditionally does PinBuffer\non each buffer, and holds the pin during intervals where it's released\nBufMgrLock, even if there's not really anything for it to do on that\nbuffer. If someone else is running FlushRelationBuffers then it's\npossible for that routine to see a nonzero pin count when it looks.\n\nVadim, what do you think about how to change this? I think this is\nBufferSync's fault not FlushRelationBuffers's ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 19:19:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0. "
},
{
"msg_contents": "On Tue, 20 Mar 2001, Tom Lane wrote:\n> This will not be RC1 :-(\n> 'Ive already gotten a\n> couple occurrences of Lamar's failure.\n\nWell, I was at least hoping it was a problem here -- particularly since I\nhaven't been able to reproduce it. But, since it is not a local problem, I'm\nglad I caught it -- on the first regression test run, no less. I've run a\ndozen tests since without duplication.\n\nAlthough, like you, Tom, I'm curious as to why it hadn't showed up before -- is\nthe fact that this is a slow machine a factor, possibly?\n\nAlthough I am now much more leery of our regression suite -- this issue isn't\neven tested, in reality. Do we have _any_ WAL-related tests? The parallel\ntesting is a good thing -- but I wonder what boundary conditions aren't getting\ntested.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 20 Mar 2001 19:25:24 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Although I am now much more leery of our regression suite\n\nThe regression tests are not at all designed to test concurrent\nbehavior, and never have been. The parallel form runs some tests\nin parallel, true, but those tests are deliberately designed not to\ninteract. So I don't put any faith in the regression tests as a means\nto catch bugs like this. We need some thought and work on better\nconcurrent tests...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 19:40:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0. "
},
{
"msg_contents": "> Seeing that RC1 is in prep, is there a pressing need to upload and release beta\n> 6 RPM's, or will it be a day or two before RC1?\n\nCan I get the src rpm to give a try on Mandrake? I had trouble with\n7.0.3 (a mysterious disappearing file in the perl build) and would like\nto see where we are at with 7.1...\n\n - Thomas\n",
"msg_date": "Wed, 21 Mar 2001 01:10:14 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "On Tue, 20 Mar 2001, Thomas Lockhart wrote:\n> > Seeing that RC1 is in prep, is there a pressing need to upload and release beta\n> > 6 RPM's, or will it be a day or two before RC1?\n \n> Can I get the src rpm to give a try on Mandrake? I had trouble with\n> 7.0.3 (a mysterious disappearing file in the perl build) and would like\n> to see where we are at with 7.1...\n\nSure. If you want to try out one already up there, pull the beta4 set off the\nftp site. I'm on dialup right now -- it will take quite some time to get an\nsrc.rpm up for beta 6. Although, it does look like it may be a little bit\nbefore RC1, now. I'm at beta6-0.2 right now, with several changes to make in\nthe line, but, I can upload if you can wait a couple of hours (I'm in a rebuild\nright now for 0.2, which will take 77 minutes or more on this machine, and then\nI have to scp it over to hub.).\n\nTomorrow morning, if I can get out of the snow-covered driveway and to work, I\ncan upload it much quicker.\n\nI'll go ahead and upload the one I'm testing with right now if you'd like.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 20 Mar 2001 20:27:44 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "> I'll go ahead and upload the one I'm testing with right now if you'd like.\n\nNot necessary, unless (I suppose) that you know the rpm for beta 4 is\nbroken. That vintage CVS tree behaved well enough for me try it out\nafaicr...\n\n - Thomas\n",
"msg_date": "Wed, 21 Mar 2001 01:48:50 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "On Tue, 20 Mar 2001, Thomas Lockhart wrote:\n> > I'll go ahead and upload the one I'm testing with right now if you'd like.\n \n> Not necessary, unless (I suppose) that you know the rpm for beta 4 is\n> broken. That vintage CVS tree behaved well enough for me try it out\n> afaicr...\n\nIt's a good start to test with for the purposes for which I think you want to\ntest for. (and I'm an English teacher by night -- argh). Beta 6 changes a few\nminor things and one major thing -- the minor things are:\n- Separate libs package with requisite dependency redo\n- Change in the initscript to use pg_ctl to (properly) stop postmaster (no\n kill -9's here this time :-))\n- Change in the initscript to initdb with LC_ALL=C and to start postmaster \n with LC_ALL=C as well.\n- devel subpackage now uses make install-all-headers instead of cpp hack to\n pull in required headers for client and server development.\n\nThe major thing is going to be a build of the contrib tree and a contrib\nsubpackage -- the source will remain as part of the docs, but now that whole\nset of useful files will be built out. That is what I was beginning to do when\nI stumbled across the regression failure that subsequently took the rest of the\nafternoon to track.\n\nBefore final release I have a rewrite of the README to do, as well as a full\nupdate of the migration scripts for testing.\n\nI'm looking at /usr/lib/pgsql/contrib/* for the contrib stuff.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 20 Mar 2001 21:30:02 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": true,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "> It's a good start to test with for the purposes for which I think you want to\n> test for. (and I'm an English teacher by night -- argh).\n\n:)\n\nMandrake (as of 7.2) still does a brain-dead mix of \"-O3\" and\n\"-ffast-math\", which is a risky and unnecessary combination according to\nthe gcc folks (and which kills some of our date/time rounding). From the\nman page for gcc:\n\n-ffast-math\n This option should never be turned on by any `-O' option\n since it can result in incorrect output for programs which\n depend on an exact implementation of IEEE or ANSI\n rules/specifications for math functions.\n\nI'd like to get away from having to post a non-brain-dead /root/.rpmrc\nfile which omits the -ffast-math flag. Can you suggest mechanisms for\nputting a \"-fno-fast-math\" into the spec file? Isn't there a mechanism\nto mark things as \"distro specific\"? Suggestions?\n\nAlso, I'm getting the same symptom as I had for 7.0.3 with a\n\"disappearing file\". Anyone seen this? I recall tracing this back for\nthe 7.0.3 case and found that Pg.bs existed in the build tree, at least\nat some point in the build, but then goes away. 7.0.2, at least at the\ntime I did the build, did not have the problem :(\n\nFile not found: /var/tmp/postgresql-7.1beta4-root/ (cont'd)\n usr/lib/perl5/site_perl/5.6.0/i386-linux/auto/Pg/Pg.bs\n\n - Thomas\n",
"msg_date": "Wed, 21 Mar 2001 03:24:43 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "RPM building (was regression on RedHat)"
},
{
"msg_contents": "At 3/20/2001 09:24 PM, Thomas Lockhart wrote:\n> > It's a good start to test with for the purposes for which I think you \n> want to\n> > test for. (and I'm an English teacher by night -- argh).\n>\n>:)\n>\n>Mandrake (as of 7.2) still does a brain-dead mix of \"-O3\" and\n>\"-ffast-math\", which is a risky and unnecessary combination according to\n>the gcc folks (and which kills some of our date/time rounding). From the\n>man page for gcc:\n>\n>-ffast-math\n> This option should never be turned on by any `-O' option\n> since it can result in incorrect output for programs which\n> depend on an exact implementation of IEEE or ANSI\n> rules/specifications for math functions.\n>\n>I'd like to get away from having to post a non-brain-dead /root/.rpmrc\n>file which omits the -ffast-math flag. Can you suggest mechanisms for\n>putting a \"-fno-fast-math\" into the spec file? Isn't there a mechanism\n>to mark things as \"distro specific\"? Suggestions?\n\nI don't know if it helps. But, a stock install has the environment \nMACHTYPE=i586-mandrake-linux.\n\nIf you hunt for mandrake in the MACHTYPE variable you could reset those \nvariables.\n\nAlso, I think those are set in the rpmrc file of the distro for the i386 \ntarget. If you specify anything else like i486, i686, you don't have that \nproblem.\n\nIt would be in the RPM_OPT_FLAGS or RPM_OPTS part of the build \nenvironment. I don't think there would be a problem overriding it, in \nfact, I would recommend the following : RPM_OPTS=\"$RPM_OPTS \n-fno-fast-math\". Since gcc will take the last argument as overriding the \nfirst, it would be a nice safeguard.\n\nEven setting CFLAGS=\"$CFLAGS -fno-fast-math\" might be good idea.\n\nHope this helps,\nThomas\n\n",
"msg_date": "Tue, 20 Mar 2001 22:15:22 -0600",
"msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n\n> > It's a good start to test with for the purposes for which I think you want to\n> > test for. (and I'm an English teacher by night -- argh).\n> \n> :)\n> \n> Mandrake (as of 7.2) still does a brain-dead mix of \"-O3\" and\n> \"-ffast-math\", which is a risky and unnecessary combination according to\n> the gcc folks (and which kills some of our date/time rounding). From the\n> man page for gcc:\n> \n> -ffast-math\n> This option should never be turned on by any `-O' option\n> since it can result in incorrect output for programs which\n> depend on an exact implementation of IEEE or ANSI\n> rules/specifications for math functions.\n> \n> I'd like to get away from having to post a non-brain-dead /root/.rpmrc\n> file which omits the -ffast-math flag. Can you suggest mechanisms for\n> putting a \"-fno-fast-math\" into the spec file? Isn't there a mechanism\n> to mark things as \"distro specific\"? Suggestions?\n\nIf Mandrake wants to be broken, let them - and tell them.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "21 Mar 2001 10:12:40 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "> If Mandrake wants to be broken, let them - and tell them.\n\nThey know ;) But just as with RH, they build ~1500 packages, so it is\nprobably not realistic to get them to change their build standards over\none misbehavior in one package.\n\nThe goal here is to get PostgreSQL to work well for as many platforms as\npossible. Heck, we even build for M$ ;)\n\nSo, I'm still looking for the best way to add a compile flag while\nmaking it clear that it is for one distro only. Of course, it would be\npossible to just add it at the end of the flags, but it would be nice to\ndo that only when necessary.\n\nRegards.\n\n - Thomas\n",
"msg_date": "Wed, 21 Mar 2001 16:02:06 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Mandrake (as of 7.2) still does a brain-dead mix of \"-O3\" and\n> \"-ffast-math\", which is a risky and unnecessary combination according to\n> the gcc folks (and which kills some of our date/time rounding). From the\n> man page for gcc:\n>\n> -ffast-math\n> This option should never be turned on by any `-O' option\n> since it can result in incorrect output for programs which\n> depend on an exact implementation of IEEE or ANSI\n> rules/specifications for math functions.\n\nYou're reading this wrong. What this means is:\n\n\"If you're working on GCC, do not ever think of enabling -ffast-math\nimplicitly by any -Ox level [since most other -fxxx options are grouped\nunder some -Ox], since programs that might want optimization could still\ndepend on correct IEEE math.\"\n\nIn particular, Mandrake is not wrong to compile with -O3 and -ffast-math.\nThe consequence would only be slightly incorrect math results, and that is\nwhat indeed happened.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 21 Mar 2001 19:35:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "NO!\n\nIt's not \"Mandrake\" that will be broken. Mandrake is also often used by\nnew Linux users who wouldn't have the slightest idea about setting GCC\noptions. It'll be THEM that have broken installations if we take this\napproach (as an aside, that means that WE will be probably also be\nanswering more questions about PostgreSQL being broken on Mandrake\nsystems).\n\nIsn't it better that PostgreSQL works with what it's got on a system AND\nALSO that someone notifies the Mandrake people regarding the problem?\n\nRegards and best wishes,\n\nJustin Clift\n\nTrond Eivind Glomsr�d wrote:\n> \n<snip>\n>\n> If Mandrake wants to be broken, let them - and tell them.\n> \n> --\n> Trond Eivind Glomsr�d\n> Red Hat, Inc.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n",
"msg_date": "Thu, 22 Mar 2001 11:16:44 +1100",
"msg_from": "Justin Clift <aa2@bigpond.net.au>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "Is the right approach for the ./configure script to check for the\nexistence of the /etc/mandrake-release file as at least an initial\nindicator that the compile is happening on Mandrake?\n\nRegards and best wishes,\n\nJustin Clift\n\nThomas Lockhart wrote:\n> \n> > If Mandrake wants to be broken, let them - and tell them.\n> \n> They know ;) But just as with RH, they build ~1500 packages, so it is\n> probably not realistic to get them to change their build standards over\n> one misbehavior in one package.\n> \n> The goal here is to get PostgreSQL to work well for as many platforms as\n> possible. Heck, we even build for M$ ;)\n> \n> So, I'm still looking for the best way to add a compile flag while\n> making it clear that it is for one distro only. Of course, it would be\n> possible to just add it at the end of the flags, but it would be nice to\n> do that only when necessary.\n> \n> Regards.\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 22 Mar 2001 11:18:41 +1100",
"msg_from": "Justin Clift <aa2@bigpond.net.au>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "Justin Clift <aa2@bigpond.net.au> writes:\n\n> It's not \"Mandrake\" that will be broken. Mandrake is also often used by\n> new Linux users who wouldn't have the slightest idea about setting GCC\n> options. It'll be THEM that have broken installations if we take this\n> approach (as an aside, that means that WE will be probably also be\n> answering more questions about PostgreSQL being broken on Mandrake\n> systems).\n> \n> Isn't it better that PostgreSQL works with what it's got on a system AND\n> ALSO that someone notifies the Mandrake people regarding the problem?\n\nMost people will use what the vendor ship - a vendor (like us) look\ninto the benefits (stability, performance, compatiblity) of different\npackages, and make a selection. If they've done a choice of which\noptions are used in their distribution, they are obviously fine with\nthe consequences.\n\n-- \nTrond Eivind Glomsr�d\nRed Hat, Inc.\n",
"msg_date": "21 Mar 2001 19:43:08 -0500",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "Justin Clift <aa2@bigpond.net.au> writes:\n>> So, I'm still looking for the best way to add a compile flag while\n>> making it clear that it is for one distro only.\n\nSince this is only an RPM problem, it should be solved in the RPM spec\nfile, not by hacking the configure script. We had at least one similar\npatch in the 7.0 spec file (for -fsigned-char stupidity in the RPM\nconfiguration on LinuxPPC). That's not needed anymore, but couldn't\nyou fix Mandrake the same way?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 00:00:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat) "
},
{
"msg_contents": "> You're reading this wrong. What this means is:\n> \"If you're working on GCC, do not ever think of enabling -ffast-math\n> implicitly by any -Ox level [since most other -fxxx options are grouped\n> under some -Ox], since programs that might want optimization could still\n> depend on correct IEEE math.\"\n> In particular, Mandrake is not wrong to compile with -O3 and -ffast-math.\n> The consequence would only be slightly incorrect math results, and that is\n> what indeed happened.\n\n?? I think we agree. It happens to be the case that slightly incorrect\nresults are wrong results, and that full IEEE math conformance gives\nexactly correct results. For the case of date/time, the \"slightly wrong\"\nresults round up to 60.0 seconds for times on an even minute boundary,\nwhich is just plain wrong.\n\n - Thomas\n",
"msg_date": "Thu, 22 Mar 2001 06:50:32 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> ?? I think we agree. It happens to be the case that slightly incorrect\n> results are wrong results, and that full IEEE math conformance gives\n> exactly correct results. For the case of date/time, the \"slightly wrong\"\n> results round up to 60.0 seconds for times on an even minute boundary,\n> which is just plain wrong.\n\nWell, you're going to have to ask a numerical analyst about this. If you\ntake that stance then -ffast-math is always wrong, no matter what the\ncombination of other switches. The \"wrong\" results might be harder to\nreproduce without any optimization going on, but they could still happen.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 17:22:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
},
{
"msg_contents": "> Well, you're going to have to ask a numerical analyst about this. If you\n> take that stance then -ffast-math is always wrong, no matter what the\n> combination of other switches. The \"wrong\" results might be harder to\n> reproduce without any optimization going on, but they could still happen.\n\nGrumble. OK, I'll rephrase my statement: it is not \"wrong\", but \"does\nnot produce the *required* result\". \n\nThe date/time stuff relies on conventional IEEE arithmetic rounding and\ntruncation rules to produce the world-wide, universally accepted\nconventions for date/time representation. And will do so *if* the\ncompiler produces math which conforms to IEEE (and many other, in my\nexperience) conventions for arithmetic. So, if someone actually would\nwant to get date/time results which conform to those conventions, and if\nthey would characterize that conformance as \"correct\", then they might\nmake the leap of phrase to characterize nonconformance to those\nconventions as \"wrong\".\n\n - Thomas (who is just finishing eight days of jury\nduty ;)\n",
"msg_date": "Thu, 22 Mar 2001 19:53:08 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: RPM building (was regression on RedHat)"
}
] |
[
{
"msg_contents": "Hello;\nI installed postgresql. I compiled it and started the server successfully\nbut when I'm trying to connect to database I get this message:\n Could not load the JDBC driver. org.postgresql.Driver reason: The backend\nhas broken the connection. Possibly the action you have attempted has caused\nit to close.\nWhat is the reason for this message?\nEsmat sedghi.\nThanks\n\n",
"msg_date": "Tue, 20 Mar 2001 16:50:05 -0500",
"msg_from": "\"Rosie Sedghi\" <rosie@macadamian.com>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Redhat Linux 7.0 (glibc 2.2-12, gcc 2.96-69)\n\nMikeA\n\n-----Original Message-----\nFrom: Peter Eisentraut\nTo: The Hermit Hacker\nCc: pgsql-hackers@postgresql.org\nSent: 20/03/01 19:11\nSubject: Re: [HACKERS] Final Call: RC1 about to go out the door ...\n\nThe Hermit Hacker writes:\n\n> \tWe'd like to wrap up an RC1 and get this release happening this\n> year sometime :) Tom mentioned to me that he has no outstandings left\non\n> his plate ... does anyone else have any *show stoppers* left that need\nto\n> be addressed, or can I package things up?\n\nI just uploaded new man pages. I'll probably do them once more in a few\ndays to catch all the changes.\n\nWe need a supported platform list. Let's hear it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n_________________________________________________________________________\nThis e-mail and any attachments are confidential and may also be privileged and/or copyright \nmaterial of Intec Telecom Systems PLC (or its affiliated companies). If you are not an \nintended or authorised recipient of this e-mail or have received it in error, please delete \nit immediately and notify the sender by e-mail. In such a case, reading, reproducing, \nprinting or further dissemination of this e-mail is strictly prohibited and may be unlawful. \nIntec Telecom Systems PLC. does not represent or warrant that an attachment hereto is free \nfrom computer viruses or other defects. The opinions expressed in this e-mail and any \nattachments may be those of the author and are not necessarily those of Intec Telecom \nSystems PLC. \n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses. 
\n__________________________________________________________________________\n\n\n\n\n\nRE: [HACKERS] Final Call: RC1 about to go out the door ...\n\n\nRedhat Linux 7.0 (glibc 2.2-12, gcc 2.96-69)\n\nMikeA\n\n-----Original Message-----\nFrom: Peter Eisentraut\nTo: The Hermit Hacker\nCc: pgsql-hackers@postgresql.org\nSent: 20/03/01 19:11\nSubject: Re: [HACKERS] Final Call: RC1 about to go out the door ...\n\nThe Hermit Hacker writes:\n\n> We'd like to wrap up an RC1 and get this release happening this\n> year sometime :) Tom mentioned to me that he has no outstandings left\non\n> his plate ... does anyone else have any *show stoppers* left that need\nto\n> be addressed, or can I package things up?\n\nI just uploaded new man pages. I'll probably do them once more in a few\ndays to catch all the changes.\n\nWe need a supported platform list. Let's hear it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n\n_________________________________________________________________________\nThis e-mail and any attachments are confidential and may also be privileged and/or copyright \nmaterial of Intec Telecom Systems PLC (or its affiliated companies). If you are not an \nintended or authorised recipient of this e-mail or have received it in error, please delete \nit immediately and notify the sender by e-mail. In such a case, reading, reproducing, \nprinting or further dissemination of this e-mail is strictly prohibited and may be unlawful. \nIntec Telecom Systems PLC. does not represent or warrant that an attachment hereto is free \nfrom computer viruses or other defects. The opinions expressed in this e-mail and any \nattachments may be those of the author and are not necessarily those of Intec Telecom \nSystems PLC. 
\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses. \n__________________________________________________________________________",
"msg_date": "Tue, 20 Mar 2001 22:03:26 -0000",
"msg_from": "Michael Ansley <Michael.Ansley@intec-telecom-systems.com>",
"msg_from_op": true,
"msg_subject": "RE: Final Call: RC1 about to go out the door ..."
}
] |
[
{
"msg_contents": "> I'm rerunning to see if it is intermittent. Second run -- no \n> error. Running a third time......no error. Now I'm confused.\n> What would cause such an error, Tom? I'm going to check on my\n\nHmm, concurrent checkpoint? Probably we could simplify dirty test\nin ByfferSync() - ie test bufHdr->cntxDirty without holding\nshlock (and pin!) on buffer: should be good as long as we set\ncntxDirty flag *before* XLogInsert in access methods. Have to\nlook more...\n\nVadim\n",
"msg_date": "Tue, 20 Mar 2001 15:25:24 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Beta 6 Regression results on Redat 7.0."
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Hmm, concurrent checkpoint? Probably we could simplify dirty test\n> in ByfferSync() - ie test bufHdr->cntxDirty without holding\n> shlock (and pin!) on buffer: should be good as long as we set\n> cntxDirty flag *before* XLogInsert in access methods. Have to\n> look more...\n\nYes, I'm wondering if some other backend is trying to write/flush\nthe buffer (maybe as part of a checkpoint, maybe not). But seems\nlike we should have seen this before, if so; that's not a low-\nprobability scenario, particularly with just 64 buffers...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 18:44:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0. "
}
] |
[
{
"msg_contents": "With RC1 nearing, when should I run pgindent? This is usually the time\nI do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Mar 2001 18:36:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pgindent run?"
},
{
"msg_contents": "> With RC1 nearing, when should I run pgindent? This is usually the time\n> I do it.\n\nDoes the silence mean I should pick a date to run this?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 14:43:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > With RC1 nearing, when should I run pgindent? This is usually the time\n> > I do it.\n> \n> Are there any severely mis-indented files?\n\nNot sure. I think there are some. It doesn't do anything unless there\nis mis-indenting, so it is pretty safe and has always been done in the\npast. It obviously only affects new changes since the last run.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 15:14:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "On Wed, 21 Mar 2001, Bruce Momjian wrote:\n\n> > With RC1 nearing, when should I run pgindent? This is usually the time\n> > I do it.\n>\n> Does the silence mean I should pick a date to run this?\n\nSince I'm going to end up re-rolling RC1, do a run tonight on her, so that\nany problems that arise from pgindent this time can be caught with those\ntesting RC1 ...\n\n\n",
"msg_date": "Wed, 21 Mar 2001 16:20:34 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> On Wed, 21 Mar 2001, Bruce Momjian wrote:\n> \n> > > With RC1 nearing, when should I run pgindent? This is usually the time\n> > > I do it.\n> >\n> > Does the silence mean I should pick a date to run this?\n> \n> Since I'm going to end up re-rolling RC1, do a run tonight on her, so that\n> any problems that arise from pgindent this time can be caught with those\n> testing RC1 ...\n\nGood idea. It is well tested, but you never know. \n\nPeter, this is the optimial time to do it because no one has any\noutstanding patches at this point. Seems this is the only good time.\n\nUnless someone says otherwise, I will do the run tonight.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 15:21:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> With RC1 nearing, when should I run pgindent? This is usually the time\n> I do it.\n\nAre there any severely mis-indented files?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 21 Mar 2001 21:22:32 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > Peter, this is the optimial time to do it because no one has any\n> > outstanding patches at this point. Seems this is the only good time.\n> \n> Actually, I have quite a few outstanding patches. I got screwed by this\n> last time around as well. But I understand that this might be the best\n> time.\n\nThat you are holding? Yes, I have a few to at my new Unapplied\nPatches web page:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nThe good news is that these will apply fine to 7.2 unless they touch an\narea that needed indenting. The problem of not doing it is that the\ncode starts to look different after a while and takes on a chaotic feel.\n\nThis is probably the time when there are the fewest oustanding patches,\nI guess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 15:48:35 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Peter, this is the optimial time to do it because no one has any\n> outstanding patches at this point. Seems this is the only good time.\n\nActually, I have quite a few outstanding patches. I got screwed by this\nlast time around as well. But I understand that this might be the best\ntime.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 21 Mar 2001 21:54:01 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "\nOK, I am going to have dinner and then get started on the pgindent run.\n\nI have also noticed we have some comments like:\n\n\t/* ----\n * one word\n * ----\n */\n\nthat look funny in a few places. I propose:\n\n\t/* one word */\n\nto be consistent.\n\n\n> With RC1 nearing, when should I run pgindent? This is usually the time\n> I do it.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 18:12:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "On Wed, 21 Mar 2001, Bruce Momjian wrote:\n\n>\n> OK, I am going to have dinner and then get started on the pgindent run.\n>\n> I have also noticed we have some comments like:\n>\n> \t/* ----\n> * one word\n> * ----\n> */\n>\n> that look funny in a few places. I propose:\n>\n> \t/* one word */\n>\n> to be consistent.\n\nto be consistent with what ... ? isn't:\n\n/* ----------\n * comment\n * ----------\n */\n\nthe standard?\n\n>\n>\n> > With RC1 nearing, when should I run pgindent? This is usually the time\n> > I do it.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://www.postgresql.org/search.mpl\n> >\n>\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Wed, 21 Mar 2001 19:25:49 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> On Wed, 21 Mar 2001, Bruce Momjian wrote:\n> \n> >\n> > OK, I am going to have dinner and then get started on the pgindent run.\n> >\n> > I have also noticed we have some comments like:\n> >\n> > \t/* ----\n> > * one word\n> > * ----\n> > */\n> >\n> > that look funny in a few places. I propose:\n> >\n> > \t/* one word */\n> >\n> > to be consistent.\n> \n> to be consistent with what ... ? isn't:\n> \n> /* ----------\n> * comment\n> * ----------\n> */\n> \n> the standard?\n\nSorry. It has been a while since I studied this. The issue is the\ndashes, not the block comments. /* --- is needed for multi-line comment\nwhere you want to preserve the layout, but in other cases, it prevents\ncomment layout and looks kind of heavy. I eyeball each change to make\nsure it is clean so:\n\n\t/* ---\n\t * test\n\t * ---\n\t */\n\nbecomes the cleaner:\n\n\t/*\n\t * test\n\t */\n\nThis makes the comment easier to read.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 20:24:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> With RC1 nearing, when should I run pgindent? This is usually the time\n>> I do it.\n\n> Does the silence mean I should pick a date to run this?\n\nIf you're going to do it before the release, I think you should do it\n*before* we wrap RC1. I've said before and will say again that I think\nit's utter folly to run pgindent at the conclusion of the test cycle.\nI've been around this project for three major release cycles and we have\nseen errors introduced by pgindent in two of them. I don't trust\npgindent to be bug-free and I don't believe you should either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 22:58:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> With RC1 nearing, when should I run pgindent? This is usually the time\n> >> I do it.\n> \n> > Does the silence mean I should pick a date to run this?\n> \n> If you're going to do it before the release, I think you should do it\n> *before* we wrap RC1. I've said before and will say again that I think\n> it's utter folly to run pgindent at the conclusion of the test cycle.\n> I've been around this project for three major release cycles and we have\n> seen errors introduced by pgindent in two of them. I don't trust\n> pgindent to be bug-free and I don't believe you should either.\n\nOK, running now. Should I run it at another time or never?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 23:00:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": ">> Are there any severely mis-indented files?\n\nThere are some new contrib modules that are nowhere close to our\nindent conventions; also a good deal of foreign-key-related stuff\nin the parser that needs to be cleaned up. So we should run it.\n\nI've always felt that it'd be smarter to run pgindent at the start\nof a development cycle, not the end, but I've been unable to convince\nBruce of that ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 23:08:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
},
{
"msg_contents": "> >> Are there any severely mis-indented files?\n> \n> There are some new contrib modules that are nowhere close to our\n> indent conventions; also a good deal of foreign-key-related stuff\n> in the parser that needs to be cleaned up. So we should run it.\n> \n> I've always felt that it'd be smarter to run pgindent at the start\n> of a development cycle, not the end, but I've been unable to convince\n> Bruce of that ...\n\nHey, I am open to whatever people want to do. Just remember that we\naccumulate lots of patches/development during the slow time before\ndevelopment, and those patches become harder to apply. Peter E has some\nalready.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 23:10:43 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010321 22:11]:\n> > >> Are there any severely mis-indented files?\n> > \n> > There are some new contrib modules that are nowhere close to our\n> > indent conventions; also a good deal of foreign-key-related stuff\n> > in the parser that needs to be cleaned up. So we should run it.\n> > \n> > I've always felt that it'd be smarter to run pgindent at the start\n> > of a development cycle, not the end, but I've been unable to convince\n> > Bruce of that ...\n> \n> Hey, I am open to whatever people want to do. Just remember that we\n> accumulate lots of patches/development during the slow time before\n> development, and those patches become harder to apply. Peter E has some\n> already.\nHow about:\n1) just AFTER release\n2) just BEFORE Beta \n\nLER\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 21 Mar 2001 22:12:26 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "On Wed, 21 Mar 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> With RC1 nearing, when should I run pgindent? This is usually the time\n> > >> I do it.\n> >\n> > > Does the silence mean I should pick a date to run this?\n> >\n> > If you're going to do it before the release, I think you should do it\n> > *before* we wrap RC1. I've said before and will say again that I think\n> > it's utter folly to run pgindent at the conclusion of the test cycle.\n> > I've been around this project for three major release cycles and we have\n> > seen errors introduced by pgindent in two of them. I don't trust\n> > pgindent to be bug-free and I don't believe you should either.\n>\n> OK, running now. Should I run it at another time or never?\n\nI'll put my vote on Tom's side of things ... run if after the release,\nright at the start of the next development cycle, so that any bugs that\ncrop up aren't just as we are trying to release ...\n\nHell, maybe once then and once *just* as we are going into first beta of a\nrelease ... Tom?\n\n",
"msg_date": "Thu, 22 Mar 2001 00:16:13 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Hey, I am open to whatever people want to do. Just remember that we\n> accumulate lots of patches/development during the slow time before\n> development, and those patches become harder to apply. Peter E has some\n> already.\n\nWhy not start a devel cycle by (a) branching the tree, (b) applying\nall held-over patches, and then (c) running pgindent?\n\nI'd probably wait a week or so between (a) and (c) to let people push\nin whatever they have pending. But in general it seems a lot safer\nto pgindent at the front end of the cycle not the back end.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 23:35:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Hey, I am open to whatever people want to do. Just remember that we\n> > accumulate lots of patches/development during the slow time before\n> > development, and those patches become harder to apply. Peter E has some\n> > already.\n> \n> Why not start a devel cycle by (a) branching the tree, (b) applying\n> all held-over patches, and then (c) running pgindent?\n\nIf people can get their patches in all at one time, that would work. \nThe only problem there is that people who supply patches against 7.1\nwill not match the 7.2 tree, and we get those patches from people for\nmonths.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 23:45:17 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "On Wed, 21 Mar 2001, Bruce Momjian wrote:\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Hey, I am open to whatever people want to do. Just remember that we\n> > > accumulate lots of patches/development during the slow time before\n> > > development, and those patches become harder to apply. Peter E has some\n> > > already.\n> >\n> > Why not start a devel cycle by (a) branching the tree, (b) applying\n> > all held-over patches, and then (c) running pgindent?\n>\n> If people can get their patches in all at one time, that would work.\n> The only problem there is that people who supply patches against 7.1\n> will not match the 7.2 tree, and we get those patches from people for\n> months.\n\nand those patches should only be applied to the v7.1 branch ... what we are\nsuggesting (or, at least, I am) is that you pgindent *HEAD* after we've\nbranched off v7.1 ...\n\n... that way, we go into the new dev cycle \"clean\", but we don't mess up\nthe *STABLE* tree ...\n\n",
"msg_date": "Thu, 22 Mar 2001 00:48:09 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> > If people can get their patches in all at one time, that would work.\n> > The only problem there is that people who supply patches against 7.1\n> > will not match the 7.2 tree, and we get those patches from people for\n> > months.\n> \n> and those patches should only be applied to the v7.1 branch ... what we\n> are suggesting (or, at least, I am) is that you pgindent *HEAD* after\n> we've branched off v7.1 ...\n> \n> ... that way, we go into the new dev cycle \"clean\", but we don't mess up\n> the *STABLE* tree ...\n\nBut we get patches from 7.0.X now that get applied to 7.1. Not all our\ndevelopers are working on CVS trees. Many use their main trees and\nsend in the occasional patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 23:50:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "On Wed, 21 Mar 2001, Bruce Momjian wrote:\n\n> > > If people can get their patches in all at one time, that would work.\n> > > The only problem there is that people who supply patches against 7.1\n> > > will not match the 7.2 tree, and we get those patches from people for\n> > > months.\n> >\n> > and those patches should only be applied to the v7.1 branch ... we are\n> > suggesting (or, at least, I am) is that you pgindent *HEAD* after we've\n> > branched off v7.1 ...\n> >\n> > ... that way, we go into the new dev cycle \"clean\", but we doon't mess up\n> > the *STABLE* tree ...\n>\n> But we get patches from 7.0.X now that get applied to 7.1. All our\n> developers are not working on CVS trees. Many use their main trees and\n> send in the occasional patch.\n\nand most times, those have to be merged into the source tree due to\nextensive changes anyway ... maybe we should just get rid of the use of\npgindent altogether? its not something that I've ever seen required on\nother projects I've worked on ... in general, most projects seem to\nrequire that a submit'd patch from an older release be at least tested on\nthe newest CVS, and with nightly snapshots being created as it is, I\nreally don't see why such a requirement is a bad thing ...\n\n\n",
"msg_date": "Thu, 22 Mar 2001 00:57:19 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> and most times, those have to be merged into the source tree due to\n> extensive changes anyway ... maybe we should just get rid of the use of\n> pgindent altogether? its not something that I've ever seen required on\n> other projects I've worked on ... in general, most projects seem to\n> require that a submit'd patch from an older release be at least tested on\n> the newest CVS, and with nightly snapshots being created as it is, I\n> really don't see why such a requirement is a bad thing ...\n\nIn an ideal world, people would test on CVS but in reality, the patches\nare usually pretty small and if they fix the problem, we apply them.\n\nSeems like a lot of work just to avoid pgindent.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 23:59:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "On Wed, 21 Mar 2001, Bruce Momjian wrote:\n\n> > and most times, those have to be merged into the source tree due to\n> > extensive changes anyway ... maybe we should just get rid of the use of\n> > pgindent altogether? its not something that I've ever seen required on\n> > other projects I've worked on ... in general, most projects seem to\n> > require that a submit'd patch from an older release be at least tested on\n> > the newest CVS, and with nightly snapshots being created as it is, I\n> > really don't see why such a requirement is a bad thing ...\n>\n> In an ideal world, people would test on CVS but in reality, the patches\n> are usually pretty small and if they fix the problem, we apply them.\n>\n> Seems like a lot of work just to avoid pgindent.\n\nIf they are small, then why is pgindent required? And if they are large,\nis it too much to ask that the person submitting tests the patch to make\nsure its even applicable in the newest snapshot?\n\n\n",
"msg_date": "Thu, 22 Mar 2001 01:08:48 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> > > and most times, those have to be merged into the source tree due to\n> > > extensive changes anyway ... maybe we should just get rid of the use of\n> > > pgindent altogether? its not something that I've ever seen required on\n> > > other projects I've worked on ... in general, most projects seem to\n> > > require that a submit'd patch from an older release be at least tested on\n> > > the newest CVS, and with nightly snapshots being created as it is, I\n> > > really don't see why such a requirement is a bad thing ...\n> >\n> > In an ideal world, people would test on CVS but in reality, the patches\n> > are usually pretty small and if they fix the problem, we apply them.\n> >\n> > Seems like a lot of work just to avoid pgindent.\n> \n> If they are small, then why is pgindent required? And if they are large,\n> is it too much to ask that the person submitting tests the patch to make\n> sure its even applicable in the newest snapshot?\n\nThe problem is that the small ones don't apply cleanly if they don't\nmatch the indenting in the source.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 00:10:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> and most times, those have to be merged into the source tree due to\n> extensive changes anyway ... maybe we should just get rid of the use of\n> pgindent altogether?\n\nI think pgindent is a good thing; the style of different parts of the\ncode would vary too much without it. I'm only unhappy about the risk\nissues of running it at this late stage of the release cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 00:11:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
},
{
"msg_contents": "> The Hermit Hacker <scrappy@hub.org> writes:\n> > and most times, those have to be merged into the source tree due to\n> > extensive changes anyway ... maybe we should just get rid of the use of\n> > pgindent altogether?\n> \n> I think pgindent is a good thing; the style of different parts of the\n> code would vary too much without it. I'm only unhappy about the risk\n> issues of running it at this late stage of the release cycle.\n\nThis is the usual discussion. Some like it, some don't like the risk,\nsome don't like the timing. I don't think we ever came up with a better\ntime than before RC, though I think we could do it a little earlier in\nbeta if people were not holding patches during that period. It is the\nbeta patching folks that we have the most control over.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 00:13:28 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "On Thu, 22 Mar 2001, Bruce Momjian wrote:\n\n> > > > and most times, those have to be merged into the source tree due to\n> > > > extensive changes anyway ... maybe we should just get rid of the use of\n> > > > pgindent altogether? its not something that I've ever seen required on\n> > > > other projects I've worked on ... in general, most projects seem to\n> > > > require that a submit'd patch from an older release be at least tested on\n> > > > the newest CVS, and with nightly snapshots being created as it is, I\n> > > > really don't see why such a requirement is a bad thing ...\n> > >\n> > > In an ideal world, people would test on CVS but in reality, the patches\n> > > are usually pretty small and if they fix the problem, we apply them.\n> > >\n> > > Seems like a lot of work just to avoid pgindent.\n> >\n> > If they are small, then why is pgindent required? And if they are large,\n> > is it too much to ask that the person submitting tests the patch to make\n> > sure its even applicable in the newest snapshot?\n>\n> The problem is that the small ones don't apply cleanly if they don't\n> match the indenting in the source.\n\nbut ... if they are small, manually merging isn't that big of a deal ...\nand if anyone else has been working in that code since release, there is a\nchance it won't merge cleanly ...\n\nQuite frankly, I'm for pgindent after branch and before beta ...\n\n",
"msg_date": "Thu, 22 Mar 2001 01:14:16 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> > The problem is that the small ones don't apply cleanly if they don't\n> > match the indenting in the source.\n> \n> but ... if they are small, manually merging isn't that big of a deal ...\n> and if anyone else has been working in that code since release, there is a\n> chance it won't mergef cleanly ...\n\nYes, they can be manually merged, but that is much more error-prone than\npgindent itself, at least with me patching them. :-)\n\nYes, I agree there is a risk. I was quite scared the first time I ran\nit on the tree and did the commit. At this point, there are very few\nchanges to it, so I feel a little better, and the stuff gets caught\nsomehow if there is a problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 00:17:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010321 21:14] wrote:\n> > The Hermit Hacker <scrappy@hub.org> writes:\n> > > and most times, those have to be merged into the source tree due to\n> > > extensive changes anyway ... maybe we should just get rid of the use of\n> > > pgindent altogether?\n> > \n> > I think pgindent is a good thing; the style of different parts of the\n> > code would vary too much without it. I'm only unhappy about the risk\n> > issues of running it at this late stage of the release cycle.\n> \n> This is the usual discussion. Some like it, some don't like the risk,\n> some don't like the timing. I don't think we ever came up with a better\n> time than before RC, though I think we could do it a little earlier in\n> beta if people were not holding patches during that period. It is the\n> beta patching folks that we have the most control over.\n\nIt seems that you guys are dead set on using this pgindent tool,\nwhich is cool; we'd probably use some indentation tool on the FreeBSD\nsources if there was one that met our code style(9) guidelines.\n\nWith that said, it really scares the crud out of me to see those massive\npgindent runs right before you guys do a release.\n\nIt would make a lot more sense to force a pgindent run after applying\neach patch. This way you don't lose the history.\n\nYou want to be upset with yourself Bruce? Go into a directory and type:\n\ncvs annotate <any file that's been pgindented>\n\ncvs annotate is a really, really handy tool; unfortunately these\nindent runs remove this very useful tool as well as do a major job\nof obfuscating the code changes.\n\nIt's not like you guys have a massive devel team with new people each\nweek that have a steep committer learning curve ahead of them, so\nrunning pgindent as patches are applied should work.\n\nThere's also the argument that a developer's pgindent may force a\ncontributor to resolve conflicts; while this is true, it's also\ntrue that you guys expect diffs to be in context format, comments\nto be in English, function prototypes to be new style, etc., etc.\n\nI think contributors can deal with this.\n\njust my usual 20 cents. :)\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n",
"msg_date": "Wed, 21 Mar 2001 23:21:03 -0800",
"msg_from": "Alfred Perlstein <bright@wintelcom.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [010321 21:14] wrote:\n> > > The Hermit Hacker <scrappy@hub.org> writes:\n> > > > and most times, those have to be merged into the source tree due to\n> > > > extensive changes anyway ... maybe we should just get rid of the use of\n> > > > pgindent altogether?\n> > > \n> > > I think pgindent is a good thing; the style of different parts of the\n> > > code would vary too much without it. I'm only unhappy about the risk\n> > > issues of running it at this late stage of the release cycle.\n> > \n> > This is the usual discussion. Some like it, some don't like the risk,\n> > some don't like the timing. I don't think we ever came up with a better\n> > time than before RC, though I think we could do it a little earlier in\n> > beta if people were not holding patches during that period. It is the\n> > beta patching folks that we have the most control over.\n> \n> It seems that you guys are dead set on using this pgindent tool,\n> this is cool, we'd probably use some indentation tool on the FreeBSD\n> sources if there was one that met our code style(9) guidelines.\n\nYou don't notice the value of pgindent until you have some code that\nhasn't been run through it. For example, ODBC was not run through until\nthis release, and I had a terrible time trying to understand the code\nbecause it didn't _look_ like the rest of the code. Now that pgindent\nis run, it looks more normal, and I am sure that will encourage more\npeople to get in and make changes.\n\nIt gives more of a \"I know where I am\" feeling to the code. It\ncertainly doesn't make anything possible that wasn't possible before,\nbut it does encourage people, and that is pretty powerful.\n\nI can't tell you how many times I have had to fix someone's contributed\ncode that hadn't been through pgindent yet, and the problems I had\ntrying to understand it. I have even copied the file, pgindented it,\nread through the copy, then went back and fixed the original code. (I\ncan't run pgindent during the development cycle, only when I have 100%\ncontrol over the code and outstanding patches people may have.)\n\nAs far as FreeBSD, I guarantee you will see major benefits to community\nparticipation by running the script. You will have to hand-review all\nthe changes after the first run to make sure it didn't whack out some\nweird piece of code, but after that you will be pretty OK. The only\nissue is that the person who takes this on is taking a major risk of\nexposure to ridicule if it fails.\n\nI remember doing it the first time, and being really scared I would\nlethally break the code. Indenting is one of those efforts that has\nthis _big_ risk component when it is performed, and you get the small,\nsteady benefit of doing it for months after.\n\n> With that said, it really scares the crud out of me to see those massive\n> pgindent runs right before you guys do a release.\n> \n> It would make a lot more sense to force a pgindent run after applying\n> each patch. This way you don't lose the history.\n\nYes, we have considered that. The problem there is that sometimes\npeople supply a patch, do some more work on their production source,\nthen supply other patches to fix new problems. If we pgindent for every\nCVS commit, we then are changing the supplied patch, which means any new\npatches that person sends do not match their previous patch, and we get\ninto hand edits again.\n\nI know we ask for context diffs, but anytime a patch applies with some\noffset, if the offset is large, I have to make sure there wasn't some\nother identical context of code that may have been found by the patch\nprogram and applied incorrectly.\n\nA silent patch apply is safe; if it reports a large offset, I have to\ninvestigate.\n\n> You want to be upset with yourself Bruce? Go into a directory and type:\n> \n> cvs annotate <any file that's been pgindented>\n> \n> cvs annotate is a really, really handy tool; unfortunately these\n> indent runs remove this very useful tool as well as do a major job\n> of obfuscating the code changes.\n\nI have never seen that feature. I don't even see it in my cvs manual\npage. It is great, and yes, I clearly whack that out for pgindent runs.\nMaybe pgindent for every commit is the way to go.\n\n> It's not like you guys have a massive devel team with new people each\n> week that have a steep committer learning curve ahead of them, so\n> running pgindent as patches are applied should work.\n\nI imagine we can get CVS to do that automatically. The number of\npatch-on-top-of-patch cases is pretty rare, and it would solve the other\nstated problems.\n\n> There's also the argument that a developer's pgindent may force a\n> contributor to resolve conflicts; while this is true, it's also\n> true that you guys expect diffs to be in context format, comments\n> to be in English, function prototypes to be new style, etc., etc.\n> \n> I think contributors can deal with this.\n\nIf someone submits a massive patch, and we apply it, all patches after\nthat that they give us will not apply cleanly because they still have\nthe old format. The other argument for not doing pgindent on cvs commit\nis that if someone is working in an area of the code, they should be\nable to format that code as they like to see it. They may be working in\nthere for months. Only during release is their _style_ removed.\n\nOn a side note, the idea of having people submit patches only against\ncurrent CVS seems bad to me. If people are running production machines\nand they develop a patch and test it there, I want the patch that works\non their machine and can make sure it applies here. Having them\ndownload CVS and do the merge themselves seems really risky, especially\nbecause they probably can't test the CVS in production. The CVS may not\neven run properly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 09:49:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> cvs annotate is a really, really handy tool, unfortunately these\n> indent runs remove this very useful tool as well as do a major job\n> of obfuscating the code changes.\n\nI think this is a good reason for *not* applying pgindent on an\nincremental basis, but only once per release cycle. That at least\nlets cvs annotate be useful within a cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 09:56:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
},
{
"msg_contents": "> Alfred Perlstein <bright@wintelcom.net> writes:\n> > cvs annotate is a really, really handy tool, unfortunately these\n> > indent runs remove this very useful tool as well as do a major job\n> > of obfuscating the code changes.\n> \n> I think this is a good reason for *not* applying pgindent on an\n> incremental basis, but only once per release cycle. That at least\n> lets cvs annotate be useful within a cycle.\n\nWhat about having it happen on every CVS commit? \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 10:07:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> It seems that you guys are dead set on using this pgindent tool,\n> this is cool, we'd probably use some indentation tool on the FreeBSD\n> sources if there was one that met our code style(9) guidelines.\n\nI would liken running pgindent to having a nice looking store or\nwebsite. No one is going to go to a website or a store only because it\nlooks nice, but having it look nice does keep people coming back.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 10:09:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Alfred Perlstein <bright@wintelcom.net> writes:\n> cvs annotate is a really, really handy tool, unfortunately these\n> indent runs remove this very useful tool as well as do a major job\n> of obfuscating the code changes.\n>> \n>> I think this is a good reason for *not* applying pgindent on an\n>> incremental basis, but only once per release cycle. That at least\n>> lets cvs annotate be useful within a cycle.\n\n> What about having it happen on every CVS commit? \n\nTry that and you'll get all-out war. It's tough enough keeping in sync\nwith the repository already.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 10:36:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Alfred Perlstein <bright@wintelcom.net> writes:\n> > cvs annotate is a really, really handy tool, unfortunately these\n> > indent runs remove this very useful tool as well as do a major job\n> > of obfuscating the code changes.\n> >> \n> >> I think this is a good reason for *not* applying pgindent on an\n> >> incremental basis, but only once per release cycle. That at least\n> >> lets cvs annotate be useful within a cycle.\n> \n> > What about having it happen on every CVS commit? \n> \n> Try that and you'll get all-out war. It's tough enough keeping in sync\n> with the repository already.\n\nOh, well. I tried.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 10:37:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> > Hey, I am open to whatever people want to do. Just remember that we\n> > accumulate lots of patches/development during the slow time before\n> > development, and those patches become harder to apply. Peter E has some\n> > already.\n> \n> This argument seems irrelevant when given the choice of before release or\n> after start of new cycle. From an analytical point of view (and slightly\n> idealized), these are limits approaching the same points in time, from\n> opposite sides. Thus, the second choice seems the infinitely better\n> option.\n\nYes, the bigger problem is people running our most recent stable release\nnot matching the current CVS sources. I think early beta is the time to\ndo this next time. That has the fewest patches crossing over time. In\n7.1, we had the WAL patches still being worked on until near the end,\nbut once Tom put that mega-patch in last week, we could have done it\nthen.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 11:12:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > >> Are there any severely mis-indented files?\n> >\n> > There are some new contrib modules that are nowhere close to our\n> > indent conventions; also a good deal of foreign-key-related stuff\n> > in the parser that needs to be cleaned up. So we should run it.\n> >\n> > I've always felt that it'd be smarter to run pgindent at the start\n> > of a development cycle, not the end, but I've been unable to convince\n> > Bruce of that ...\n>\n> Hey, I am open to whatever people want to do. Just remember that we\n> accumulate lots of patches/development during the slow time before\n> development, and those patches become harder to apply. Peter E has some\n> already.\n\nThis argument seems irrelevant when given the choice of before release or\nafter start of new cycle. From an analytical point of view (and slightly\nidealized), these are limits approaching the same points in time, from\nopposite sides. Thus, the second choice seems the infinitely better\noption.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 17:12:59 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think early beta is the time to\n> do this next time. That has the fewest patches crossing over time.\n\nThat would work too, particularly if you give people a few days' notice.\n(\"Get your patches in now, or expect to have to reformat...\")\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 11:14:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think early beta is the time to\n> > do this next time. That has the fewest patches crossing over time.\n> \n> That would work too, particularly if you give people a few days' notice.\n> (\"Get your patches in now, or expect to have to reformat...\")\n\nYes, I did try that in a previous release and got the, \"Oh, I am still\nworking on X.\" I will be more insistent in the future that we get this\ndone earlier.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 11:16:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010322 06:49] wrote:\n> > * Bruce Momjian <pgman@candle.pha.pa.us> [010321 21:14] wrote:\n> > > > The Hermit Hacker <scrappy@hub.org> writes:\n> > > > > and most times, those have to be merged into the source tree due to\n> > > > > extensive changes anyway ... maybe we should just get rid of the use of\n> > > > > pgindent altogether?\n> > > > \n> > > > I think pgindent is a good thing; the style of different parts of the\n> > > > code would vary too much without it. I'm only unhappy about the risk\n> > > > issues of running it at this late stage of the release cycle.\n> > > \n> > > This is the usual discussion. Some like it, some don't like the risk,\n> > > some don't like the timing. I don't think we ever came up with a better\n> > > time than before RC, though I think we could do it a little earlier in\n> > > beta if people were not holding patches during that period. It is the\n> > > beta patching folks that we have the most control over.\n> > \n> > It seems that you guys are dead set on using this pgindent tool,\n> > this is cool, we'd probably use some indentation tool on the FreeBSD\n> > sources if there was one that met our code style(9) guidelines.\n> \n> You don't notice the value of pgindent until you have some code that\n> hasn't been run through it. For example, ODBC was not run through until\n> this release, and I had a terrible time trying to understand the code\n> because it didn't _look_ like the rest of the code. Now that pgindent\n> is run, it looks more normal, and I am sure that will encourage more\n> people to get in and make changes.\n\nIn FreeBSD we will simply refuse to apply patches that don't at least\nsomewhat adhere to our coding standards. Word of mouth keeps people\nsubmitting patches that are correct.\n\n> As far as FreeBSD, I guarantee you will see major benefits to community\n> participation by running the script. 
You will have to hand-review all\n> the changes after the first run to make sure it didn't whack out some\n> weird piece of code, but after that you will be pretty OK. The only\n> issue is that the person who takes this on is taking a major risk of\n> exposure to ridicule if it fails.\n\nThis scares me to death, has indent done this before to you?\n\nIf it has, here's a nifty trick someone started doing on our project,\nwhenever he came across some file that was terribly formatted (*) he'd\ncompile it as is, then copy the object files somewhere else, reformat\nit and then recompile. He'd then use the md5(1) command to verify\nthat in reality nothing had changed.\n\n> > With that said, it really scares the crud out of me to see those massive\n> > pg_indent runs right before you guys do a release.\n> > \n> > It would make a lot more sense to force a pgindent run after applying\n> > each patch. This way you don't lose the history.\n> \n> Yes, we have considered that. The problem there is that sometimes\n> people supply a patch, do some more work on their production source,\n> then supply other patches to fix new problems. If we pgindent for every\n> CVS commit, we then are changing the supplied patch, which means any new\n> patches that person sends do not match their previous patch, and we get\n> into hand edits again.\n> \n> I know we ask for context diffs, but anytime a patch applies with some\n> offset, if the offset is large, I have to make sure there wasn't some\n> other identical context of code that may have been found by the patch\n> program and applied incorrectly.\n> \n> A silent patch apply is safe; if it reports a large offset, I have to\n> investigate.\n\nThis really goes without saying. 
I think it would be cool to set up\na site one day that explains the proper way to contribute to each\nproject, I still occasionally get smacked upside the head for doing\nsomething like putting an RCS/CVS tag in the wrong spot in a file.\n:)\n\n> > You want to be upset with yourself Bruce? Go into a directory and type:\n> > \n> > cvs annotate <any file that's been pgindented>\n> > \n> > cvs annotate is a really, really handy tool, unfortunately these\n> > indent runs remove this very useful tool as well as do a major job\n> > of obfuscating the code changes.\n> \n> I have never seen that feature. I don't even see it in my cvs manual\n> page. It is great, and yes, I clearly whack that out for pgindent runs.\n> Maybe pgindent for every commit is the way to go.\n\nyeah. :(\n\n> > It's not like you guys have a massive devel team with new people each\n> > week that have a steep committer learning curve ahead of them, making\n> > pgindent as patches are applied should work.\n> \n> I imagine we can get CVS to do that automatically. The number of patches\n> on top of other patches is pretty rare and it would solve the other\n> stated problems.\n> \n> \n> > There's also the argument that a developer's pgindent may force a\n> > contributor to resolve conflicts, while this is true, it's also\n> > true that you guys expect diffs to be in context format, comments\n> > to be in English, function prototypes to be new style, etc, etc..\n> > \n> > I think contributors can deal with this.\n> \n> If someone submits a massive patch, and we apply it, all patches after\n> that that they give us will not apply cleanly because they still have\n> the old format. The other argument for not doing pgindent on cvs commit\n> is that if someone is working in an area of the code, they should be\n> able to format that code as they like to see it. They may be working in\n> there for months. Only during release is their _style_ removed.\n\nAnd how exactly does that make sense? 
You guys have a long beta period,\nthis leaves people that sit in their own little corner of the code (ODBC\nfor instance) stuck with a major conflict resolution after release when\nthey go to add patches they held off on during beta. You also suddenly\nmake the code look completely foreign to the contributor... what if he\nhas a major issue with the style of pgindent? It would make a lot more\nsense to explain this up front...\n \"Say, Alfred, I noticed you use two space indents, that's gross,\n have you run your code through pgindent as explained in the\n contributor's guide at http://www.postgresql.org/faq/cont....?\"\n\n> On a side note, the idea of having people submit patches only against\n> current CVS seems bad to me. If people are running production machines\n> and they develop a patch and test it there, I want the patch that works\n> on their machine and can make sure it applies here. Having them\n> download CVS and do the merge themselves seems really risky, especially\n> because they probably can't test the CVS in production. The CVS may not\n> even run properly.\n\nWell that's true, but how confident are you that the patch applied to\nthe CVS version has the same effect as the -release version?\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n",
"msg_date": "Thu, 22 Mar 2001 09:23:41 -0800",
"msg_from": "Alfred Perlstein <bright@wintelcom.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> > You don't notice the value of pgindent until you have some code that\n> > hasn't been run through it. For example, ODBC was not run through until\n> > this release, and I had a terrible time trying to understand the code\n> > because it didn't _look_ like the rest of the code. Now that pgindent\n> > is run, it looks more normal, and I am sure that will encourage more\n> > people to get in and make changes.\n> \n> In FreeBSD we will simply refuse to apply patches that don't at least\n> somewhat adhere to our coding standards. Word of mouth keeps people\n> submitting patches that are correct.\n\nYes, but most individuals can't be bothered to do that all the time, and\nautomated tools make it much better.\n\nAlso, we aren't too big on rejecting patches because they don't meet our\nindentation standards. After they get involved, they start to follow\nit, but usually it takes a while for that to happen, and if you enforce\nit right away, you risk losing that person.\n\n\n> > As far as FreeBSD, I guarantee you will see major benefits to community\n> > participation by running the script. You will have to hand-review all\n> > the changes after the first run to make sure it didn't whack out some\n> > weird piece of code, but after that you will be pretty OK. The only\n> > issue is that the person who takes this on is taking a major risk of\n> > exposure to ridicule if it fails.\n> \n> This scares me to death, has indent done this before to you?\n\nNo, not really. It is more the size of the diff that scares you.\n\n> If it has, here's a nifty trick someone started doing on our project,\n> whenever he came across some file that was terribly formatted (*) he'd\n> compile it as is, then copy the object files somewhere else, reformat\n> it and then recompile. He'd then use the md5(1) command to verify\n> that in reality nothing had changed.\n\nYes, I have considered taking the 'postgres' binary and doing a cmp\nagainst the two versions. 
I think I did that the first time I ran\npgindent. You need to skip over the object timestamp headers, but other\nthan that, it should work.\n\n> > I know we ask for context diffs, but anytime a patch applies with some\n> > offset, if the offset is large, I have to make sure there wasn't some\n> > other identical context of code that may have been found by the patch\n> > program and applied incorrectly.\n> > \n> > A silent patch apply is safe; if it reports a large offset, I have to\n> > investigate.\n> \n> This really goes without saying. I think it would be cool to set up\n> a site one day that explains the proper way to contribute to each\n> project, I still occasionally get smacked upside the head for doing\n> something like putting an RCS/CVS tag in the wrong spot in a file.\n> :)\n\nWe do have the developers FAQ, which goes into some detail about it,\nquestion #1.\n\n> > I have never seen that feature. I don't even see it in my cvs manual\n> > page. It is great, and yes, I clearly whack that out for pgindent runs.\n> > Maybe pgindent for every commit is the way to go.\n> \n> yeah. :(\n\n\n> > If someone submits a massive patch, and we apply it, all patches after\n> > that that they give us will not apply cleanly because they still have\n> > the old format. The other argument for not doing pgindent on cvs commit\n> > is that if someone is working in an area of the code, they should be\n> > able to format that code as they like to see it. They may be working in\n> > there for months. Only during release is their _style_ removed.\n> \n> And how exactly does that make sense? You guys have a long beta period,\n> this leaves people that sit in their own little corner of the code (ODBC\n> for instance) stuck with a major conflict resolution after release when\n> they go to add patches they held off on during beta. You also suddenly\n> make the code look completely foreign to the contributor... what if he\n> has a major issue with the style of pgindent? 
It would make a lot more\n> sense to explain this up front...\n> \"Say, Alfred, I noticed you use two space indents, that's gross,\n> have you run your code through pgindent as explained in the\n> contributor's guide at http://www.postgresql.org/faq/cont....?\"\n\nBut we have the FAQ item to explain the format. I don't think we want\npeople personally formatting code to meet their style between releases\nbecause it discourages others from looking at the code. We sort of give\nthem their own format for the release cycle, then rein it back in just\nbefore release. Seems like a good balance to me.\n\n> \n> > On a side note, the idea of having people submit patches only against\n> > current CVS seems bad to me. If people are running production machines\n> > and they develop a patch and test it there, I want the patch that works\n> > on their machine and can make sure it applies here. Having them\n> > download CVS and do the merge themselves seems really risky, especially\n> > because they probably can't test the CVS in production. The CVS may not\n> > even run properly.\n> \n> Well that's true, but how confident are you that the patch applied to\n> the CVS version has the same effect as the -release version?\n\nIf there is no offset change, that means the lines are still in the\nexact same spot, and no lines were added/removed above, and the code\naround the patch hasn't been changed. There is a chance that stuff\ncould slip in around that, but we have the beta test cycle and\nreviewer's eyes to keep those in check.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 12:33:13 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010322 07:12] wrote:\n> > It seems that you guys are dead set on using this pgindent tool,\n> > this is cool, we'd probably use some indentation tool on the FreeBSD\n> > sources if there was one that met our code style(9) guidelines.\n> \n> I would liken running pgindent to having a nice looking store or\n> website. No one is going to go to a website or a store only because it\n> looks nice, but having it look nice does keep people coming back.\n\nI'm not saying I don't like pgindent, I'm saying I don't like\npgindent's effect on the CVS history.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\nInstead of asking why a piece of software is using \"1970s technology,\"\nstart asking why software is ignoring 30 years of accumulated wisdom.\n",
"msg_date": "Thu, 22 Mar 2001 09:36:17 -0800",
"msg_from": "Alfred Perlstein <bright@wintelcom.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [010322 07:12] wrote:\n> > > It seems that you guys are dead set on using this pgindent tool,\n> > > this is cool, we'd probably use some indentation tool on the FreeBSD\n> > > sources if there was one that met our code style(9) guidelines.\n> > \n> > I would liken running pgindent to having a nice looking store or\n> > website. No one is going to go to a website or a store only because it\n> > looks nice, but having it look nice does keep people coming back.\n> \n> I'm not saying I don't like pgindent, I'm saying I don't like\n> pgindent's effect on the CVS history.\n\nHow do we get around this problem?\n\nAlso, I now remember that the problem with CVS auto-pgindenting, as Tom\nmentioned, is that once you cvs commit, you would have to cvs update\nagain because your source tree wouldn't match cvs anymore.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 12:54:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> You don't notice the value of pgindent until you have some code that\n> hasn't been run through it. For example, ODBC was not run through until\n> this release, and I had a terrible time trying to understand the code\n> because it didn't _look_ like the rest of the code. Now that pgindent\n> is run, it looks more normal, and I am sure that will encourage more\n> people to get in and make changes.\n> \n\nI see now the following comment in interfaces/odbc/statement.c.\nThough it's mine (probably), it's hard for me to read.\nPlease tell me how to prevent pgindent from changing\ncomments.\n\n /*\n * Basically we don't have to begin a transaction in autocommit mode\n * because Postgres backend runs in autocomit mode. We issue \"BEGIN\"\n * in the following cases. 1) we use declare/fetch and the statement\n * is SELECT (because declare/fetch must be called in a transaction).\n * 2) we are in autocommit off state and the statement isn't of type\n * OTHER.\n */\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 23 Mar 2001 09:37:39 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Please tell me how to prevent pgindent from changing\n> comments.\n\nPut dashes at the start and end of the comment block, eg\n\n\t/*----------\n\t * comment here\n\t *----------\n\t */\n\nI'm not sure exactly how many dashes are needed --- I usually use ten as\nshown above.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 19:38:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
},
{
"msg_contents": "I have applied the following patch to fix wrapping of comparisons in\ncomment text, for Tom Lane.\n\nIf others find comments that were mis-wrapped, I would be glad to fix\nthem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/spi/refint.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/contrib/spi/refint.c,v\nretrieving revision 1.13\ndiff -c -r1.13 refint.c\n*** contrib/spi/refint.c\t2000/12/03 20:45:31\t1.13\n--- contrib/spi/refint.c\t2001/03/23 04:37:25\n***************\n*** 399,417 ****\n \t\t{\n \t\t\trelname = args2[0];\n \n! \t\t\t/*\n! \t\t\t * For 'R'estrict action we construct SELECT query - SELECT 1\n! \t\t\t * FROM _referencing_relation_ WHERE Fkey1 = $1 [AND Fkey2 =\n! \t\t\t * $2 [...]] - to check is tuple referenced or not.\n \t\t\t */\n \t\t\tif (action == 'r')\n \n \t\t\t\tsprintf(sql, \"select 1 from %s where \", relname);\n \n! \t\t\t/*\n! \t\t\t * For 'C'ascade action we construct DELETE query - DELETE\n! \t\t\t * FROM _referencing_relation_ WHERE Fkey1 = $1 [AND Fkey2 =\n! \t\t\t * $2 [...]] - to delete all referencing tuples.\n \t\t\t */\n \n \t\t\t/*\n--- 399,427 ----\n \t\t{\n \t\t\trelname = args2[0];\n \n! \t\t\t/*---------\n! \t\t\t * For 'R'estrict action we construct SELECT query:\n! \t\t\t *\n! \t\t\t * SELECT 1\n! \t\t\t *\tFROM _referencing_relation_\n! \t\t\t *\tWHERE Fkey1 = $1 [AND Fkey2 = $2 [...]]\n! \t\t\t *\n! \t\t\t * to check is tuple referenced or not.\n! \t\t\t *---------\n \t\t\t */\n \t\t\tif (action == 'r')\n \n \t\t\t\tsprintf(sql, \"select 1 from %s where \", relname);\n \n! \t\t\t/*---------\n! \t\t\t * For 'C'ascade action we construct DELETE query\n! \t\t\t *\n! \t\t\t *\tDELETE\n! \t\t\t *\tFROM _referencing_relation_\n! 
\t\t\t *\tWHERE Fkey1 = $1 [AND Fkey2 = $2 [...]]\n! \t\t\t *\n! \t\t\t * to delete all referencing tuples.\n! \t\t\t *---------\n \t\t\t */\n \n \t\t\t/*\nIndex: src/backend/access/gist/gistscan.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/gist/gistscan.c,v\nretrieving revision 1.32\ndiff -c -r1.32 gistscan.c\n*** src/backend/access/gist/gistscan.c\t2001/03/22 03:59:12\t1.32\n--- src/backend/access/gist/gistscan.c\t2001/03/23 04:37:26\n***************\n*** 143,151 ****\n \t\t\tfor (i = 0; i < s->numberOfKeys; i++)\n \t\t\t{\n \n! \t\t\t\t/*\n \t\t\t\t * s->keyData[i].sk_procedure =\n! \t\t\t\t * index_getprocid(s->relation, 1, GIST_CONSISTENT_PROC);\n \t\t\t\t */\n \t\t\t\ts->keyData[i].sk_procedure\n \t\t\t\t\t= RelationGetGISTStrategy(s->relation, s->keyData[i].sk_attno,\n--- 143,152 ----\n \t\t\tfor (i = 0; i < s->numberOfKeys; i++)\n \t\t\t{\n \n! \t\t\t\t/*----------\n \t\t\t\t * s->keyData[i].sk_procedure =\n! \t\t\t\t * \t\tindex_getprocid(s->relation, 1, GIST_CONSISTENT_PROC);\n! \t\t\t\t *----------\n \t\t\t\t */\n \t\t\t\ts->keyData[i].sk_procedure\n \t\t\t\t\t= RelationGetGISTStrategy(s->relation, s->keyData[i].sk_attno,\nIndex: src/backend/access/hash/hashsearch.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/hash/hashsearch.c,v\nretrieving revision 1.25\ndiff -c -r1.25 hashsearch.c\n*** src/backend/access/hash/hashsearch.c\t2001/01/24 19:42:47\t1.25\n--- src/backend/access/hash/hashsearch.c\t2001/03/23 04:37:26\n***************\n*** 334,342 ****\n \t\t\t\twhile (offnum > maxoff)\n \t\t\t\t{\n \n! \t\t\t\t\t/*\n! \t\t\t\t\t * either this page is empty (maxoff ==\n! 
\t\t\t\t\t * InvalidOffsetNumber) or we ran off the end.\n \t\t\t\t\t */\n \t\t\t\t\t_hash_readnext(rel, &buf, &page, &opaque);\n \t\t\t\t\tif (BufferIsInvalid(buf))\n--- 334,344 ----\n \t\t\t\twhile (offnum > maxoff)\n \t\t\t\t{\n \n! \t\t\t\t\t/*--------\n! \t\t\t\t\t * either this page is empty\n! \t\t\t\t\t * (maxoff == InvalidOffsetNumber)\n! \t\t\t\t\t * or we ran off the end.\n! \t\t\t\t\t *--------\n \t\t\t\t\t */\n \t\t\t\t\t_hash_readnext(rel, &buf, &page, &opaque);\n \t\t\t\t\tif (BufferIsInvalid(buf))\n***************\n*** 382,390 ****\n \t\t\t\twhile (offnum < FirstOffsetNumber)\n \t\t\t\t{\n \n! \t\t\t\t\t/*\n! \t\t\t\t\t * either this page is empty (offnum ==\n! \t\t\t\t\t * InvalidOffsetNumber) or we ran off the end.\n \t\t\t\t\t */\n \t\t\t\t\t_hash_readprev(rel, &buf, &page, &opaque);\n \t\t\t\t\tif (BufferIsInvalid(buf))\n--- 384,394 ----\n \t\t\t\twhile (offnum < FirstOffsetNumber)\n \t\t\t\t{\n \n! \t\t\t\t\t/*---------\n! \t\t\t\t\t * either this page is empty\n! \t\t\t\t\t * (offnum == InvalidOffsetNumber)\n! \t\t\t\t\t * or we ran off the end.\n! \t\t\t\t\t *---------\n \t\t\t\t\t */\n \t\t\t\t\t_hash_readprev(rel, &buf, &page, &opaque);\n \t\t\t\t\tif (BufferIsInvalid(buf))\nIndex: src/backend/access/heap/tuptoaster.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/heap/tuptoaster.c,v\nretrieving revision 1.19\ndiff -c -r1.19 tuptoaster.c\n*** src/backend/access/heap/tuptoaster.c\t2001/03/22 06:16:07\t1.19\n--- src/backend/access/heap/tuptoaster.c\t2001/03/23 04:37:26\n***************\n*** 458,466 ****\n \t\tint32\t\tbiggest_size = MAXALIGN(sizeof(varattrib));\n \t\tDatum\t\told_value;\n \n! \t\t/*\n! \t\t * Search for the biggest yet inlined attribute with attstorage =\n! \t\t * 'x' or 'e'\n \t\t */\n \t\tfor (i = 0; i < numAttrs; i++)\n \t\t{\n--- 458,467 ----\n \t\tint32\t\tbiggest_size = MAXALIGN(sizeof(varattrib));\n \t\tDatum\t\told_value;\n \n! 
\t\t/*------\n! \t\t * Search for the biggest yet inlined attribute with\n! \t\t * attstorage equals 'x' or 'e'\n! \t\t *------\n \t\t */\n \t\tfor (i = 0; i < numAttrs; i++)\n \t\t{\n***************\n*** 572,580 ****\n \t\tint32\t\tbiggest_size = MAXALIGN(sizeof(varattrib));\n \t\tDatum\t\told_value;\n \n! \t\t/*\n! \t\t * Search for the biggest yet inlined attribute with attstorage =\n! \t\t * 'm'\n \t\t */\n \t\tfor (i = 0; i < numAttrs; i++)\n \t\t{\n--- 573,582 ----\n \t\tint32\t\tbiggest_size = MAXALIGN(sizeof(varattrib));\n \t\tDatum\t\told_value;\n \n! \t\t/*--------\n! \t\t * Search for the biggest yet inlined attribute with\n! \t\t * attstorage = 'm'\n! \t\t *--------\n \t\t */\n \t\tfor (i = 0; i < numAttrs; i++)\n \t\t{\nIndex: src/backend/access/nbtree/nbtsearch.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/nbtree/nbtsearch.c,v\nretrieving revision 1.65\ndiff -c -r1.65 nbtsearch.c\n*** src/backend/access/nbtree/nbtsearch.c\t2001/03/22 06:16:07\t1.65\n--- src/backend/access/nbtree/nbtsearch.c\t2001/03/23 04:37:27\n***************\n*** 584,591 ****\n \n \t/*\n \t * At this point we are positioned at the first item >= scan key, or\n! \t * possibly at the end of a page on which all the existing items are <\n! \t * scan key and we know that everything on later pages is >= scan key.\n \t * We could step forward in the latter case, but that'd be a waste of\n \t * time if we want to scan backwards. So, it's now time to examine\n \t * the scan strategy to find the exact place to start the scan.\n--- 584,593 ----\n \n \t/*\n \t * At this point we are positioned at the first item >= scan key, or\n! \t * possibly at the end of a page on which all the existing items are \n! \t * greater than the scan key and we know that everything on later pages\n! \t * is less than or equal to scan key.\n! 
*\n \t * We could step forward in the latter case, but that'd be a waste of\n \t * time if we want to scan backwards. So, it's now time to examine\n \t * the scan strategy to find the exact place to start the scan.\nIndex: src/backend/access/nbtree/nbtutils.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/access/nbtree/nbtutils.c,v\nretrieving revision 1.43\ndiff -c -r1.43 nbtutils.c\n*** src/backend/access/nbtree/nbtutils.c\t2001/03/22 03:59:15\t1.43\n--- src/backend/access/nbtree/nbtutils.c\t2001/03/23 04:37:27\n***************\n*** 412,419 ****\n \t\t\tif (DatumGetBool(test))\n \t\t\t\txform[j].sk_argument = cur->sk_argument;\n \t\t\telse if (j == (BTEqualStrategyNumber - 1))\n! \t\t\t\tso->qual_ok = false;\t/* key == a && key == b, but a !=\n! \t\t\t\t\t\t\t\t\t\t * b */\n \t\t}\n \t\telse\n \t\t{\n--- 412,419 ----\n \t\t\tif (DatumGetBool(test))\n \t\t\t\txform[j].sk_argument = cur->sk_argument;\n \t\t\telse if (j == (BTEqualStrategyNumber - 1))\n! \t\t\t\tso->qual_ok = false;\n! \t\t\t/* key == a && key == b, but a != b */\n \t\t}\n \t\telse\n \t\t{\nIndex: src/backend/commands/command.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v\nretrieving revision 1.124\ndiff -c -r1.124 command.c\n*** src/backend/commands/command.c\t2001/03/22 06:16:11\t1.124\n--- src/backend/commands/command.c\t2001/03/23 04:37:28\n***************\n*** 1034,1044 ****\n \tScanKeyEntryInitialize(&scankeys[0], 0x0, Anum_pg_attrdef_adrelid,\n \t\t\t\t\t\t F_OIDEQ, ObjectIdGetDatum(myrelid));\n \n! \t/*\n \t * Oops pg_attrdef doesn't have (adrelid,adnum) index\n! \t * ScanKeyEntryInitialize(&scankeys[1], 0x0, Anum_pg_attrdef_adnum,\n! \t * F_INT2EQ, Int16GetDatum(attnum)); sysscan =\n! 
\t * systable_beginscan(adrel, AttrDefaultIndex, 2, scankeys);\n \t */\n \tsysscan = systable_beginscan(adrel, AttrDefaultIndex, 1, scankeys);\n \twhile (HeapTupleIsValid(tup = systable_getnext(sysscan)))\n--- 1034,1046 ----\n \tScanKeyEntryInitialize(&scankeys[0], 0x0, Anum_pg_attrdef_adrelid,\n \t\t\t\t\t\t F_OIDEQ, ObjectIdGetDatum(myrelid));\n \n! \t/*--------\n \t * Oops pg_attrdef doesn't have (adrelid,adnum) index\n! \t *\n! \t *\tScanKeyEntryInitialize(&scankeys[1], 0x0, Anum_pg_attrdef_adnum,\n! \t * \t\t\t\t\t\t\t\tF_INT2EQ, Int16GetDatum(attnum));\n! \t *\tsysscan = systable_beginscan(adrel, AttrDefaultIndex, 2, scankeys);\n! \t *--------\n \t */\n \tsysscan = systable_beginscan(adrel, AttrDefaultIndex, 1, scankeys);\n \twhile (HeapTupleIsValid(tup = systable_getnext(sysscan)))\nIndex: src/backend/commands/_deadcode/version.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/_deadcode/version.c,v\nretrieving revision 1.25\ndiff -c -r1.25 version.c\n*** src/backend/commands/_deadcode/version.c\t2001/01/24 19:42:53\t1.25\n--- src/backend/commands/_deadcode/version.c\t2001/03/23 04:37:29\n***************\n*** 77,85 ****\n eval_as_new_xact(char *query)\n {\n \n! \t/*\n \t * WARNING! do not uncomment the following lines WARNING!\n! \t * CommitTransactionCommand(); StartTransactionCommand();\n \t */\n \tCommandCounterIncrement();\n \tpg_exec_query(query);\n--- 77,88 ----\n eval_as_new_xact(char *query)\n {\n \n! \t/*------\n \t * WARNING! do not uncomment the following lines WARNING!\n! \t *\n! \t *\tCommitTransactionCommand();\n! \t *\tStartTransactionCommand();\n! 
\t *------\n \t */\n \tCommandCounterIncrement();\n \tpg_exec_query(query);\nIndex: src/backend/executor/execQual.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/executor/execQual.c,v\nretrieving revision 1.84\ndiff -c -r1.84 execQual.c\n*** src/backend/executor/execQual.c\t2001/03/22 03:59:26\t1.84\n--- src/backend/executor/execQual.c\t2001/03/23 04:37:30\n***************\n*** 1499,1506 ****\n \t * and another array that holds the isDone status for each targetlist\n \t * item. The isDone status is needed so that we can iterate,\n \t * generating multiple tuples, when one or more tlist items return\n! \t * sets. (We expect the caller to call us again if we return *isDone\n! \t * = ExprMultipleResult.)\n \t */\n \tif (nodomains > NPREALLOCDOMAINS)\n \t{\n--- 1499,1507 ----\n \t * and another array that holds the isDone status for each targetlist\n \t * item. The isDone status is needed so that we can iterate,\n \t * generating multiple tuples, when one or more tlist items return\n! \t * sets. (We expect the caller to call us again if we return:\n! \t *\n! \t *\tisDone = ExprMultipleResult.)\n \t */\n \tif (nodomains > NPREALLOCDOMAINS)\n \t{\nIndex: src/backend/executor/nodeLimit.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/executor/nodeLimit.c,v\nretrieving revision 1.5\ndiff -c -r1.5 nodeLimit.c\n*** src/backend/executor/nodeLimit.c\t2001/03/22 06:16:13\t1.5\n--- src/backend/executor/nodeLimit.c\t2001/03/23 04:37:30\n***************\n*** 79,86 ****\n \t\t * tuple in the offset region before we can return NULL.\n \t\t * Otherwise we won't be correctly aligned to start going forward\n \t\t * again. So, although you might think we can quit when position\n! \t\t * = offset + 1, we have to fetch a subplan tuple first, and then\n! 
\t\t * exit when position = offset.\n \t\t */\n \t\tif (ScanDirectionIsForward(direction))\n \t\t{\n--- 79,86 ----\n \t\t * tuple in the offset region before we can return NULL.\n \t\t * Otherwise we won't be correctly aligned to start going forward\n \t\t * again. So, although you might think we can quit when position\n! \t\t * equals offset + 1, we have to fetch a subplan tuple first, and\n! \t\t * then exit when position = offset.\n \t\t */\n \t\tif (ScanDirectionIsForward(direction))\n \t\t{\nIndex: src/backend/executor/nodeMergejoin.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/executor/nodeMergejoin.c,v\nretrieving revision 1.44\ndiff -c -r1.44 nodeMergejoin.c\n*** src/backend/executor/nodeMergejoin.c\t2001/03/22 06:16:13\t1.44\n--- src/backend/executor/nodeMergejoin.c\t2001/03/23 04:37:31\n***************\n*** 240,249 ****\n \t\t\tbreak;\n \t\t}\n \n! \t\t/*\n \t\t * ok, the compare clause failed so we test if the keys are\n! \t\t * equal... if key1 != key2, we return false. otherwise key1 =\n! \t\t * key2 so we move on to the next pair of keys.\n \t\t */\n \t\tconst_value = ExecEvalExpr((Node *) lfirst(eqclause),\n \t\t\t\t\t\t\t\t econtext,\n--- 240,250 ----\n \t\t\tbreak;\n \t\t}\n \n! \t\t/*-----------\n \t\t * ok, the compare clause failed so we test if the keys are\n! \t\t * equal... if key1 != key2, we return false. otherwise\n! \t\t * key1 = key2 so we move on to the next pair of keys.\n! 
\t\t *-----------\n \t\t */\n \t\tconst_value = ExecEvalExpr((Node *) lfirst(eqclause),\n \t\t\t\t\t\t\t\t econtext,\nIndex: src/backend/optimizer/path/clausesel.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/optimizer/path/clausesel.c,v\nretrieving revision 1.42\ndiff -c -r1.42 clausesel.c\n*** src/backend/optimizer/path/clausesel.c\t2001/03/22 03:59:34\t1.42\n--- src/backend/optimizer/path/clausesel.c\t2001/03/23 04:37:31\n***************\n*** 297,305 ****\n \t\t\telse\n \t\t\t{\n \n! \t\t\t\t/*\n! \t\t\t\t * We have found two similar clauses, such as x < y AND x\n! \t\t\t\t * < z. Keep only the more restrictive one.\n \t\t\t\t */\n \t\t\t\tif (rqelem->lobound > s2)\n \t\t\t\t\trqelem->lobound = s2;\n--- 297,307 ----\n \t\t\telse\n \t\t\t{\n \n! \t\t\t\t/*------\n! \t\t\t\t * We have found two similar clauses, such as\n! \t\t\t\t * x < y AND x < z.\n! \t\t\t\t * Keep only the more restrictive one.\n! \t\t\t\t *------\n \t\t\t\t */\n \t\t\t\tif (rqelem->lobound > s2)\n \t\t\t\t\trqelem->lobound = s2;\n***************\n*** 315,323 ****\n \t\t\telse\n \t\t\t{\n \n! \t\t\t\t/*\n! \t\t\t\t * We have found two similar clauses, such as x > y AND x\n! \t\t\t\t * > z. Keep only the more restrictive one.\n \t\t\t\t */\n \t\t\t\tif (rqelem->hibound > s2)\n \t\t\t\t\trqelem->hibound = s2;\n--- 317,327 ----\n \t\t\telse\n \t\t\t{\n \n! \t\t\t\t/*------\n! \t\t\t\t * We have found two similar clauses, such as\n! \t\t\t\t * x > y AND x > z.\n! \t\t\t\t * Keep only the more restrictive one.\n! 
\t\t\t\t *------\n \t\t\t\t */\n \t\t\t\tif (rqelem->hibound > s2)\n \t\t\t\t\trqelem->hibound = s2;\nIndex: src/backend/optimizer/path/indxpath.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/optimizer/path/indxpath.c,v\nretrieving revision 1.103\ndiff -c -r1.103 indxpath.c\n*** src/backend/optimizer/path/indxpath.c\t2001/03/22 03:59:35\t1.103\n--- src/backend/optimizer/path/indxpath.c\t2001/03/23 04:37:33\n***************\n*** 1986,1994 ****\n \texpr = make_opclause(op, leftop, (Var *) con);\n \tresult = makeList1(expr);\n \n! \t/*\n! \t * If we can create a string larger than the prefix, we can say \"x <\n! \t * greaterstr\".\n \t */\n \tgreaterstr = make_greater_string(prefix, datatype);\n \tif (greaterstr)\n--- 1986,1995 ----\n \texpr = make_opclause(op, leftop, (Var *) con);\n \tresult = makeList1(expr);\n \n! \t/*-------\n! \t * If we can create a string larger than the prefix, we can say\n! \t * \"x < greaterstr\".\n! \t *-------\n \t */\n \tgreaterstr = make_greater_string(prefix, datatype);\n \tif (greaterstr)\nIndex: src/backend/rewrite/rewriteDefine.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/rewrite/rewriteDefine.c,v\nretrieving revision 1.60\ndiff -c -r1.60 rewriteDefine.c\n*** src/backend/rewrite/rewriteDefine.c\t2001/03/22 06:16:16\t1.60\n--- src/backend/rewrite/rewriteDefine.c\t2001/03/23 04:37:33\n***************\n*** 130,139 ****\n \n #ifdef NOT_USED\n \n! \t/*\n \t * on retrieve to class.attribute do instead nothing is converted to\n! \t * 'on retrieve to class.attribute do instead retrieve (attribute =\n! \t * NULL)' --- this is also a terrible hack that works well -- glass\n \t */\n \tif (is_instead && !*action && eslot_string && event_type == CMD_SELECT)\n \t{\n--- 130,143 ----\n \n #ifdef NOT_USED\n \n! 
\t/*---------\n \t * on retrieve to class.attribute do instead nothing is converted to\n! \t * 'on retrieve to class.attribute do instead:\n! \t *\n! \t *\t retrieve (attribute = NULL)'\n! \t *\n! \t * this is also a terrible hack that works well -- glass\n! \t *---------\n \t */\n \tif (is_instead && !*action && eslot_string && event_type == CMD_SELECT)\n \t{\nIndex: src/backend/storage/ipc/ipc.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/storage/ipc/ipc.c,v\nretrieving revision 1.65\ndiff -c -r1.65 ipc.c\n*** src/backend/storage/ipc/ipc.c\t2001/03/22 06:16:16\t1.65\n--- src/backend/storage/ipc/ipc.c\t2001/03/23 04:37:34\n***************\n*** 404,410 ****\n \t * and entering the semop() call. If a cancel/die interrupt occurs in\n \t * that window, we would fail to notice it until after we acquire the\n \t * lock (or get another interrupt to escape the semop()). We can\n! \t * avoid this problem by temporarily setting ImmediateInterruptOK =\n \t * true before we do CHECK_FOR_INTERRUPTS; then, a die() interrupt in\n \t * this interval will execute directly. However, there is a huge\n \t * pitfall: there is another window of a few instructions after the\n--- 404,410 ----\n \t * and entering the semop() call. If a cancel/die interrupt occurs in\n \t * that window, we would fail to notice it until after we acquire the\n \t * lock (or get another interrupt to escape the semop()). We can\n! \t * avoid this problem by temporarily setting ImmediateInterruptOK to\n \t * true before we do CHECK_FOR_INTERRUPTS; then, a die() interrupt in\n \t * this interval will execute directly. 
However, there is a huge\n \t * pitfall: there is another window of a few instructions after the\nIndex: src/backend/storage/ipc/sinval.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/storage/ipc/sinval.c,v\nretrieving revision 1.28\ndiff -c -r1.28 sinval.c\n*** src/backend/storage/ipc/sinval.c\t2001/03/22 03:59:45\t1.28\n--- src/backend/storage/ipc/sinval.c\t2001/03/23 04:37:34\n***************\n*** 319,329 ****\n \t\t\t\txid < FirstTransactionId || xid >= snapshot->xmax)\n \t\t\t{\n \n! \t\t\t\t/*\n! \t\t\t\t * Seems that there is no sense to store xid >=\n! \t\t\t\t * snapshot->xmax (what we got from ReadNewTransactionId\n! \t\t\t\t * above) in snapshot->xip - we just assume that all xacts\n \t\t\t\t * with such xid-s are running and may be ignored.\n \t\t\t\t */\n \t\t\t\tcontinue;\n \t\t\t}\n--- 319,331 ----\n \t\t\t\txid < FirstTransactionId || xid >= snapshot->xmax)\n \t\t\t{\n \n! \t\t\t\t/*--------\n! \t\t\t\t * Seems that there is no sense to store\n! \t\t\t\t * \t\txid >= snapshot->xmax\n! \t\t\t\t * (what we got from ReadNewTransactionId above)\n! \t\t\t\t * in snapshot->xip. We just assume that all xacts\n \t\t\t\t * with such xid-s are running and may be ignored.\n+ \t\t\t\t *--------\n \t\t\t\t */\n \t\t\t\tcontinue;\n \t\t\t}\nIndex: src/backend/utils/adt/formatting.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/formatting.c,v\nretrieving revision 1.35\ndiff -c -r1.35 formatting.c\n*** src/backend/utils/adt/formatting.c\t2001/03/22 06:16:17\t1.35\n--- src/backend/utils/adt/formatting.c\t2001/03/23 04:37:36\n***************\n*** 2846,2854 ****\n \telse if (tmfc->yy)\n \t{\n \n! \t\t/*\n! \t\t * 2-digit year: '00' ... '69'\t= 2000 ... 2069 '70' ... '99' =\n! \t\t * 1970 ... 1999\n \t\t */\n \t\ttm->tm_year = tmfc->yy;\n \n--- 2846,2856 ----\n \telse if (tmfc->yy)\n \t{\n \n! 
\t\t/*---------\n! \t\t * 2-digit year:\n! \t\t * '00' ... '69' = 2000 ... 2069\n! \t\t * '70' ... '99' = 1970 ... 1999\n! \t\t *---------\n \t\t */\n \t\ttm->tm_year = tmfc->yy;\n \n***************\n*** 2860,2868 ****\n \telse if (tmfc->yyy)\n \t{\n \n! \t\t/*\n! \t\t * 3-digit year: '100' ... '999' = 1100 ... 1999 '000' ... '099' =\n! \t\t * 2000 ... 2099\n \t\t */\n \t\ttm->tm_year = tmfc->yyy;\n \n--- 2862,2872 ----\n \telse if (tmfc->yyy)\n \t{\n \n! \t\t/*---------\n! \t\t * 3-digit year:\n! \t\t *\t'100' ... '999' = 1100 ... 1999\n! \t\t *\t'000' ... '099' = 2000 ... 2099\n! \t\t *---------\n \t\t */\n \t\ttm->tm_year = tmfc->yyy;\n \nIndex: src/backend/utils/adt/selfuncs.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/adt/selfuncs.c,v\nretrieving revision 1.86\ndiff -c -r1.86 selfuncs.c\n*** src/backend/utils/adt/selfuncs.c\t2001/03/22 03:59:54\t1.86\n--- src/backend/utils/adt/selfuncs.c\t2001/03/23 04:37:38\n***************\n*** 1642,1650 ****\n \t\t\t\t\t\t\t Int32GetDatum(SEL_CONSTANT | SEL_RIGHT)));\n \tpfree(DatumGetPointer(prefixcon));\n \n! \t/*\n! \t * If we can create a string larger than the prefix, say \"x <\n! \t * greaterstr\".\n \t */\n \tgreaterstr = make_greater_string(prefix, datatype);\n \tif (greaterstr)\n--- 1642,1651 ----\n \t\t\t\t\t\t\t Int32GetDatum(SEL_CONSTANT | SEL_RIGHT)));\n \tpfree(DatumGetPointer(prefixcon));\n \n! \t/*-------\n! \t * If we can create a string larger than the prefix, say\n! \t *\t\"x < greaterstr\".\n! 
\t *-------\n \t */\n \tgreaterstr = make_greater_string(prefix, datatype);\n \tif (greaterstr)\nIndex: src/backend/utils/cache/lsyscache.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/cache/lsyscache.c,v\nretrieving revision 1.51\ndiff -c -r1.51 lsyscache.c\n*** src/backend/utils/cache/lsyscache.c\t2001/03/22 03:59:57\t1.51\n--- src/backend/utils/cache/lsyscache.c\t2001/03/23 04:37:39\n***************\n*** 272,278 ****\n \n \t/*\n \t * VACUUM ANALYZE has not been run for this table. Produce an estimate\n! \t * = 1/numtuples. This may produce unreasonably small estimates for\n \t * large tables, so limit the estimate to no less than min_estimate.\n \t */\n \tdispersion = 1.0 / (double) ntuples;\n--- 272,278 ----\n \n \t/*\n \t * VACUUM ANALYZE has not been run for this table. Produce an estimate\n! \t * of 1/numtuples. This may produce unreasonably small estimates for\n \t * large tables, so limit the estimate to no less than min_estimate.\n \t */\n \tdispersion = 1.0 / (double) ntuples;\nIndex: src/backend/utils/cache/relcache.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/cache/relcache.c,v\nretrieving revision 1.129\ndiff -c -r1.129 relcache.c\n*** src/backend/utils/cache/relcache.c\t2001/03/22 03:59:57\t1.129\n--- src/backend/utils/cache/relcache.c\t2001/03/23 04:37:40\n***************\n*** 2833,2843 ****\n \t * the descriptors, nail them into cache so we never lose them.\n \t */\n \n! \t/*\n! \t * Removed the following ProcessingMode change -- inoue At this point\n! \t * 1) Catalog Cache isn't initialized 2) Relation Cache for the\n! \t * following critical indexes aren't built oldmode =\n! 
\t * GetProcessingMode(); SetProcessingMode(BootstrapProcessing);\n \t */\n \n \tbi.infotype = INFO_RELNAME;\n--- 2833,2846 ----\n \t * the descriptors, nail them into cache so we never lose them.\n \t */\n \n! \t/*---------\n! \t * Removed the following ProcessingMode change -- inoue\n! \t * At this point\n! \t * 1) Catalog Cache isn't initialized\n! \t * 2) Relation Cache for the following critical indexes aren't built\n! \t * oldmode = GetProcessingMode();\n! \t * SetProcessingMode(BootstrapProcessing);\n! \t *---------\n \t */\n \n \tbi.infotype = INFO_RELNAME;\nIndex: src/backend/utils/sort/tuplesort.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/sort/tuplesort.c,v\nretrieving revision 1.14\ndiff -c -r1.14 tuplesort.c\n*** src/backend/utils/sort/tuplesort.c\t2001/03/22 04:00:09\t1.14\n--- src/backend/utils/sort/tuplesort.c\t2001/03/23 04:37:42\n***************\n*** 129,136 ****\n \t * kind of tuple we are sorting from the routines that don't need to\n \t * know it. They are set up by the tuplesort_begin_xxx routines.\n \t *\n! \t * Function to compare two tuples; result is per qsort() convention, ie,\n! \t * <0, 0, >0 according as a<b, a=b, a>b.\n \t */\n \tint\t\t\t(*comparetup) (Tuplesortstate *state, const void *a, const void *b);\n \n--- 129,138 ----\n \t * kind of tuple we are sorting from the routines that don't need to\n \t * know it. They are set up by the tuplesort_begin_xxx routines.\n \t *\n! \t * Function to compare two tuples; result is per qsort() convention,\n! \t * ie:\n! \t *\n! 
\t * \t<0, 0, >0 according as a<b, a=b, a>b.\n \t */\n \tint\t\t\t(*comparetup) (Tuplesortstate *state, const void *a, const void *b);\n \nIndex: src/bin/pg_dump/pg_backup_db.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_backup_db.c,v\nretrieving revision 1.16\ndiff -c -r1.16 pg_backup_db.c\n*** src/bin/pg_dump/pg_backup_db.c\t2001/03/22 04:00:12\t1.16\n--- src/bin/pg_dump/pg_backup_db.c\t2001/03/23 04:37:42\n***************\n*** 473,481 ****\n \t\t\t\tqry += loc + 1;\n \t\t\t\tisEnd = (strcmp(AH->pgCopyBuf->data, \"\\\\.\\n\") == 0);\n \n! \t\t\t\t/*\n! \t\t\t\t * fprintf(stderr, \"Sending '%s' via COPY (at end =\n! \t\t\t\t * %d)\\n\\n\", AH->pgCopyBuf->data, isEnd);\n \t\t\t\t */\n \n \t\t\t\tif (PQputline(AH->connection, AH->pgCopyBuf->data) != 0)\n--- 473,482 ----\n \t\t\t\tqry += loc + 1;\n \t\t\t\tisEnd = (strcmp(AH->pgCopyBuf->data, \"\\\\.\\n\") == 0);\n \n! \t\t\t\t/*---------\n! \t\t\t\t * fprintf(stderr, \"Sending '%s' via\n! \t\t\t\t *\t\tCOPY (at end = %d)\\n\\n\", AH->pgCopyBuf->data, isEnd);\n! \t\t\t\t *---------\n \t\t\t\t */\n \n \t\t\t\tif (PQputline(AH->connection, AH->pgCopyBuf->data) != 0)\nIndex: src/bin/pg_dump/pg_dump.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_dump.c,v\nretrieving revision 1.196\ndiff -c -r1.196 pg_dump.c\n*** src/bin/pg_dump/pg_dump.c\t2001/03/22 04:00:14\t1.196\n--- src/bin/pg_dump/pg_dump.c\t2001/03/23 04:37:45\n***************\n*** 4405,4412 ****\n \t/*\n \t * The logic we use for restoring sequences is as follows: - Add a\n \t * basic CREATE SEQUENCE statement (use last_val for start if called\n! \t * == 'f', else use min_val for start_val). -\tAdd a 'SETVAL(seq,\n! 
\t * last_val, iscalled)' at restore-time iff we load data\n \t */\n \n \tif (!dataOnly)\n--- 4405,4414 ----\n \t/*\n \t * The logic we use for restoring sequences is as follows: - Add a\n \t * basic CREATE SEQUENCE statement (use last_val for start if called\n! \t * with 'f', else use min_val for start_val).\n! \t *\n! \t *\tAdd a 'SETVAL(seq, last_val, iscalled)' at restore-time iff\n! \t * we load data\n \t */\n \n \tif (!dataOnly)\nIndex: src/bin/pg_dump/pg_dump.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_dump.h,v\nretrieving revision 1.59\ndiff -c -r1.59 pg_dump.h\n*** src/bin/pg_dump/pg_dump.h\t2001/03/22 04:00:15\t1.59\n--- src/bin/pg_dump/pg_dump.h\t2001/03/23 04:37:45\n***************\n*** 158,165 ****\n {\n \tchar\t *oid;\n \tchar\t *oprname;\n! \tchar\t *oprkind;\t\t/* \"b\" = binary, \"l\" = left unary, \"r\" =\n! \t\t\t\t\t\t\t\t * right unary */\n \tchar\t *oprcode;\t\t/* operator function name */\n \tchar\t *oprleft;\t\t/* left operand type */\n \tchar\t *oprright;\t\t/* right operand type */\n--- 158,169 ----\n {\n \tchar\t *oid;\n \tchar\t *oprname;\n! \tchar\t *oprkind;\t\t/*----------\n! \t\t\t\t\t\t\t\t * \tb = binary,\n! \t\t\t\t\t\t\t\t *\tl = left unary\n! \t\t\t\t\t\t\t\t *\tr = right unary\n! \t\t\t\t\t\t\t\t *----------\n! 
\t\t\t\t\t\t\t\t */\n \tchar\t *oprcode;\t\t/* operator function name */\n \tchar\t *oprleft;\t\t/* left operand type */\n \tchar\t *oprright;\t\t/* right operand type */\nIndex: src/include/catalog/pg_type.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/catalog/pg_type.h,v\nretrieving revision 1.102\ndiff -c -r1.102 pg_type.h\n*** src/include/catalog/pg_type.h\t2001/03/22 04:00:41\t1.102\n--- src/include/catalog/pg_type.h\t2001/03/23 04:37:46\n***************\n*** 77,84 ****\n \t * be a \"real\" array type; some ordinary fixed-length types can also\n \t * be subscripted (e.g., oidvector). Variable-length types can *not*\n \t * be turned into pseudo-arrays like that. Hence, the way to determine\n! \t * whether a type is a \"true\" array type is typelem != 0 and typlen <\n! \t * 0.\n \t */\n \tOid\t\t\ttypelem;\n \tregproc\t\ttypinput;\n--- 77,85 ----\n \t * be a \"real\" array type; some ordinary fixed-length types can also\n \t * be subscripted (e.g., oidvector). Variable-length types can *not*\n \t * be turned into pseudo-arrays like that. Hence, the way to determine\n! \t * whether a type is a \"true\" array type is if:\n! \t *\n! \t *\ttypelem != 0 and typlen < 0.\n \t */\n \tOid\t\t\ttypelem;\n \tregproc\t\ttypinput;\nIndex: src/include/nodes/parsenodes.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/include/nodes/parsenodes.h,v\nretrieving revision 1.125\ndiff -c -r1.125 parsenodes.h\n*** src/include/nodes/parsenodes.h\t2001/03/22 04:00:51\t1.125\n--- src/include/nodes/parsenodes.h\t2001/03/23 04:37:47\n***************\n*** 116,125 ****\n typedef struct AlterTableStmt\n {\n \tNodeTag\t\ttype;\n! \tchar\t\tsubtype;\t\t/* A = add column, T = alter column, D =\n! \t\t\t\t\t\t\t\t * drop column, C = add constraint, X =\n! \t\t\t\t\t\t\t\t * drop constraint, E = add toast table, U\n! 
\t\t\t\t\t\t\t\t * = change owner */\n \tchar\t *relname;\t\t/* table to work on */\n \tInhOption\tinhOpt;\t\t\t/* recursively act on children? */\n \tchar\t *name;\t\t\t/* column or constraint name to act on, or\n--- 116,131 ----\n typedef struct AlterTableStmt\n {\n \tNodeTag\t\ttype;\n! \tchar\t\tsubtype;\t\t/*------------\n! \t\t\t\t\t\t\t\t * \tA = add column\n! \t\t\t\t\t\t\t\t *\tT = alter column\n! \t\t\t\t\t\t\t\t *\tD = drop column\n! \t\t\t\t\t\t\t\t *\tC = add constraint\n! \t\t\t\t\t\t\t\t *\tX = drop constraint\n! \t\t\t\t\t\t\t\t *\tE = add toast table,\n! \t\t\t\t\t\t\t\t *\tU = change owner\n! \t\t\t\t\t\t\t\t *------------\n! \t\t\t\t\t\t\t\t */\n \tchar\t *relname;\t\t/* table to work on */\n \tInhOption\tinhOpt;\t\t\t/* recursively act on children? */\n \tchar\t *name;\t\t\t/* column or constraint name to act on, or\nIndex: src/interfaces/ecpg/lib/connect.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/ecpg/lib/connect.c,v\nretrieving revision 1.8\ndiff -c -r1.8 connect.c\n*** src/interfaces/ecpg/lib/connect.c\t2001/03/22 04:01:17\t1.8\n--- src/interfaces/ecpg/lib/connect.c\t2001/03/23 04:37:47\n***************\n*** 307,316 ****\n \t\tif (strncmp(dbname + offset, \"postgresql://\", strlen(\"postgresql://\")) == 0)\n \t\t{\n \n! \t\t\t/*\n \t\t\t * new style:\n! \t\t\t * <tcp|unix>:postgresql://server[:port|:/unixsocket/path:][/db\n! \t\t\t * name][?options]\n \t\t\t */\n \t\t\toffset += strlen(\"postgresql://\");\n \n--- 307,317 ----\n \t\tif (strncmp(dbname + offset, \"postgresql://\", strlen(\"postgresql://\")) == 0)\n \t\t{\n \n! \t\t\t/*------\n \t\t\t * new style:\n! \t\t\t * \t<tcp|unix>:postgresql://server[:port|:/unixsocket/path:]\n! \t\t\t *\t[/db name][?options]\n! 
\t\t\t *------\n \t\t\t */\n \t\t\toffset += strlen(\"postgresql://\");\n \nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.162\ndiff -c -r1.162 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t2001/03/22 06:16:20\t1.162\n--- src/interfaces/libpq/fe-connect.c\t2001/03/23 04:37:48\n***************\n*** 582,591 ****\n \t\tif (strncmp(conn->dbName + offset, \"postgresql://\", strlen(\"postgresql://\")) == 0)\n \t\t{\n \n! \t\t\t/*\n \t\t\t * new style:\n! \t\t\t * <tcp|unix>:postgresql://server[:port|:/unixsocket/path:][/db\n! \t\t\t * name][?options]\n \t\t\t */\n \t\t\toffset += strlen(\"postgresql://\");\n \n--- 582,592 ----\n \t\tif (strncmp(conn->dbName + offset, \"postgresql://\", strlen(\"postgresql://\")) == 0)\n \t\t{\n \n! \t\t\t/*-------\n \t\t\t * new style:\n! \t\t\t * \t<tcp|unix>:postgresql://server[:port|:/unixsocket/path:]\n! \t\t\t *\t[/db name][?options]\n! \t\t\t *-------\n \t\t\t */\n \t\t\toffset += strlen(\"postgresql://\");\n \nIndex: src/interfaces/odbc/info.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/odbc/info.c,v\nretrieving revision 1.42\ndiff -c -r1.42 info.c\n*** src/interfaces/odbc/info.c\t2001/03/22 04:01:33\t1.42\n--- src/interfaces/odbc/info.c\t2001/03/23 04:37:50\n***************\n*** 1738,1754 ****\n \t\tset_tuplefield_string(&row->tuple[5], field_type_name);\n \n \n! \t\t/*\n \t\t * Some Notes about Postgres Data Types:\n \t\t *\n \t\t * VARCHAR - the length is stored in the pg_attribute.atttypmod field\n \t\t * BPCHAR - the length is also stored as varchar is\n \t\t *\n! \t\t * NUMERIC - the scale is stored in atttypmod as follows: precision =\n! \t\t * ((atttypmod - VARHDRSZ) >> 16) & 0xffff scale\t = (atttypmod\n! 
\t\t * - VARHDRSZ) & 0xffff\n \t\t *\n \t\t *\n \t\t */\n \t\tqlog(\"SQLColumns: table='%s',field_name='%s',type=%d,sqltype=%d,name='%s'\\n\",\n \t\t\t table_name, field_name, field_type, pgtype_to_sqltype, field_type_name);\n--- 1738,1755 ----\n \t\tset_tuplefield_string(&row->tuple[5], field_type_name);\n \n \n! \t\t/*----------\n \t\t * Some Notes about Postgres Data Types:\n \t\t *\n \t\t * VARCHAR - the length is stored in the pg_attribute.atttypmod field\n \t\t * BPCHAR - the length is also stored as varchar is\n \t\t *\n! \t\t * NUMERIC - the scale is stored in atttypmod as follows:\n \t\t *\n+ \t\t *\tprecision =((atttypmod - VARHDRSZ) >> 16) & 0xffff\n+ \t\t *\tscale\t = (atttypmod - VARHDRSZ) & 0xffff\n \t\t *\n+ \t\t *----------\n \t\t */\n \t\tqlog(\"SQLColumns: table='%s',field_name='%s',type=%d,sqltype=%d,name='%s'\\n\",\n \t\t\t table_name, field_name, field_type, pgtype_to_sqltype, field_type_name);\nIndex: src/interfaces/odbc/options.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/odbc/options.c,v\nretrieving revision 1.25\ndiff -c -r1.25 options.c\n*** src/interfaces/odbc/options.c\t2001/03/22 04:01:34\t1.25\n--- src/interfaces/odbc/options.c\t2001/03/23 04:37:50\n***************\n*** 81,96 ****\n \t\t\t\tstmt->options.scroll_concurrency = vParam;\n \t\t\tbreak;\n \n! \t\t\t/*\n! \t\t\t * if (globals.lie) { if (conn)\n! \t\t\t * conn->stmtOptions.scroll_concurrency = vParam; if (stmt)\n! \t\t\t * stmt->options.scroll_concurrency = vParam; } else {\n \t\t\t *\n! \t\t\t * if (conn) conn->stmtOptions.scroll_concurrency =\n! \t\t\t * SQL_CONCUR_READ_ONLY; if (stmt)\n! \t\t\t * stmt->options.scroll_concurrency = SQL_CONCUR_READ_ONLY;\n! \t\t\t *\n! \t\t\t * if (vParam != SQL_CONCUR_READ_ONLY) changed = TRUE; } break;\n \t\t\t */\n \n \t\tcase SQL_CURSOR_TYPE:\n--- 81,107 ----\n \t\t\t\tstmt->options.scroll_concurrency = vParam;\n \t\t\tbreak;\n \n! \t\t\t/*----------\n! 
\t\t\t * if (globals.lie)\n! \t\t\t * {\n! \t\t\t *\t\tif (conn)\n! \t\t\t * \t\t\tconn->stmtOptions.scroll_concurrency = vParam;\n! \t\t\t *\t\tif (stmt)\n! \t\t\t * \t\t\tstmt->options.scroll_concurrency = vParam;\n! \t\t\t *\t\t} else {\n! \t\t\t * \t\t\tif (conn)\n! \t\t\t *\t\t\t\tconn->stmtOptions.scroll_concurrency =\n! \t\t\t * \t\t\t\t\tSQL_CONCUR_READ_ONLY;\n! \t\t\t *\t\t\tif (stmt)\n! \t\t\t * \t\t\t\tstmt->options.scroll_concurrency =\n! \t\t\t *\t\t\t\t\tSQL_CONCUR_READ_ONLY;\n \t\t\t *\n! \t\t\t * \t\t\tif (vParam != SQL_CONCUR_READ_ONLY)\n! \t\t\t *\t\t\t\tchanged = TRUE;\n! \t\t\t *\t\t}\n! \t\t\t *\t\tbreak;\n! \t\t\t *\t}\n! \t\t\t *----------\n \t\t\t */\n \n \t\tcase SQL_CURSOR_TYPE:",
"msg_date": "Thu, 22 Mar 2001 23:48:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] pgindent run?"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > You don't notice the value of pgindent until you have some code that\n> > hasn't been run through it. For example, ODBC was not run through until\n> > this release, and I had a terrible time trying to understand the code\n> > because it didn't _look_ like the rest of the code. Now that pgindent\n> > is run, it looks more normal, and I am sure that will encourage more\n> > people to get in and make changes.\n> > \n> \n> I see now the following comment in interfaces/odbc/statement.c.\n> Though it's mine(probably), it's hard for me to read.\n> Please tell me how to prevent pgindent from changing\n> comments.\n> \n> /*\n> * Basically we don't have to begin a transaction in autocommit mode\n> * because Postgres backend runs in autocomit mode. We issue \"BEGIN\"\n> * in the following cases. 1) we use declare/fetch and the statement\n> * is SELECT (because declare/fetch must be called in a transaction).\n> * 2) we are in autocommit off state and the statement isn't of type\n> * OTHER.\n> */\n\nSorry that happened. It is mentioned in the developer's FAQ that the\ndashes prevent wrapping. I know it is hard to remember even if you know\nit, and I would be glad to fix any comments that look bad.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 23:51:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
}
] |
[
{
"msg_contents": "\nPostmaster crashed on me, and on restart, pg_inherits cannot be found.\nI can see it in pg_class (and it shows up w/ \\dS), but any attempt to\nmodify anything fails with \"pg_inherits: No such file or directory\".\n\nI've reindexed the database (w/postgres -P -O). Vacuuming fails (w/error\nabove).\n\nWhat could this be? Is there any hope?\n\nThanks!\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 20 Mar 2001 18:38:35 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "pg_inherits: not found, but visible"
},
{
"msg_contents": "Joel Burton wrote:\n> \n> Postmaster crashed on me, and on restart, pg_inherits cannot be found.\n> I can see it in pg_class (and it shows up w/ \\dS), but any attempt to\n> modify anything fails with \"pg_inherits: No such file or directory\".\n> \n> I've reindexed the database (w/postgres -P -O). Vacuuming fails (w/error\n> above).\n> \n> What could this be? Is there any hope?\n> \n\nTry the following queries.\n1) select oid from pg_database where datname = your_db_name;\n2) select oid, relfilenode from pg_class where relname = 'pg_inherits';\n\nFor example I get the followings in my environment.\n1) oid = 18720\n2) relfilenode(==oid) = 16567;\n\nand I could find a $PGDATA/base/18720/16567 file.\nCould you find such a file ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 21 Mar 2001 09:17:34 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_inherits: not found, but visible"
},
{
"msg_contents": "On Wed, 21 Mar 2001, Hiroshi Inoue wrote:\n\n> Joel Burton wrote:\n> > \n> > Postmaster crashed on me, and on restart, pg_inherits cannot be found.\n> > I can see it in pg_class (and it shows up w/ \\dS), but any attempt to\n> > modify anything fails with \"pg_inherits: No such file or directory\".\n> > \n> > I've reindexed the database (w/postgres -P -O). Vacuuming fails (w/error\n> > above).\n> > \n> > What could this be? Is there any hope?\n> > \n> \n> Try the following queries.\n> 1) select oid from pg_database where datname = your_db_name;\n> 2) select oid, relfilenode from pg_class where relname = 'pg_inherits';\n> \n> For example I get the followings in my environment.\n> 1) oid = 18720\n> 2) relfilenode(==oid) = 16567;\n> \n> and I could find a $PGDATA/base/18720/16567 file.\n> Could you find such a file ?\n\nNo. I do have the db directory, and all of the other file for the existing\nclasses, but not this.\n\nAny ideas why this would disappear? Or any ideas about how to get my\nexisting data out? (I have a dump from about 36 hours ago; it would be\nnice to extract some more recent data!)\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 20 Mar 2001 19:20:56 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_inherits: not found, but visible"
},
{
"msg_contents": "Joel Burton <jburton@scw.org> writes:\n>> and I could find a $PGDATA/base/18720/16567 file.\n>> Could you find such a file ?\n\n> No. I do have the db directory, and all of the other file for the existing\n> classes, but not this.\n\nHm. You could make an empty file by that name (just 'touch' it) and\nthen you'd probably be able to dump (possibly after reindexing\npg_inherit's indexes again). pg_inherits isn't a real critical table,\nfortunately.\n\n> Any ideas why this would disappear?\n\nInteresting question, all right. Did you have a system crash?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 19:32:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_inherits: not found, but visible "
},
{
"msg_contents": "Joel Burton wrote:\n> \n> On Wed, 21 Mar 2001, Hiroshi Inoue wrote:\n> \n> > Joel Burton wrote:\n> > >\n> > > Postmaster crashed on me, and on restart, pg_inherits cannot be found.\n> > > I can see it in pg_class (and it shows up w/ \\dS), but any attempt to\n> > > modify anything fails with \"pg_inherits: No such file or directory\".\n> > >\n> > > I've reindexed the database (w/postgres -P -O). Vacuuming fails (w/error\n> > > above).\n> > >\n> > > What could this be? Is there any hope?\n> > >\n> >\n> > Try the following queries.\n> > 1) select oid from pg_database where datname = your_db_name;\n> > 2) select oid, relfilenode from pg_class where relname = 'pg_inherits';\n> >\n> > For example I get the followings in my environment.\n> > 1) oid = 18720\n> > 2) relfilenode(==oid) = 16567;\n> >\n> > and I could find a $PGDATA/base/18720/16567 file.\n> > Could you find such a file ?\n> \n> No. I do have the db directory, and all of the other file for the existing\n> classes, but not this.\n> \n\nJust a confirmation. What is a result of the second query\nin your current environment ? \n\n> Any ideas why this would disappear?\n\nI have no idea but this is really a disastrous phenomenon.\n\n> Or any ideas about how to get my\n> existing data out? (I have a dump from about 36 hours ago; it would be\n> nice to extract some more recent data!)\n> \n\nAre you using inheritance ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 21 Mar 2001 09:38:22 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_inherits: not found, but visible"
},
{
"msg_contents": "On Tue, 20 Mar 2001, Tom Lane wrote:\n\n> Joel Burton <jburton@scw.org> writes:\n> >> and I could find a $PGDATA/base/18720/16567 file.\n> >> Could you find such a file ?\n> \n> > No. I do have the db directory, and all of the other file for the existing\n> > classes, but not this.\n> \n> Hm. You could make an empty file by that name (just 'touch' it) and\n> then you'd probably be able to dump (possibly after reindexing\n> pg_inherit's indexes again). pg_inherits isn't a real critical table,\n> fortunately.\n> \n> > Any ideas why this would disappear?\n>\n> Interesting question, all right. Did you have a system crash?\n\nOk, so I touched the file, and did a postgres -P -O reindex of the table\nwith force.\n\nGoing into psql then, I could select * from the table, and, not\nsurprisingly, nothing was in it, but I can (& did) dump my data.\n\nFor those watching, that's about 15 minutes from the sinking feeling of\n'I just lost two days of work' to 'resolution and data restored'. Our\ncommunity has *damn fine* technical support! :-) Thanks, Tom, and Hiroshi,\nfor being so helpful so quickly.\n\n\nAs for your questions, no, I didn't have a system crash. I was running a\nZope page that queries several tables (show all classes, for each class,\nshow all instances, for each instance, show all dates, etc.); the page\nnormally takes about 2 minutes to pull everything together (I think that's\nZope's speed issue, not PG!) Anyway, while that was chugging away, I tried\nto drop a view and recreate it, and that request just hung there for a\nfew minutes. The Zope page never came up, and psql notified me that I lost\nmy connection.\n\nI wasn't, and haven't ever, used inheritance in this database.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 20 Mar 2001 19:42:54 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_inherits: not found, but visible "
},
{
"msg_contents": "On Wed, 21 Mar 2001, Hiroshi Inoue wrote:\n\n> Joel Burton wrote:\n> > \n> > On Wed, 21 Mar 2001, Hiroshi Inoue wrote:\n> > \n> > > Joel Burton wrote:\n> > > >\n> > > > Postmaster crashed on me, and on restart, pg_inherits cannot be found.\n> > > > I can see it in pg_class (and it shows up w/ \\dS), but any attempt to\n> > > > modify anything fails with \"pg_inherits: No such file or directory\".\n> > > >\n> > > > I've reindexed the database (w/postgres -P -O). Vacuuming fails (w/error\n> > > > above).\n> > > >\n> > > > What could this be? Is there any hope?\n> > > >\n> > >\n> > > Try the following queries.\n> > > 1) select oid from pg_database where datname = your_db_name;\n> > > 2) select oid, relfilenode from pg_class where relname = 'pg_inherits';\n> > >\n> > > For example I get the followings in my environment.\n> > > 1) oid = 18720\n> > > 2) relfilenode(==oid) = 16567;\n> > >\n> > > and I could find a $PGDATA/base/18720/16567 file.\n> > > Could you find such a file ?\n> > \n> > No. I do have the db directory, and all of the other file for the existing\n> > classes, but not this.\n> > \n> \n> Just a confirmation. What is a result of the second query\n> in your current environment ? \n\nI got exactly what I would expect in a working PG db: the oid and\nrelfilenode matched, and were OIDs in the range of the other system tables\nin the directory.\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 20 Mar 2001 19:43:59 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_inherits: not found, but visible"
},
{
"msg_contents": "\nYikes. It gets weirder.\n\nFixed the pg_inherits problem, went back to my Zoping, trying to optimize\nsome views, and during another run, get an error that trelclasspq, one of\nmy tables, couldn't open.\n\nTrying this out in psql, I get the same error message--the file doesn't\nexist. And, getting the oid for the file, looked in the directory--and\nthis file is gone too!\n\nNow, I just made a good dump of the database, so I can always go back to\nthat. But this seems to be a *serious* problem in the system.\n\nI have\n\nZope 2.3.1b2 (most recent version of Zope)\nrunning on a Linux-Mandrake 7.2 box (server #1)\n\nIt has a database adapter called ZPoPy, which is the Zope version of PoPy,\na Python database adapter for PostgreSQL.\n\nPoPy is getting data from my PostgreSQL database, which is 7.1beta4, and\nserved on a different Mandrake 7.2 box.\n\nHas anyone seen anything like this?\n\nI doubt the error is Zope *per se*, since Zope can only talk to the\ndatabase adapter, and I doubt the database adapter has the intentional\nfeature of delete-the-file-for-this-table in its protocol. It *could* be a\nproblem w/ZPoPy or PoPy; I'll send a message to their list as well.\n\nThanks!\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 20 Mar 2001 20:03:16 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_inherits: not found, but visible [IT GETS WORSE] "
},
{
"msg_contents": "On Tue, Mar 20, 2001 at 08:03:16PM -0500, Joel Burton wrote:\n> \n> Yikes. It gets weirder.\n> \n> \n> I have\n> \n> Zope 2.3.1b2 (most recent version of Zope)\n> running on a Linux-Mandrake 7.2 box (server #1)\n> \n\nWhat kind of filesystem is the pgsql data tree living on? If you do a fsck,\ndoes anything turn up in lost+found?\n\nRoss\n\n",
"msg_date": "Tue, 20 Mar 2001 19:12:56 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg_inherits: not found, but visible [IT GETS WORSE]"
},
{
"msg_contents": "On Tue, 20 Mar 2001, Ross J. Reedstrom wrote:\n\n> What kind of filesystem is the pgsql data tree living on? If you do a fsck,\n> does anything turn up in lost+found?\n> \n> Ross\n\next2, straight out of the box. It's in /var, which is a separate\npartition.\n\nfscking shows no errors, tells no lies, and nothing appears in lost+found.\n\nThanks,\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 20 Mar 2001 20:28:26 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_inherits: not found, but visible [IT GETS WORSE]"
},
{
"msg_contents": "Joel Burton <jburton@scw.org> writes:\n> Yikes. It gets weirder.\n> Fixed the pg_inherits problem, went back to my Zoping, trying to optimize\n> some views, and during another run, get an error that trelclasspq, one of\n> my tables, couldn't open.\n> Trying this out in psql, I get the same error message--the file doesn't\n> exist. And, getting the oid for the file, looked in the directory--and\n> this file is gone too!\n\nThis does not seem good. Just to clarify: in both cases, the pg_class\nrow for the table is still there, but the underlying Unix file is gone?\n\nBarring major malfeasance from your kernel, it seems like Postgres must\nbe issuing a delete on the wrong file when you are doing something else.\nThis is particularly bizarre if you are just doing create/delete view,\nbecause in 7.1 a view hasn't got any associated file, and so no unlink()\nkernel call should be issued at all.\n\nI would recommend that you try to narrow down the events leading up to\nthis --- in particular, keeping a postmaster log of queries issued (-d2)\nseems like a good idea.\n\n> I doubt the error is Zope *per se*,\n\nZope cannot be the culprit --- there is no API for deleting a table file\nwithout deleting its pg_class entry ;-). But it seems possible that\nsome peculiar pattern of queries that they issue could be triggering a\npreviously-unknown Postgres bug.\n\nI will be out of town all day tomorrow, but please see what data you can\ngather. If you can create a reproducible failure case it'd be great...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 00:20:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_inherits: not found, but visible [IT GETS WORSE] "
}
] |
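Hiroshi's two diagnostic queries in the thread above reduce to a simple mapping from a table to its on-disk file, `$PGDATA/base/<database-oid>/<relfilenode>`. A minimal sketch of that mapping (the oid values are taken from his example; the `/var/lib/pgsql/data` location is an illustrative placeholder; the layout is the 7.1-era on-disk convention):

```python
def table_file_path(pgdata: str, db_oid: int, relfilenode: int) -> str:
    """Map (database oid, relfilenode) to a table's file, using the
    7.1-era $PGDATA/base/<database-oid>/<relfilenode> layout.

    db_oid comes from:      select oid from pg_database where datname = ...
    relfilenode comes from: select relfilenode from pg_class where relname = ...
    """
    return f"{pgdata}/base/{db_oid}/{relfilenode}"

# Hiroshi's example numbers from the thread: db oid 18720, relfilenode 16567.
print(table_file_path("/var/lib/pgsql/data", 18720, 16567))
# /var/lib/pgsql/data/base/18720/16567
```

If that file is missing while the pg_class row survives, you get exactly the "No such file or directory" symptom above; Tom's `touch` workaround recreates the file empty so the table can at least be read and the database dumped.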
[
{
"msg_contents": "\nI'm sorry, I should have included:\n\nPostgreSQL 7.1beta4\nLinux-Mandrake 7.1 (very similar to RedHat 7)\nIntel hardware\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Tue, 20 Mar 2001 18:39:49 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": true,
"msg_subject": "pg_inherits: addt'l info"
}
] |
[
{
"msg_contents": "On Tue, Mar 20, 2001 at 08:11:21PM +0100, Peter Eisentraut wrote:\n> \n> We need a supported platform list. Let's hear it.\n\n\tLinux 2.4.2 (Debian, Woody), glibc 2.2.2, gcc 2.95.3 (from CVS).\n\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \n",
"msg_date": "Tue, 20 Mar 2001 17:23:24 -0700",
"msg_from": "Roberto Mello <rmello@cc.usu.edu>",
"msg_from_op": true,
"msg_subject": "Re: Final Call: RC1 about to go out the door ..."
}
] |
[
{
"msg_contents": "> I think the problem is that BufferSync unconditionally does PinBuffer\n> on each buffer, and holds the pin during intervals where it's released\n> BufMgrLock, even if there's not really anything for it to do on that\n> buffer. If someone else is running FlushRelationBuffers then it's\n> possible for that routine to see a nonzero pin count when it looks.\n> \n> Vadim, what do you think about how to change this? I think this is\n> BufferSync's fault not FlushRelationBuffers's ...\n\nI'm looking there right now...\n\nVadim\n",
"msg_date": "Tue, 20 Mar 2001 16:44:17 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Beta 6 Regression results on Redat 7.0. "
},
{
"msg_contents": ">> I think the problem is that BufferSync unconditionally does PinBuffer\n>> on each buffer, and holds the pin during intervals where it's released\n>> BufMgrLock, even if there's not really anything for it to do on that\n>> buffer. If someone else is running FlushRelationBuffers then it's\n>> possible for that routine to see a nonzero pin count when it looks.\n\nFurther note: this bug does not arise in 7.0.* because in that code,\nBufferSync will only pin buffers that have been dirtied in the current\ntransaction. This cannot affect a concurrent FlushRelationBuffers,\nwhich should be holding exclusive lock on the table it's flushing.\n\nOr can it? The above is safe enough for user tables, but on system\ntables we have a bad habit of releasing locks early. It seems possible\nthat a VACUUM on a system table might see pins due to BufferSyncs\nrunning in concurrent transactions that have altered that system table.\n\nPerhaps this issue does explain some of the reports of\nFlushRelationBuffers failure that we've seen from the field.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Mar 2001 20:07:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta 6 Regression results on Redat 7.0. "
}
] |
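The failure mode Tom describes above — BufferSync pinning every buffer unconditionally, so a concurrent FlushRelationBuffers sees a nonzero pin count even on buffers BufferSync had no work for — can be sketched as a toy model. This is pure illustration of the interaction, not the actual bufmgr.c code:

```python
class Buffer:
    def __init__(self, rel, dirty=False):
        self.rel = rel
        self.dirty = dirty
        self.refcount = 0   # pin count

def buffer_sync_pins_everything(pool):
    """Toy BufferSync: pins every buffer before checking whether it is
    dirty -- the unconditional PinBuffer behavior Tom identifies."""
    for buf in pool:
        buf.refcount += 1
        # ... BufMgrLock released here; a concurrent FlushRelationBuffers
        # can now observe refcount != 0 even for clean buffers ...

def flush_relation_buffers(pool, rel):
    """Toy FlushRelationBuffers: refuses to proceed while any buffer of
    the relation is pinned by someone else."""
    return all(buf.refcount == 0 for buf in pool if buf.rel == rel)

pool = [Buffer("pg_class"), Buffer("mytable"), Buffer("mytable", dirty=True)]
assert flush_relation_buffers(pool, "mytable")      # fine before the sync pass
buffer_sync_pins_everything(pool)
assert not flush_relation_buffers(pool, "mytable")  # spurious "nonzero pin"
```

The second assertion is the reported symptom: the flusher fails not because anyone is using the buffers, but because the checkpoint-style pass holds pins while it has dropped the lock.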
[
{
"msg_contents": "> Further note: this bug does not arise in 7.0.* because in that code,\n> BufferSync will only pin buffers that have been dirtied in the current\n> transaction. This cannot affect a concurrent FlushRelationBuffers,\n> which should be holding exclusive lock on the table it's flushing.\n> \n> Or can it? The above is safe enough for user tables, but on system\n> tables we have a bad habit of releasing locks early. It seems possible\n> that a VACUUM on a system table might see pins due to BufferSyncs\n> running in concurrent transactions that have altered that system table.\n> \n> Perhaps this issue does explain some of the reports of\n> FlushRelationBuffers failure that we've seen from the field.\n\nAnother possible source of this problem (in 7.0.X) is BufferReplace..?\n\nVadim\n",
"msg_date": "Tue, 20 Mar 2001 17:23:24 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Beta 6 Regression results on Redat 7.0. "
}
] |
[
{
"msg_contents": "[Cced: to PostgreSQL hackers list]\n\nAlexander,\n\nI believe this problem was fixed in the latest JDBC driver, that is\nsupposed to be shipped with 7.1. It asks your database which encoding\nis used for particular database while connecting to the database. So\nyou should be able to see \"select getdatabaseencoding\" if you turn on\na debugging option for postmaster.\n\nI also think the latest driver is compatible with 7.0.3, but I'm not\nsure. Peter T?\n--\nTatsuo Ishii\n\nFrom: \"Alexander Vaysman\" <avaysman@numerix.com>\nSubject: PostgreSQL JDBC Unicode Support\nDate: Thu, 15 Mar 2001 15:34:43 -0500\nMessage-ID: <PFEFILJCOGAEAEJICCHBMENCCAAA.avaysman@numerix.com>\n\n> Tatsuo,\n> \n> my name is Alex Vaysman, and I saw your numerous posts in the newsgroups\n> regarding Postgres and mutli-language support. I have a problem with our\n> Postgres database, and intensive searches on the Internet/newsgroups didn't\n> provide me with an answer. I was wondering if you would know the answer or\n> point me towards it.\n> \n> In the nutshell, we are trying to get Postgres DB running that supports\n> Unicode and interacts with clients via JDBC. We have PostgreSQL version\n> 7.0.3 installed. I have downloaded the latest JDBC driver from\n> http://jdbc.postgresql.org.\n> \n> I have created a Unicode database (confirmed through \\l command in psql,\n> reported encoding is 'UNICODE'). In that DB I've created a table with two\n> fields integer and varchar(64). Then I store a record into this table. In my\n> code I specify the string through Unicode escapes. After that I retrieve\n> this value and write it out. I don't get my value back but rather ?????. I'm\n> attaching the code I use for reference.\n> \n> My Internet searches for the solution indicated that I need to apply some\n> patches to JDBC driver. However, I don't know how to do that. Do you know\n> where I may download the JDBC driver version with the appropriate patches\n> applied? If you're using one, would you be kind enough and e-mail it to me.\n> Also, having some experience with SQL Server, I know that if I wanted to\n> store Unicode values into some column I was creating that column as nvarchar\n> rather the varchar. Is anything like this required for Postgres?\n> \n> Your help is greatly appreciated. Thanks in advance,\n> \n> Alex Vaysman.\n",
"msg_date": "Wed, 21 Mar 2001 12:19:49 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL JDBC Unicode Support"
},
{
"msg_contents": "Quoting Tatsuo Ishii <t-ishii@sra.co.jp>:\n\n> [Cced: to PostgreSQL hackers list]\n> \n> Alexander,\n> \n> I believe this problem was fixed in the latest JDBC driver, that is\n> supposed to be shipped with 7.1. It asks your database which encoding\n> is used for particular database while connecting to the database. So\n> you should be able to see \"select getdatabaseencoding\" if you turn on\n> a debugging option for postmaster.\n> \n> I also think the latest driver is compatible with 7.0.3, but I'm not\n> sure. Peter T?\n\nIt should be at the basic level, but methods in DatabaseMetaData will fail as \nthey are specific to 7.1's system table changes etc.\n\nPeter\n\n> --\n> Tatsuo Ishii\n> \n> From: \"Alexander Vaysman\" <avaysman@numerix.com>\n> Subject: PostgreSQL JDBC Unicode Support\n> Date: Thu, 15 Mar 2001 15:34:43 -0500\n> Message-ID: <PFEFILJCOGAEAEJICCHBMENCCAAA.avaysman@numerix.com>\n> \n> > Tatsuo,\n> > \n> > my name is Alex Vaysman, and I saw your numerous posts in the\n> newsgroups\n> > regarding Postgres and mutli-language support. I have a problem with\n> our\n> > Postgres database, and intensive searches on the Internet/newsgroups\n> didn't\n> > provide me with an answer. I was wondering if you would know the\n> answer or\n> > point me towards it.\n> > \n> > In the nutshell, we are trying to get Postgres DB running that\n> supports\n> > Unicode and interacts with clients via JDBC. We have PostgreSQL\n> version\n> > 7.0.3 installed. I have downloaded the latest JDBC driver from\n> > http://jdbc.postgresql.org.\n> > \n> > I have created a Unicode database (confirmed through \\l command in\n> psql,\n> > reported encoding is 'UNICODE'). In that DB I've created a table with\n> two\n> > fields integer and varchar(64). Then I store a record into this table.\n> In my\n> > code I specify the string through Unicode escapes. After that I\n> retrieve\n> > this value and write it out. I don't get my value back but rather\n> ?????. I'm\n> > attaching the code I use for reference.\n> > \n> > My Internet searches for the solution indicated that I need to apply\n> some\n> > patches to JDBC driver. However, I don't know how to do that. Do you\n> know\n> > where I may download the JDBC driver version with the appropriate\n> patches\n> > applied? If you're using one, would you be kind enough and e-mail it\n> to me.\n> > Also, having some experience with SQL Server, I know that if I wanted\n> to\n> > store Unicode values into some column I was creating that column as\n> nvarchar\n> > rather the varchar. Is anything like this required for Postgres?\n> > \n> > Your help is greatly appreciated. Thanks in advance,\n> > \n> > Alex Vaysman.\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n",
"msg_date": "Wed, 21 Mar 2001 09:33:29 -0500 (EST)",
"msg_from": "Peter T Mount <peter@retep.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Re: PostgreSQL JDBC Unicode Support"
}
] |
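The `?????` symptom Alex describes is the classic signature of a lossy charset conversion between client and backend: characters with no representation in the target encoding get substituted with `?`. A small illustration of the effect (pure Python standing in for whatever conversion an encoding-unaware driver performs; the old driver's actual code path is not reproduced here):

```python
original = "\u65e5\u672c\u8a9e"   # "Japanese" written with Unicode escapes

# A charset that can represent the text round-trips it intact:
utf8_roundtrip = original.encode("utf-8").decode("utf-8")

# Forcing it through an encoding that cannot represent it substitutes '?',
# which is exactly what a driver unaware of the database encoding produces:
lossy = original.encode("ascii", errors="replace").decode("ascii")

print(utf8_roundtrip)  # 日本語
print(lossy)           # ???
```

This is why Tatsuo's fix matters: the 7.1-era driver asks the backend for its encoding at connect time, so it converts with the right charset instead of a lossy default.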
[
{
"msg_contents": "\n\n-----\n\nHi,\n\nI am trying to access PostGreSQL database running at the default port\n5432\nusing JDBC. But the application is giving error \"Cannot find suitable\ndriver\". I have included JDBC driver JAR file in my CLASSPATH and\nClass.forName(\"org.postgresql.Driver\") is loading driver successfully.\nCan anybody tell me how to go about to solve the problem?\n\nWith regards,\nSourabh\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Wed, 21 Mar 2001 11:26:55 +0530",
"msg_from": "\"sourabh dixit\" <sourabh.dixit@wipro.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL-JDBC driver"
},
{
"msg_contents": "Quoting sourabh dixit <sourabh.dixit@wipro.com>:\n\n> \n> \n> -----\n> \n> Hi,\n> \n> I am trying to access PostGreSQL database running at the default port\n> 5432\n> using JDBC. But the application is giving error \"Cannot find suitable\n> driver\". I have included JDBC driver JAR file in my CLASSPATH and\n> Class.forName(\"org.postgresql.Driver\") is loading driver successfully.\n> Can anybody tell me how to go about to solve the problem?\n\nSounds like your URL is wrong. Make sure it begins with jdbc:postgresql:\n\nPeter\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n",
"msg_date": "Wed, 21 Mar 2001 06:26:43 -0500 (EST)",
"msg_from": "Peter T Mount <peter@retep.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL-JDBC driver"
}
] |
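As Peter's diagnosis suggests, "Cannot find suitable driver" usually means DriverManager never matched the URL to any registered driver: matching is done on the URL prefix, so the URL must begin with `jdbc:postgresql:` even when `Class.forName("org.postgresql.Driver")` loaded the class successfully. A sketch of the expected URL shape, written here as plain string construction (host, port and database names are placeholders):

```python
def pg_jdbc_url(host: str, port: int, database: str) -> str:
    """Build a connection URL of the shape the PostgreSQL JDBC driver
    registers for; any other prefix makes DriverManager report that no
    suitable driver exists, even though the driver class loaded fine."""
    return f"jdbc:postgresql://{host}:{port}/{database}"

url = pg_jdbc_url("localhost", 5432, "test")
print(url)  # jdbc:postgresql://localhost:5432/test
assert url.startswith("jdbc:postgresql:")
```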
[
{
"msg_contents": "Hi,\n\n\tint8 is not handled correctly on Alpha. Inserting 2^63-1, 2^63-2 and\n2^61\ninto \n\ncreate table lint (i int8);\n\ngives\n\ntest=# select * from lint;\n i \n----\n -1\n -2\n 0\n(3 rows)\n\nOn linux it gives the correct values:\n\ntest=# select * from lint;\n i \n---------------------\n 9223372036854775807\n 9223372036854775806\n 2305843009213693952\n(3 rows)\n\nThis is postgres 7.1b4, compiled with native cc on Tru64 4.0G. I seem to\nrecall running the regression tests, so perhaps this is not checked?\n(just looked at int8.sql, and it is not checked.)\n\nI'm swamped, so cannot look at it right now. If nobody else can look at\nit, I will get back to it in about a fortnight.\n\nAdriaan\n",
"msg_date": "Wed, 21 Mar 2001 11:19:39 +0200",
"msg_from": "Adriaan Joubert <a.joubert@albourne.com>",
"msg_from_op": true,
"msg_subject": "int8 bug on Alpha"
},
{
"msg_contents": "> int8 is not handled correctly on Alpha. Inserting 2^63-1, 2^63-2 and\n> 2^61...\n\nHow are you doing the inserts? If you aren't coercing the \"2\" to be an\nint8, then (afaik) the math will be done in int4, then upconverted. So,\ncan you confirm that your inserts look like:\n\ninsert into lint values ('9223372036854775807');\n\nor\n\ninsert into lint select (int8 '2') ^ 61;\n\n - Thomas\n",
"msg_date": "Wed, 21 Mar 2001 11:46:12 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: int8 bug on Alpha"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> \n> > int8 is not handled correctly on Alpha. Inserting 2^63-1, 2^63-2 and\n> > 2^61...\n> \n> How are you doing the inserts? If you aren't coercing the \"2\" to be an\n> int8, then (afaik) the math will be done in int4, then upconverted. So,\n> can you confirm that your inserts look like:\n> \n> insert into lint values ('9223372036854775807');\n\nOK, that was it. I inserted without quotes. If I insert the quotes it\nworks. So why does it work correctly on linux without quotes?\n\nand \n\n insert into lint values ('9223372036854775807'::int8);\n\nworks, but\n\n insert into lint values (9223372036854775807::int8);\n\ndoesn't. I guess in the second case it converts it to an int4 and then\nrecasts to an int8?\n\nCheers,\n\nAdriaan\n",
"msg_date": "Wed, 21 Mar 2001 15:10:55 +0200",
"msg_from": "Adriaan Joubert <a.joubert@albourne.com>",
"msg_from_op": true,
"msg_subject": "Re: int8 bug on Alpha"
},
{
"msg_contents": "> > How are you doing the inserts? If you aren't coercing the \"2\" to be an\n> > int8, then (afaik) the math will be done in int4, then upconverted. So,\n> > can you confirm that your inserts look like:\n> > insert into lint values ('9223372036854775807');\n> OK, that was it. I inserted without quotes. If I insert the quotes it\n> works. So why does it work correctly on linux without quotes?\n\nFor integers (optional sign and all digits), the code in\nsrc/backend/parser/scan.l uses strtol() to read the string, then checks\nfor failure. If it fails, the number is interpreted as a double float on\nthe assumption that if it could hold more digits it would succeed!\n\nAnyway, either strtol() thinks it *should* be able to read a 64 bit\ninteger, or your machine is silently overflowing. I used to have a bunch\nof these boxes, and I recall spending quite a bit of time discovering\nthat Alphas have some explicit flags which can be set at compile time\nwhich affect run-time detection of floating point and (perhaps) integer\noverflow behavior.\n\nCan you check these possibilities? I'd look at strtol() first, then the\noverflow/underflow flags second...\n\n - Thomas\n",
"msg_date": "Wed, 21 Mar 2001 15:47:54 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: int8 bug on Alpha"
},
{
"msg_contents": "> For integers (optional sign and all digits), the code in\n> src/backend/parser/scan.l uses strtol() to read the string, then checks\n> for failure. If it fails, the number is interpreted as a double float on\n> the assumption that if it could hold more digits it would succeed!\n> \n> Anyway, either strtol() thinks it *should* be able to read a 64 bit\n> integer, or your machine is silently overflowing. I used to have a bunch\n> of these boxes, and I recall spending quite a bit of time discovering\n> that Alphas have some explicit flags which can be set at compile time\n> which affect run-time detection of floating point and (perhaps) integer\n> overflow behavior.\n> \n> Can you check these possibilities? I'd look at strtol() first, then the\n> overflow/underflow flags second...\n\nInteresting that the lack of strtol() failure on Alpha is causing the\nproblem.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 10:52:12 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: int8 bug on Alpha"
},
{
"msg_contents": "> Anyway, either strtol() thinks it *should* be able to read a 64 bit\n> integer, or your machine is silently overflowing. I used to have a bunch\n> of these boxes, and I recall spending quite a bit of time discovering\n> that Alphas have some explicit flags which can be set at compile time\n> which affect run-time detection of floating point and (perhaps) integer\n> overflow behavior.\n> \n> Can you check these possibilities? I'd look at strtol() first, then the\n> overflow/underflow flags second...\n\nHmm, I wrote a trivial programme parsing long ints and get the following\n\n#include <errno.h>\n\nmain (int argc, char *argv[]) {\n long int a = strtol(argv[1], (char **) 0, 10);\n printf(\"input='%s' ld=%ld (errno %d)\\n\",argv[1],a,errno);\n}\n\nemily:~/Tmp/C++$ a.out 9223372036854775807\ninput='9223372036854775807' ld=9223372036854775807 (errno 0)\nemily:~/Tmp/C++$ a.out 9223372036854775808\ninput='9223372036854775808' ld=9223372036854775807 (errno 34)\nemily:~/Tmp/C++$ a.out 9223372036854775806\ninput='9223372036854775806' ld=9223372036854775806 (errno 0)\nemily:~/Tmp/C++$ a.out -9223372036854775808\ninput='-9223372036854775808' ld=-9223372036854775808 (errno 0)\n\n\nso that seems to work correctly. And I compiled with the same compiler\nflags with which postgres was compiled. Apparently long is defined as\n'long long int' on alpha, and I tried it with that and it works as well.\n\nI'll have to debug this properly, but first I need to get Friday out of\nthe way ;-)\n\nAdriaan\n",
"msg_date": "Wed, 21 Mar 2001 18:29:30 +0200",
"msg_from": "Adriaan Joubert <a.joubert@albourne.com>",
"msg_from_op": true,
"msg_subject": "Re: int8 bug on Alpha"
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> For integers (optional sign and all digits), the code in\n> src/backend/parser/scan.l uses strtol() to read the string, then checks\n> for failure. If it fails, the number is interpreted as a double float on\n> the assumption that if it could hold more digits it would succeed!\n\nOhhhh....\n\nThis is an Alpha, remember? long *is* 64 bits on that machine,\ntherefore strtol is correct to accept the number. Unfortunately,\nlater in the parser we assign the datatype int4, not int8, to the\n\"integer\" constant, and so it gets truncated. make_const is making\nan unwarranted assumption that T_Integer is the same as int4 --- or,\nif you prefer, make_const is OK and scan.l is erroneous to use\nnode type T_Integer for ints that exceed 32 bits.\n\nThis is a portability bug, no question. But I'd expect it to fail\nlike that on all Alpha-based platforms. Adriaan, when you say it\nworks on Linux, are you talking about Linux/Alpha or some other\nhardware?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 22:39:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: int8 bug on Alpha "
},
{
"msg_contents": "> This is a portability bug, no question. But I'd expect it to fail\n> like that on all Alpha-based platforms. Adriaan, when you say it\n> works on Linux, are you talking about Linux/Alpha or some other\n> hardware?\n\nNo, PC Linux. I run a database on my laptop as well.\n\nAdriaan\n",
"msg_date": "Thu, 22 Mar 2001 10:59:30 +0200",
"msg_from": "Adriaan Joubert <a.joubert@albourne.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: int8 bug on Alpha"
},
{
"msg_contents": "Adriaan Joubert <a.joubert@albourne.com> writes:\n> insert into lint values ('9223372036854775807'::int8);\n> works, but\n> insert into lint values (9223372036854775807::int8);\n> doesn't.\n\nFixed, and checked on Debian Alpha.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 13:16:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: int8 bug on Alpha "
}
] |
[
{
"msg_contents": "Just committed changes in bufmgr.c\nRegress tests passed but need more specific tests,\nas usually. Descr as in CVS:\n\n> Check bufHdr->cntxDirty and call StartBufferIO in BufferSync()\n> *before* acquiring shlock on buffer context. This way we should be\n> protected against conflicts with FlushRelationBuffers. \n> (Seems we never do excl lock and then StartBufferIO for the same\n> buffer, so there should be no deadlock here, - but we'd better\n> check this very soon).\n\nVadim\n\n\n",
"msg_date": "Wed, 21 Mar 2001 02:23:03 -0800",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": true,
"msg_subject": "BufferSync() & FlushRelationBuffers() conflict"
},
{
"msg_contents": "\nTom, since you appear to be able to recreate the bug, can you comment on\nthis, as to whether we are okay now?\n\nOn Wed, 21 Mar 2001, Vadim Mikheev wrote:\n\n> Just committed changes in bufmgr.c\n> Regress tests passed but need more specific tests,\n> as usually. Descr as in CVS:\n>\n> > Check bufHdr->cntxDirty and call StartBufferIO in BufferSync()\n> > *before* acquiring shlock on buffer context. This way we should be\n> > protected against conflicts with FlushRelationBuffers.\n> > (Seems we never do excl lock and then StartBufferIO for the same\n> > buffer, so there should be no deadlock here, - but we'd better\n> > check this very soon).\n>\n> Vadim\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Wed, 21 Mar 2001 15:04:54 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: BufferSync() & FlushRelationBuffers() conflict"
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Tom, since you appear to be able to recreate the bug, can you comment on\n> this, as to whether we are okay now?\n\nSorry for the delay --- I was down in Norfolk all day, and am just now\ncatching up on email. I will pull Vadim's update and run the test some\nmore. However, last night I only saw the failure once in about an\nhour's worth of testing, so it's not that easy to reproduce anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 22:47:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BufferSync() & FlushRelationBuffers() conflict "
},
{
"msg_contents": "\nokay, baring you bein able to recreate the bug between now and, say,\n13:00AST tomorrow, I'll wrap up RC1 and get her out the door ...\n\nOn Wed, 21 Mar 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Tom, since you appear to be able to recreate the bug, can you comment on\n> > this, as to whether we are okay now?\n>\n> Sorry for the delay --- I was down in Norfolk all day, and am just now\n> catching up on email. I will pull Vadim's update and run the test some\n> more. However, last night I only saw the failure once in about an\n> hour's worth of testing, so it's not that easy to reproduce anyway...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Wed, 21 Mar 2001 23:51:08 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: BufferSync() & FlushRelationBuffers() conflict "
},
{
"msg_contents": "> > Tom, since you appear to be able to recreate the bug, can you comment on\n> > this, as to whether we are okay now?\n> \n> Sorry for the delay --- I was down in Norfolk all day, and am just now\n> catching up on email. I will pull Vadim's update and run the test some\n> more. However, last night I only saw the failure once in about an\n> hour's worth of testing, so it's not that easy to reproduce anyway...\n\nI saw >~ 10 failures with -B 32 in ~ 3 minutes of testing. With old code,\nof course -:)\n\nVadim\n\n\n",
"msg_date": "Wed, 21 Mar 2001 19:52:54 -0800",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": true,
"msg_subject": "Re: BufferSync() & FlushRelationBuffers() conflict "
}
] |
[
{
"msg_contents": "Hello ! \nMy program is given below and the errors which Iam getting on executing \nthe program is \"ClassNotFoundException:org.postgresql.Driver\n SQLException:No suitable driver\".\nCan you tell me what's wrong in my program?\n\nRegards,\nSourabh\n\n\n\n\n\nimport java.sql.*;\n\npublic class DM\n{\n public static void main(String args[])\n {\n String url = \"jdbc:postgresql:testdb\";\n\n Connection con;\n String createString;\n createString = \"create MyInfo table \"+\"(INTERFACE_TYPE \n INTEGER,\"+\"EQUIPMENT_TYPE INTEGER)\";\n\n Statement stmt;\n\n try {\n Class.forName(\"org.postgresql.Driver\");\n }\n catch(java.lang.ClassNotFoundException e)\n {\n System.err.print(\"ClassNotFoundException:\");\n System.err.println(e.getMessage());\n }\n\n try {\n con = DriverManager.getConnection(url,\"sdixit\",\"sdixit\");\n\n stmt = con.createStatement();\n\n stmt.executeUpdate(createString);\n\n stmt.close();\n\n con.close();\n\n } catch(SQLException ex)\n {\n System.err.println(\"SQLException:\"+ex.getMessage());\n }\n }\n\n----- Original Message -----\nFrom: Peter T Mount <peter@retep.org.uk>\nDate: Wednesday, March 21, 2001 4:56 pm\nSubject: Re: [HACKERS] PostgreSQL-JDBC driver\n\n> Quoting sourabh dixit <sourabh.dixit@wipro.com>:\n> \n> > \n> > \n> > -----\n> > \n> > Hi,\n> > \n> > I am trying to access PostGreSQL database running at the default \n> port> 5432\n> > using JDBC. But the application is giving error \"Cannot find \n> suitable> driver\". I have included JDBC driver JAR file in my \n> CLASSPATH and\n> > Class.forName(\"org.postgresql.Driver\") is loading driver \n> successfully.> Can anybody tell me how to go about to solve the \n> problem?\n> Sounds like your URL is wrong. Make sure it begins with \n> jdbc:postgresql:\n> Peter\n> \n> -- \n> Peter Mount peter@retep.org.uk\n> PostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\n> RetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n> \n\n",
"msg_date": "Wed, 21 Mar 2001 16:50:02 +0500",
"msg_from": "\"sourabh dixit\" <sourabh.dixit@wipro.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL-JDBC driver"
}
] |
[
{
"msg_contents": "Time to speak up, I have a HPUX 9.07 system and will test today.\n\nJim\n\n> Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> >> HPUX 10.20 (HP-PA architecture)\n> \n> > Time to drop 9.2 from the list?\n> \n> I don't have it running here anymore. Is there anyone on the list\n> who can test on HPUX 9?\n> \n> >> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel\nI think)\n> \n> > What processor? Tatsuo had tested on a 603...\n> \n> It's a Powerbook G3 (FireWire model), but I'm not sure which chip is\n> inside (and Apple's spec sheet isn't too helpful)...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n",
"msg_date": "Wed, 21 Mar 2001 08:38:41 -0500 (EST)",
"msg_from": "\"Jim Buttafuoco\" <jim@/etc/mail/ok>",
"msg_from_op": true,
"msg_subject": "Re: Re: Final Call: RC1 about to go out the door ... "
}
] |
[
{
"msg_contents": "Added to TODO:\n\n\t* Add BETWEEN [ASYMMETRIC|SYMMETRIC]\n\nRoss did a patch for this but some wanted it implemented differently so\nI just added it to the TODO list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 10:51:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "New TODO item"
},
{
"msg_contents": "On Wed, Mar 21, 2001 at 10:51:03AM -0500, Bruce Momjian wrote:\n> Added to TODO:\n> \n> \t* Add BETWEEN [ASYMMETRIC|SYMMETRIC]\n> \n> Ross did a patch for this but some wanted it implemented differently so\n> I just added it to the TODO list.\n\nHmm, have I been coding in my sleep? I think I perhaps commented on the\nSQL'92 standard grammar for this (in reply to someone else's patch),\nbut I don't think I wrote anything. Unless it's another Ross (we're not\nas common as Toms, but getting more so ;-)\n\nRoss\n",
"msg_date": "Wed, 21 Mar 2001 14:11:09 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: New TODO item"
},
{
"msg_contents": "> On Wed, Mar 21, 2001 at 10:51:03AM -0500, Bruce Momjian wrote:\n> > Added to TODO:\n> > \n> > \t* Add BETWEEN [ASYMMETRIC|SYMMETRIC]\n> > \n> > Ross did a patch for this but some wanted it implemented differently so\n> > I just added it to the TODO list.\n> \n> Hmm, have I been coding in my sleep? I think I perhaps commented on the\n> SQL'92 standard grammar for this (in reply to someone else's patch),\n> but I don't think I wrote anything. Unless it's another Ross (we're not\n> as common as Toms, but getting more so ;-)\n\nSorry, got my R*'s mixed up:\n\n\t\"Robert B. Easter\" <reaster@comptechnews.com>\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 15:13:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: New TODO item"
}
] |
[
{
"msg_contents": "I have created an FTP file containing all ourstanding patches. It is\nat:\n\n\tftp://candle.pha.pa.us/pub/postgresql/patches.mbox\n\nI will keep this updated so people know their patches are in the queue\nand have not been forgotten. I may also use this to ask people for\npatch review.\n\nCan someone suggest a nice web frontend CGI script to a mbox file, one\nthat shows sender/subject/date, etc? I don't need to search or modify\nthe messages, just display them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 10:54:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Patch application"
},
{
"msg_contents": "On Wed, Mar 21, 2001 at 10:54:46AM -0500, Bruce Momjian wrote:\n> \n> Can someone suggest a nice web frontend CGI script to a mbox file, one\n> that shows sender/subject/date, etc? I don't need to search or modify\n> the messages, just display them.\n\n\tRun mhonarc on the mbox. It will create HTML files from it. Example\n(using the Debian lists-archives package) can be seen at\nhttp://fslc.usu.edu/archives\n\t\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \nONLINE? Hit <ALT+H> for a quick I.Q. Test!\n",
"msg_date": "Wed, 21 Mar 2001 09:34:40 -0700",
"msg_from": "Roberto Mello <rmello@cc.usu.edu>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch application"
},
{
"msg_contents": "> On Wed, Mar 21, 2001 at 10:54:46AM -0500, Bruce Momjian wrote:\n> > \n> > Can someone suggest a nice web frontend CGI script to a mbox file, one\n> > that shows sender/subject/date, etc? I don't need to search or modify\n> > the messages, just display them.\n> \n> \tRun mhonarc on the mbox. It will create HTML files from it. Example\n> (using the Debian lists-archives package) can be seen at\n> http://fslc.usu.edu/archives\n\nYes, I am looking at mhonarc right now. That is what I will use.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 11:38:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Patch application"
},
{
"msg_contents": "On Wed, 21 Mar 2001, Bruce Momjian wrote:\n\n> I have created an FTP file containing all ourstanding patches. It is\n> at:\n>\n> \tftp://candle.pha.pa.us/pub/postgresql/patches.mbox\n>\n> I will keep this updated so people know their patches are in the queue\n> and have not been forgotten. I may also use this to ask people for\n> patch review.\n>\n> Can someone suggest a nice web frontend CGI script to a mbox file, one\n> that shows sender/subject/date, etc? I don't need to search or modify\n> the messages, just display them.\n\nwould could make a read-only to public, write only to you, mailbox on\nmail.postgresql.org that ppl could access with IMAP ...\n\n\n",
"msg_date": "Wed, 21 Mar 2001 15:17:44 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Patch application"
},
{
"msg_contents": "> On Wed, 21 Mar 2001, Bruce Momjian wrote:\n> \n> > I have created an FTP file containing all ourstanding patches. It is\n> > at:\n> >\n> > \tftp://candle.pha.pa.us/pub/postgresql/patches.mbox\n> >\n> > I will keep this updated so people know their patches are in the queue\n> > and have not been forgotten. I may also use this to ask people for\n> > patch review.\n> >\n> > Can someone suggest a nice web frontend CGI script to a mbox file, one\n> > that shows sender/subject/date, etc? I don't need to search or modify\n> > the messages, just display them.\n> \n> would could make a read-only to public, write only to you, mailbox on\n> mail.postgresql.org that ppl could access with IMAP ...\n\nI actually finished. It is at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nThis URL will reindex if I make any changes to the mailbox file.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 14:42:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Patch application"
}
] |
[
{
"msg_contents": "> I'm been running one backend doing repeated iterations of\n> \n> CREATE TABLE temptest(col int);\n> INSERT INTO temptest VALUES (1);\n> \n> CREATE TEMP TABLE temptest(col int);\n> INSERT INTO temptest VALUES (2);\n> SELECT * FROM temptest;\n> DROP TABLE temptest;\n> \n> SELECT * FROM temptest;\n> DROP TABLE temptest;\n> \n> and another one doing repeated CHECKPOINTs. I've already gotten a\n> couple occurrences of Lamar's failure.\n\nI wasn't able to reproduce failure with current sources.\n\nVadim\n",
"msg_date": "Wed, 21 Mar 2001 09:18:08 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Beta 6 Regression results on Redat 7.0. "
}
] |
[
{
"msg_contents": "Recent changes in pg_crc.c (64 bit CRC) introduced non portable constants of the form:\n\n -c -o pg_crc.o pg_crc.c\n 287 | 0x0000000000000000, 0x42F0E1EBA9EA3693,\n ............................a..................\na - 1506-207 (W) Integer constant 0x42F0E1EBA9EA3693 out of range.\n\nI guess this will show up on a lot of non gcc platforms !!!!!\nIt shows no diffs in the regression tests! From what I understand,\nfailure would only show up after fast shutdown/crash.\n\nAttached is a patch, but I have no idea how portable that is.\n\nAndreas",
"msg_date": "Wed, 21 Mar 2001 18:45:25 +0100",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "RELEASE STOPPER? nonportable int64 constants in pg_crc.c"
},
{
"msg_contents": "Zeugswetter Andreas SB writes:\n\n>\n> Recent changes in pg_crc.c (64 bit CRC) introduced non portable constants of the form:\n>\n> -c -o pg_crc.o pg_crc.c\n> 287 | 0x0000000000000000, 0x42F0E1EBA9EA3693,\n> ............................a..................\n> a - 1506-207 (W) Integer constant 0x42F0E1EBA9EA3693 out of range.\n>\n> I guess this will show up on a lot of non gcc platforms !!!!!\n> It shows no diffs in the regression tests! From what I understand,\n> failure would only show up after fast shutdown/crash.\n>\n> Attached is a patch, but I have no idea how portable that is.\n\nI don't think it's the answer either. The patch assumes that int64 ==\nlong long. The ugly solution might have to be:\n\n#if <int64 == long>\n#define L64 L\n#else\n#define L64 LL\n#endif\n\nconst uint64 crc_table[256] = {\n 0x0000000000000000##L64, 0x42F0E1EBA9EA3693##L64,\n 0x85E1C3D753D46D26##L64, 0xC711223CFA3E5BB5##L64,\n...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 21 Mar 2001 21:51:49 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Recent changes in pg_crc.c (64 bit CRC) introduced non portable constants of the form:\n\n> -c -o pg_crc.o pg_crc.c\n> 287 | 0x0000000000000000, 0x42F0E1EBA9EA3693,\n> ............................a..................\n> a - 1506-207 (W) Integer constant 0x42F0E1EBA9EA3693 out of range.\n\nPlease observe that this is a warning, not an error. Your proposed\nfix is considerably worse than the disease, because it will break on\ncompilers that do not recognize \"LL\" constants, to say nothing of\nmachines where L is correct and LL is some yet wider datatype.\n\nI'm aware that some compilers will produce warnings about these\nconstants, but there should not be any that fail completely, since\n(a) we won't be compiling this code unless we've proven that the\ncompiler supports a 64-bit-int datatype, and (b) the C standard\nforbids a compiler from requiring width suffixes (cf. 6.4.4.1 in C99).\n\nI don't think it's a good tradeoff to risk breaking some platforms in\norder to suppress warnings from overly anal-retentive compilers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 21:46:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c "
},
{
"msg_contents": "\nokay, this was the only one I was waiting to hear on ... the fix committed\nthis afternoon for the regression test, did/does it fix the problem? are\nwe safe on a proper RC1 now?\n\nOn Wed, 21 Mar 2001, Tom Lane wrote:\n\n> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> > Recent changes in pg_crc.c (64 bit CRC) introduced non portable constants of the form:\n>\n> > -c -o pg_crc.o pg_crc.c\n> > 287 | 0x0000000000000000, 0x42F0E1EBA9EA3693,\n> > ............................a..................\n> > a - 1506-207 (W) Integer constant 0x42F0E1EBA9EA3693 out of range.\n>\n> Please observe that this is a warning, not an error. Your proposed\n> fix is considerably worse than the disease, because it will break on\n> compilers that do not recognize \"LL\" constants, to say nothing of\n> machines where L is correct and LL is some yet wider datatype.\n>\n> I'm aware that some compilers will produce warnings about these\n> constants, but there should not be any that fail completely, since\n> (a) we won't be compiling this code unless we've proven that the\n> compiler supports a 64-bit-int datatype, and (b) the C standard\n> forbids a compiler from requiring width suffixes (cf. 6.4.4.1 in C99).\n>\n> I don't think it's a good tradeoff to risk breaking some platforms in\n> order to suppress warnings from overly anal-retentive compilers.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Wed, 21 Mar 2001 22:49:52 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: RELEASE STOPPER? nonportable int64 constants in\n pg_crc.c"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I don't think it's the answer either. The patch assumes that int64 ==\n> long long. The ugly solution might have to be:\n\n> #if <int64 == long>\n> #define L64 L\n> #else\n> #define L64 LL\n> #endif\n\n> const uint64 crc_table[256] = {\n> 0x0000000000000000##L64, 0x42F0E1EBA9EA3693##L64,\n> 0x85E1C3D753D46D26##L64, 0xC711223CFA3E5BB5##L64,\n\nHmm ... how portable is that likely to be? I don't want to suppress\nwarnings on a few boxes at the cost of breaking even one platform\nthat would otherwise work. See my reply to Andreas.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 21:51:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c "
},
{
"msg_contents": "> Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> > Recent changes in pg_crc.c (64 bit CRC) introduced non portable constants of the form:\n> \n> > -c -o pg_crc.o pg_crc.c\n> > 287 | 0x0000000000000000, 0x42F0E1EBA9EA3693,\n> > ............................a..................\n> > a - 1506-207 (W) Integer constant 0x42F0E1EBA9EA3693 out of range.\n> \n> Please observe that this is a warning, not an error. Your proposed\n> fix is considerably worse than the disease, because it will break on\n> compilers that do not recognize \"LL\" constants, to say nothing of\n> machines where L is correct and LL is some yet wider datatype.\n\nI am seeing the same warnings with gcc 2.7.2.1 -Wall on BSDi i386:\n\npg_crc.c:353: warning: integer constant out of range\npg_crc.c:353: warning: integer constant out of range\npg_crc.c:354: warning: integer constant out of range\npg_crc.c:354: warning: integer constant out of range\npg_crc.c:355: warning: integer constant out of range\npg_crc.c:355: warning: integer constant out of range\npg_crc.c:356: warning: integer constant out of range\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 22:47:43 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c"
},
{
"msg_contents": "\nCan we use (long long) rather than LL?\n\n\n> > Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> > > Recent changes in pg_crc.c (64 bit CRC) introduced non portable constants of the form:\n> > \n> > > -c -o pg_crc.o pg_crc.c\n> > > 287 | 0x0000000000000000, 0x42F0E1EBA9EA3693,\n> > > ............................a..................\n> > > a - 1506-207 (W) Integer constant 0x42F0E1EBA9EA3693 out of range.\n> > \n> > Please observe that this is a warning, not an error. Your proposed\n> > fix is considerably worse than the disease, because it will break on\n> > compilers that do not recognize \"LL\" constants, to say nothing of\n> > machines where L is correct and LL is some yet wider datatype.\n> \n> I am seeing the same warnings with gcc 2.7.2.1 -Wall on BSDi i386:\n> \n> pg_crc.c:353: warning: integer constant out of range\n> pg_crc.c:353: warning: integer constant out of range\n> pg_crc.c:354: warning: integer constant out of range\n> pg_crc.c:354: warning: integer constant out of range\n> pg_crc.c:355: warning: integer constant out of range\n> pg_crc.c:355: warning: integer constant out of range\n> pg_crc.c:356: warning: integer constant out of range\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 22:49:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can we use (long long) rather than LL?\n\nNo.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2001 23:36:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can we use (long long) rather than LL?\n> \n> No.\n\nCan I ask how 0LL is different from (long long)0?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 23:45:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can we use (long long) rather than LL?\n>> \n>> No.\n\n> Can I ask how 0LL is different from (long long)0?\n\nThe former is a long-long-int constant ab initio. The latter is an int\nconstant that is subsequently casted to long long. If you write\n\t(long long) 12345678901234567890\nI'd expect a compiler that warns about larger-than-int constants to\nproduce a warning anyway, since the warning is only looking at the\nconstant and not its context of use. (If the warning had that much\nintelligence, it'd not be complaining now.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 00:07:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> const uint64 crc_table[256] = {\n> 0x0000000000000000##L64, 0x42F0E1EBA9EA3693##L64,\n> 0x85E1C3D753D46D26##L64, 0xC711223CFA3E5BB5##L64,\n>> \n>> Hmm ... how portable is that likely to be?\n\n> If the 'L' or 'LL' suffix is portable (probably), and token pasting is\n> portable (yes), then the aggregate should be as well, because one is\n> handled by the preprocessor and the other by the compiler.\n\nIt's just that I've never seen token-pasting applied to build anything\nbut identifiers and strings. In theory it should work, but theory does\nnot always predict what compiler writers will choose to warn about and\nwhere. That \"oversized integer\" warning could be coming out of the\npreprocessor.\n\nBTW, my C book only talks about token-pasting as a step of macro body\nexpansion. Wouldn't we really need something like\n\nSIXTYFOUR(0x0000000000000000), SIXTYFOUR(0x42F0E1EBA9EA3693),\nSIXTYFOUR(0x85E1C3D753D46D26), SIXTYFOUR(0xC711223CFA3E5BB5),\n\nwhere SIXTYFOUR(x) is conditionally defined to be \"x##LL\", \"x##L\",\nor perhaps just \"x\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 11:37:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c "
},
{
"msg_contents": "Tom Lane writes:\n\n> > #if <int64 == long>\n> > #define L64 L\n> > #else\n> > #define L64 LL\n> > #endif\n>\n> > const uint64 crc_table[256] = {\n> > 0x0000000000000000##L64, 0x42F0E1EBA9EA3693##L64,\n> > 0x85E1C3D753D46D26##L64, 0xC711223CFA3E5BB5##L64,\n>\n> Hmm ... how portable is that likely to be?\n\nIf the 'L' or 'LL' suffix is portable (probably), and token pasting is\nportable (yes), then the aggregate should be as well, because one is\nhandled by the preprocessor and the other by the compiler.\n\n> I don't want to suppress warnings on a few boxes at the cost of\n> breaking even one platform that would otherwise work. See my reply to\n> Andreas.\n\nIt's possible that there might be one that warns and truncates, but that's\nunlikely. Why are there suffixes for integer (not float) constants\nanyway?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 17:40:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c"
}
] |
[
{
"msg_contents": "Just a reminder that this will begin in 10 minutes:\n\n---------------------------------------------------------------------------\n\nI am taking part in an on-line Q&A talk tomorrow about PostgreSQL. It\nwill be at:\n\n http://searchdatabase.techtarget.com/Online_Events/searchDatabase_Online_Events_Page\n\nHere is the information:\n\n---------------------------------------------------------------------------\n\nPostgreSQL in the Enterprise\n \n When:\n Mar 21, 2001 at 01:00 PM EST (18:00 GMT)\n \n Speaker:\n Bruce Momjian, Vice President, Database Development,\n Great Bridge, LLC \n \n Topic:\n PostgreSQL is one of the major open source database\n management systems vying for acceptance in the\n enterprise. This Live Expert Q&A will focus on\n PostgreSQL's current and future suitability for \n large-scale, mission-critical systems.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 12:49:35 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Chat starts in 10 minutes"
}
] |
[
{
"msg_contents": "HPUX 9.07 with GCC 2.8.1 fails the regression tests. I will look into \nthis later. I would NOT hold anything up because of this....\n\nJim\n\n> Time to speak up, I have a HPUX 9.07 system and will test today.\n> \n> Jim\n> \n> > Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> > >> HPUX 10.20 (HP-PA architecture)\n> > \n> > > Time to drop 9.2 from the list?\n> > \n> > I don't have it running here anymore. Is there anyone on the list\n> > who can test on HPUX 9?\n> > \n> > >> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 \nkernel\n> I think)\n> > \n> > > What processor? Tatsuo had tested on a 603...\n> > \n> > It's a Powerbook G3 (FireWire model), but I'm not sure which chip is\n> > inside (and Apple's spec sheet isn't too helpful)...\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> > \n> > \n> \n> \n> \n> ---------------------------(end of broadcast)-------------------------\n--\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n\n\n",
"msg_date": "Wed, 21 Mar 2001 15:56:15 -0500 (EST)",
"msg_from": "\"Jim Buttafuoco\" <jim@/etc/mail/ok>",
"msg_from_op": true,
"msg_subject": "Re: Re: Final Call: RC1 about to go out the door ... "
}
] |
[
{
"msg_contents": "I have created a new web page that contains all unapplied patches that\nare either waiting for approval or waiting for new development to begin.\n\nPeople can use this page to know that their patches have not been lost,\nand I may ask people to review this page for patch approval.\n\nThe page is:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nand there is a link to this page at the bottom of the Developers Corner\nweb page.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Mar 2001 16:17:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "New Unapplied Patches web page"
}
] |
[
{
"msg_contents": "Dear experts,\n\nSome advice would be greatly appreciated:\n\n1. I've been running RedHat6.2 and its pgSQL 6.5xx, PHP3.0 counterparts\nfor 10\nmonths, not having time to upgrade and being afraid to upgrade due to\n\"regular problems\" that go along.\n\n2. I am thinking about Debian (my preferred option - good/easy for\n\"automatic\" WEB upgrades) or\nMandrake or Suse/professional (best ranking in some PC magazines).\nStrange: The rankings (No 1 Mandrake, No 2 Suse) did not test Debian -\naccidentally or deliberately?\n\n3. RedHat 7xx has been characterised - unstable!!!\n\nAny \"objective\" Pros/Cons for Linux type/version in terms of pgSQL use -\n\nmy order of preferences Debian, Mandrake, Suse, RedHat.\n\nMuch obliged,\n\nSteven.\n\n\n--\n*****************************************************************************************************\n\nSteven Vajdic (BSc/Hon, MSc)\nSenior Software Engineer\nMotorola Australia Software Centre (MASC)\n2 Second Avenue, Technology Park\nAdelaide, South Australia 5095\n\n\nWORK email: Steven.Vajdic@motorola.com\nWORK email: svajdic@asc.corp.mot.com\nphone (work): +61-8-8168-3435\nFax (work): +61-8-8168-3501\nFront Office (ph): +61-8-8168-3500\nmobile: +61 (0)419 860 903\nHOME email: steven_vajdic@ivillage.com\n*****************************************************************************************************\n\n\n\n",
"msg_date": "Thu, 22 Mar 2001 11:33:16 +1030",
"msg_from": "Steven Vajdic <svajdic@asc.corp.mot.com>",
"msg_from_op": true,
"msg_subject": "Migration - Linux/pgSQL/PHP (type,version)"
},
{
"msg_contents": "On Thu, Mar 22, 2001 at 11:33:16AM +1030, Steven Vajdic wrote:\n> \n> 1. I've been running RedHat6.2 and its pgSQL 6.5xx, PHP3.0 counterparts\n\n\tYou should definitely upgrade to PG 7.1.\n\n> 2. I am thinking about Debian (my preferred option - good/easy for\n> \"automatic\" WEB upgrades) or\n> Mandrake or Suse/professional (best ranking in some PC magazines).\n> Strange: The rankings (No 1 Mandrake, No 2 Suse) did not test Debian -\n> accidentally or deliberately?\n\n\tDon't know which \"ranking\" you are referring to. These are good\ndistributions. Red Hat is also good, but IMHO some things in 7.0 just\nshouldn't have shipped the way they were. Supposedly 7.1 will be much\nbetter.\n\tDebian is excellent. I've been running for quite some time and am very\nhappy with it. If you are looking for an easy install of Debian, try\nProgeny Debian (www.progeny.com).\n\n\t-Roberto\n\t\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \nDammit Jim, I'm a doctor, not a ... Windows 95 Beta Tester\n",
"msg_date": "Wed, 21 Mar 2001 23:02:17 -0700",
"msg_from": "Roberto Mello <rmello@cc.usu.edu>",
"msg_from_op": false,
"msg_subject": "Re: Migration - Linux/pgSQL/PHP (type,version)"
},
{
"msg_contents": "On Thu, Mar 22, 2001 at 11:33:16AM +1030, Steven Vajdic wrote:\n> 2. I am thinking about Debian (my preferred option - good/easy for\n> \"automatic\" WEB upgrades) or\n\nI surely second that. :-)\n\nMy experience is that the Debian upgrade mechanism is much better than that\nof the other dists. But you will have to use the yet to be frozen developer\ndist of Debian to get postgreSQL 7.1 once it gets debianized. The current\nversion in stable is 6.5.3. Of course you can compile 7.1 on the Debian\npotato release yourself.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 23 Mar 2001 12:49:58 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Migration - Linux/pgSQL/PHP (type,version)"
}
] |
[
{
"msg_contents": "Helper,\n\nI need to create a temp table for each db connection. So, I add the\ndollowing code into postgres.c\n----------------\n SPI_connect();\n SPI_exec(\"create temp table tbl_tmp (n int);\",0);\n SPI_exec(\"insert into tbl_tmp values (1);\",0);\n SPI_finish();\n----------------\nright after\n----------------\n /*\n * POSTGRES main processing loop begins here\n *\n * If an exception is encountered, processing resumes here so we\nabort\n * the current transaction and start a new one.\n */\n----------------\n\nI checked the return of SPI_exec and both are fine. Then I run psql and\ngot two error messages, which contradicts to each other!\n-------------------\ndb1=> select * from tbl_tmp;\nERROR: Relation 'tbl_tmp' does not exist\ndb1=> create temp table tbl_tmp (n int);\nERROR: Relation 'tbl_tmp' already exists\ndb1=>\n-------------------\n\nI checked the SPI document, but cannot find solution. Can anyone please\ntells me which document should I look into? Or I cannot use SPI like\nthat at the frist place. If that's the case, is any workaround? The\nbase line is I cannot ask db client program to create that temp table.\n\nThank you very much\n\n--\nLM Liu\n\n\n",
"msg_date": "Wed, 21 Mar 2001 18:03:45 -0800",
"msg_from": "Limin Liu <limin@pumpkinnet.com>",
"msg_from_op": true,
"msg_subject": "Can I use SPI in postgres.c"
},
{
"msg_contents": "Currently I am using 7.1beta4, but I just learned that these SPI code works\nfine in 7.02.\nWe can issue \"select * from tbl_tmp\" and psql will return the correct\ninformation in the temp table!\n\n> I need to create a temp table for each db connection. So, I add the\n> dollowing code into postgres.c\n> ----------------\n> SPI_connect();\n> SPI_exec(\"create temp table tbl_tmp (n int);\",0);\n> SPI_exec(\"insert into tbl_tmp values (1);\",0);\n> SPI_finish();\n> ----------------\n> right after\n> ----------------\n> /*\n> * POSTGRES main processing loop begins here\n> *\n> * If an exception is encountered, processing resumes here so we\n> abort\n> * the current transaction and start a new one.\n> */\n> ----------------\n>\n> I checked the return of SPI_exec and both are fine. Then I run psql and\n> got two error messages, which contradicts to each other!\n> -------------------\n> db1=> select * from tbl_tmp;\n> ERROR: Relation 'tbl_tmp' does not exist\n> db1=> create temp table tbl_tmp (n int);\n> ERROR: Relation 'tbl_tmp' already exists\n> db1=>\n> -------------------\n>\n> I checked the SPI document, but cannot find solution. Can anyone please\n> tells me which document should I look into? Or I cannot use SPI like\n> that at the frist place. If that's the case, is any workaround? The\n> base line is I cannot ask db client program to create that temp table.\n>\n> Thank you very much\n>\n> --\n>\n> LM Liu\n>\n",
"msg_date": "Wed, 21 Mar 2001 18:37:37 -0800",
"msg_from": "Limin Liu <limin@pumpkinnet.com>",
"msg_from_op": true,
"msg_subject": "Re: Can I use SPI in postgres.c"
}
] |
[
{
"msg_contents": "Since I am playing with StarOffice, I figured I'd try --with-odbc,\ncurrent sources, except for the big Bruce commit I just saw :-) \n\n\nUX:tsort: INFO: \tpsqlodbc.o\nUX:tsort: INFO: \tdlg_specific.o\nUX:tsort: INFO: \tconvert.o\nUX:tsort: WARNING: Cycle in data\nUX:tsort: INFO: \tpsqlodbc.o\nUX:tsort: INFO: \tdlg_specific.o\nUX:tsort: INFO: \tmisc.o\nUX:tsort: WARNING: Cycle in data\nUX:tsort: INFO: \tdlg_specific.o\nUX:tsort: INFO: \tpsqlodbc.o\n: libpsqlodbc.a\ncc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o columninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o parse.o statement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm -Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\nUX:ld: ERROR: psqlodbc.o: symbol: '_fini' multiply defined; also in file /usr/ccs/lib/crti.o\ngmake[3]: *** [libpsqlodbc.so.0.26] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/odbc'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\nWhy do WE define _fini? \n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Wed, 21 Mar 2001 22:04:00 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "odbc/UnixWare 7.1.1: No Go."
},
{
"msg_contents": "Works fine here.\n\n\n> Since I am playing with StarOffice, I figured I'd try --with-odbc,\n> current sources, except for the big Bruce commit I just saw :-) \n> \n> \n> UX:tsort: INFO: \tpsqlodbc.o\n> UX:tsort: INFO: \tdlg_specific.o\n> UX:tsort: INFO: \tconvert.o\n> UX:tsort: WARNING: Cycle in data\n> UX:tsort: INFO: \tpsqlodbc.o\n> UX:tsort: INFO: \tdlg_specific.o\n> UX:tsort: INFO: \tmisc.o\n> UX:tsort: WARNING: Cycle in data\n> UX:tsort: INFO: \tdlg_specific.o\n> UX:tsort: INFO: \tpsqlodbc.o\n> : libpsqlodbc.a\n> cc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o columninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o parse.o statement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm -Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\n> UX:ld: ERROR: psqlodbc.o: symbol: '_fini' multiply defined; also in file /usr/ccs/lib/crti.o\n> gmake[3]: *** [libpsqlodbc.so.0.26] Error 1\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/odbc'\n> gmake[2]: *** [all] Error 2\n> gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\n> gmake: *** [all] Error 2\n> \n> Why do WE define _fini? \n> \n> LER\n> \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 00:08:17 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go."
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010321 23:08]:\n> Works fine here.\non a GCC platform, it does. I suspect this is a portability issue. \n\nLER\n\n> \n> \n> > Since I am playing with StarOffice, I figured I'd try --with-odbc,\n> > current sources, except for the big Bruce commit I just saw :-) \n> > \n> > \n> > UX:tsort: INFO: \tpsqlodbc.o\n> > UX:tsort: INFO: \tdlg_specific.o\n> > UX:tsort: INFO: \tconvert.o\n> > UX:tsort: WARNING: Cycle in data\n> > UX:tsort: INFO: \tpsqlodbc.o\n> > UX:tsort: INFO: \tdlg_specific.o\n> > UX:tsort: INFO: \tmisc.o\n> > UX:tsort: WARNING: Cycle in data\n> > UX:tsort: INFO: \tdlg_specific.o\n> > UX:tsort: INFO: \tpsqlodbc.o\n> > : libpsqlodbc.a\n> > cc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o columninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o parse.o statement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm -Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\n> > UX:ld: ERROR: psqlodbc.o: symbol: '_fini' multiply defined; also in file /usr/ccs/lib/crti.o\n> > gmake[3]: *** [libpsqlodbc.so.0.26] Error 1\n> > gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/odbc'\n> > gmake[2]: *** [all] Error 2\n> > gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces'\n> > gmake[1]: *** [all] Error 2\n> > gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\n> > gmake: *** [all] Error 2\n> > \n> > Why do WE define _fini? 
\n> > \n> > LER\n> > \n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 22 Mar 2001 07:06:20 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go."
},
{
"msg_contents": "\nCan't we do something with atexit or other PORTABLE end stuff?\n\nI'll look at it for 7.2. \n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 10:16:03 AM, Peter Eisentraut <peter_e@gmx.net> wrote regarding \nRe: [HACKERS] odbc/UnixWare 7.1.1: No Go.:\n\n\n> Larry Rosenman writes:\n\n> > cc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o \ncolumninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o \nmisc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o parse.o \nstatement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm \n-Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\n> > UX:ld: ERROR: psqlodbc.o: symbol: '_fini' multiply defined; also in file \n/usr/ccs/lib/crti.o\n> > gmake[3]: *** [libpsqlodbc.so.0.26] Error 1\n\n> This is a known portability problem on Unixware (at least known to me) \nand\n> probably other non-GCC setups.\n\n> > Why do WE define _fini?\n\n> Because we need to 'fini' something, I suspect.\n\n> --\n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n",
"msg_date": "Thu, 22 Mar 2001 16:08:05 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go."
},
{
"msg_contents": "Larry Rosenman writes:\n\n> cc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o columninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o parse.o statement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm -Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\n> UX:ld: ERROR: psqlodbc.o: symbol: '_fini' multiply defined; also in file /usr/ccs/lib/crti.o\n> gmake[3]: *** [libpsqlodbc.so.0.26] Error 1\n\nThis is a known portability problem on Unixware (at least known to me) and\nprobably other non-GCC setups.\n\n> Why do WE define _fini?\n\nBecause we need to 'fini' something, I suspect.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 17:16:03 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go."
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Larry Rosenman writes:\n>> Why do WE define _fini?\n\n> Because we need to 'fini' something, I suspect.\n\nSee src/interfaces/odbc/psqlodbc.c line 126. It doesn't look to me like\nthe _fini() does anything useful; could we take it out?\n\nWe do not actually need the _init() anymore either. Possibly the whole\n#ifdef not-Windows segment (lines 92-132) is just asking for trouble and\nshould be removed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 11:20:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Larry Rosenman writes:\n\n> Can't we do something with atexit or other PORTABLE end stuff?\n\nIt's supposed to work transparently for the library user. At least the\n_fini can probably be hooked in atexit, but the _init would probably have\nto be handled some other way. Maybe some\n\nif (!already_inited)\n{\n\talready_inited = 1;\n\tdo_init();\n}\n\nhooked into one or more functions that the ODBC user would likely call\nfirst (like connect maybe).\n\n>\n> I'll look at it for 7.2.\n>\n> LER\n>\n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n>\n> On 3/22/01, 10:16:03 AM, Peter Eisentraut <peter_e@gmx.net> wrote regarding\n> Re: [HACKERS] odbc/UnixWare 7.1.1: No Go.:\n>\n>\n> > Larry Rosenman writes:\n>\n> > > cc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o\n> columninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o\n> misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o parse.o\n> statement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm\n> -Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\n> > > UX:ld: ERROR: psqlodbc.o: symbol: '_fini' multiply defined; also in file\n> /usr/ccs/lib/crti.o\n> > > gmake[3]: *** [libpsqlodbc.so.0.26] Error 1\n>\n> > This is a known portability problem on Unixware (at least known to me)\n> and\n> > probably other non-GCC setups.\n>\n> > > Why do WE define _fini?\n>\n> > Because we need to 'fini' something, I suspect.\n>\n> > --\n> > Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 17:43:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go."
},
{
"msg_contents": "In a very quick look I just made, I tend to agree with Tom, that the \nwhole non-gcc, non-windows stuff should go. \n\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 10:20:11 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Larry Rosenman writes:\n> >> Why do WE define _fini?\n\n> > Because we need to 'fini' something, I suspect.\n\n> See src/interfaces/odbc/psqlodbc.c line 126. It doesn't look to me like\n> the _fini() does anything useful; could we take it out?\n\n> We do not actually need the _init() anymore either. Possibly the whole\n> #ifdef not-Windows segment (lines 92-132) is just asking for trouble and\n> should be removed.\n\n> regards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 16:56:42 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> It's supposed to work transparently for the library user. At least the\n> _fini can probably be hooked in atexit, but the _init would probably have\n> to be handled some other way.\n\nThe _fini does nothing, and I already made a hack to cover lack of the\n_init (which seems not to get called on HPUX) --- see odbc/environ.c\nline 37.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 12:05:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Does this mean it's eligible to be fixed for 7.1? \n\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 11:05:29 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > It's supposed to work transparently for the library user. At least the\n> > _fini can probably be hooked in atexit, but the _init would probably have\n> > to be handled some other way.\n\n> The _fini does nothing, and I already made a hack to cover lack of the\n> _init (which seems not to get called on HPUX) --- see odbc/environ.c\n> line 37.\n\n> regards, tom lane\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Thu, 22 Mar 2001 18:19:52 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Does this mean it's eligible to be fixed for 7.1? \n\nWe can talk about it anyway. Does removing the _fini alone make it work\nfor you, or do we have to remove _init too?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 13:23:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "need to kill the _init too. Then we get other symbol issues, I think due \nto -Wl,z,text, but I'm not sure. \n\nar crs libpsqlodbc.a `lorder info.o bind.o columninfo.o connection.o \nconvert.o drvconn.o environ.o execute.o lobj.o misc.o options.o pgtypes.o \npsqlodbc.o qresult.o results.o socket.o parse.o statement.o gpps.o \ntuple.o tuplelist.o dlg_specific.o | tsort`\nUX:tsort: WARNING: Cycle in data\nUX:tsort: INFO: \tresults.o\nUX:tsort: INFO: \tparse.o\nUX:tsort: INFO: \tinfo.o\nUX:tsort: WARNING: Cycle in data\nUX:tsort: INFO: \texecute.o\nUX:tsort: INFO: \tenviron.o\nUX:tsort: INFO: \tdlg_specific.o\nUX:tsort: INFO: \tconvert.o\nUX:tsort: INFO: \tconnection.o\nUX:tsort: INFO: \tresults.o\nUX:tsort: INFO: \tparse.o\nUX:tsort: INFO: \tstatement.o\nUX:tsort: INFO: \tbind.o\nUX:tsort: WARNING: Cycle in data\nUX:tsort: INFO: \texecute.o\nUX:tsort: INFO: \tenviron.o\nUX:tsort: INFO: \tdlg_specific.o\nUX:tsort: INFO: \tconvert.o\nUX:tsort: INFO: \tconnection.o\nUX:tsort: INFO: \tresults.o\nUX:tsort: INFO: \tqresult.o\nUX:tsort: INFO: \tcolumninfo.o\n: libpsqlodbc.a\ncc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o \ncolumninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o \nmisc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o \nparse.o statement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm \n-Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\nUndefined\t\t\tfirst referenced\nsymbol \t\t\t in file\nclose socket.o\nstrcat info.o\ngetpwuid misc.o\natof connection.o\natoi info.o\natol convert.o\nmalloc info.o\nlabs tuplelist.o\nstrchr info.o\nldexpf libm.so\nldexpl libm.so\nfgets gpps.o\nstrcmp info.o\nstrstr info.o\n_lib_version libm.so\nstrcasecmp convert.o\n_modf libm.so\nstrcpy info.o\nmemcpy convert.o\nstrlen info.o\n__ctype convert.o\nstrrchr results.o\nstrtok info.o\nmodff libm.so\nmodfl libm.so\ntime convert.o\nlocaltime convert.o\nmultibyte_char_check convert.o\n__thr_errno libm.so\ngetpid 
misc.o\nsprintf info.o\nsetbuf misc.o\ninet_addr socket.o\nmultibyte_strchr convert.o\nstrdup columninfo.o\nstrncasecmp convert.o\ngetuid misc.o\ncheck_client_encoding connection.o\nldexp libm.so\nrealloc info.o\nmultibyte_init convert.o\nfclose gpps.o\nfopen misc.o\nstrncat convert.o\nstrncpy connection.o\ngethostbyname socket.o\nstrncmp info.o\nsscanf connection.o\nvfprintf misc.o\nfree info.o\n_write libm.so\nilogb libm.so\nfrexpl libm.so\nUX:ld: WARNING: Symbol referencing errors.\nrm -f libpsqlodbc.so.0\nln -s libpsqlodbc.so.0.26 libpsqlodbc.so.0\nrm -f libpsqlodbc.so\nln -s libpsqlodbc.so.0.26 libpsqlodbc.so\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 12:23:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Does this mean it's eligible to be fixed for 7.1?\n\n> We can talk about it anyway. Does removing the _fini alone make it work\n> for you, or do we have to remove _init too?\n\n> regards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 18:45:08 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> need to kill the _init too. Then we get other symbol issues, I think due \n> to -Wl,z,text, but I'm not sure. \n\nUm. This suggests that the real problem is a completely wrong approach\nto linking the shared lib. On this evidence I'm not going to touch the\n_init/_fini ... looks more like you should be fooling with linker\nswitches.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 13:49:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "\nPeter,\n I'm not a GNU MAKE person, can you help here? \n\nLER\n\n>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 12:49:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > need to kill the _init too. Then we get other symbol issues, I think due\n> > to -Wl,z,text, but I'm not sure.\n\n> Um. This suggests that the real problem is a completely wrong approach\n> to linking the shared lib. On this evidence I'm not going to touch the\n> _init/_fini ... looks more like you should be fooling with linker\n> switches.\n\n> regards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 18:52:44 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "\nusing the following link, with the _init/_fini killed, works:\n\ncc -G *.o -L /usr/local/pgsql/lib -lpq -R/usr/local/pgsql/lib -lsocket -o \nlibpsqlodbc.so.0.26\n\n\nSO, Peter, how do we fix the generated make file?\n\n\n>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 12:23:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > Does this mean it's eligible to be fixed for 7.1?\n\n> We can talk about it anyway. Does removing the _fini alone make it work\n> for you, or do we have to remove _init too?\n\n> regards, tom lane\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n",
"msg_date": "Thu, 22 Mar 2001 19:00:08 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "\nand before you ask, the _init and _fini NEED to go away.\n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 1:00:08 PM, Larry Rosenman <ler@lerctr.org> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> using the following link, with the _init/_fini killed, works:\n\n> cc -G *.o -L /usr/local/pgsql/lib -lpq -R/usr/local/pgsql/lib -lsocket -o\n> libpsqlodbc.so.0.26\n\n\n> SO, Peter, how do we fix the generated make file?\n\n\n> >>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\n> On 3/22/01, 12:23:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re:\n> [HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > Does this mean it's eligible to be fixed for 7.1?\n\n> > We can talk about it anyway. Does removing the _fini alone make it work\n> > for you, or do we have to remove _init too?\n\n> > regards, tom lane\n\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n",
"msg_date": "Thu, 22 Mar 2001 19:02:39 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Larry Rosenman writes:\n\n> need to kill the _init too. Then we get other symbol issues, I think due\n> to -Wl,z,text, but I'm not sure.\n\nThese look to be due to the -Bsymbolic. Note they're only warnings, but\nyou could really only tell by trying out the driver. I have a slight\nsuspicion that -z text plus -Bsymbolic is an idiosyncratic combination,\nbut I might be off.\n\n>\n> ar crs libpsqlodbc.a `lorder info.o bind.o columninfo.o connection.o\n> convert.o drvconn.o environ.o execute.o lobj.o misc.o options.o pgtypes.o\n> psqlodbc.o qresult.o results.o socket.o parse.o statement.o gpps.o\n> tuple.o tuplelist.o dlg_specific.o | tsort`\n> UX:tsort: WARNING: Cycle in data\n> UX:tsort: INFO: \tresults.o\n> UX:tsort: INFO: \tparse.o\n> UX:tsort: INFO: \tinfo.o\n> UX:tsort: WARNING: Cycle in data\n> UX:tsort: INFO: \texecute.o\n> UX:tsort: INFO: \tenviron.o\n> UX:tsort: INFO: \tdlg_specific.o\n> UX:tsort: INFO: \tconvert.o\n> UX:tsort: INFO: \tconnection.o\n> UX:tsort: INFO: \tresults.o\n> UX:tsort: INFO: \tparse.o\n> UX:tsort: INFO: \tstatement.o\n> UX:tsort: INFO: \tbind.o\n> UX:tsort: WARNING: Cycle in data\n> UX:tsort: INFO: \texecute.o\n> UX:tsort: INFO: \tenviron.o\n> UX:tsort: INFO: \tdlg_specific.o\n> UX:tsort: INFO: \tconvert.o\n> UX:tsort: INFO: \tconnection.o\n> UX:tsort: INFO: \tresults.o\n> UX:tsort: INFO: \tqresult.o\n> UX:tsort: INFO: \tcolumninfo.o\n> : libpsqlodbc.a\n> cc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o\n> columninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o\n> misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o\n> parse.o statement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm\n> -Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\n> Undefined\t\t\tfirst referenced\n> symbol \t\t\t in file\n> close socket.o\n> strcat info.o\n> getpwuid misc.o\n> atof connection.o\n> atoi info.o\n> atol convert.o\n> malloc info.o\n> labs tuplelist.o\n> strchr info.o\n> ldexpf 
libm.so\n> ldexpl libm.so\n> fgets gpps.o\n> strcmp info.o\n> strstr info.o\n> _lib_version libm.so\n> strcasecmp convert.o\n> _modf libm.so\n> strcpy info.o\n> memcpy convert.o\n> strlen info.o\n> __ctype convert.o\n> strrchr results.o\n> strtok info.o\n> modff libm.so\n> modfl libm.so\n> time convert.o\n> localtime convert.o\n> multibyte_char_check convert.o\n> __thr_errno libm.so\n> getpid misc.o\n> sprintf info.o\n> setbuf misc.o\n> inet_addr socket.o\n> multibyte_strchr convert.o\n> strdup columninfo.o\n> strncasecmp convert.o\n> getuid misc.o\n> check_client_encoding connection.o\n> ldexp libm.so\n> realloc info.o\n> multibyte_init convert.o\n> fclose gpps.o\n> fopen misc.o\n> strncat convert.o\n> strncpy connection.o\n> gethostbyname socket.o\n> strncmp info.o\n> sscanf connection.o\n> vfprintf misc.o\n> free info.o\n> _write libm.so\n> ilogb libm.so\n> frexpl libm.so\n> UX:ld: WARNING: Symbol referencing errors.\n> rm -f libpsqlodbc.so.0\n> ln -s libpsqlodbc.so.0.26 libpsqlodbc.so.0\n> rm -f libpsqlodbc.so\n> ln -s libpsqlodbc.so.0.26 libpsqlodbc.so\n>\n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n>\n> On 3/22/01, 12:23:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re:\n> [HACKERS] odbc/UnixWare 7.1.1: No Go. :\n>\n>\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > Does this mean it's eligible to be fixed for 7.1?\n>\n> > We can talk about it anyway. Does removing the _fini alone make it work\n> > for you, or do we have to remove _init too?\n>\n> > regards, tom lane\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 21:50:53 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Larry Rosenman writes:\n\n> using the following link, with the _init/_fini killed, works:\n>\n> cc -G *.o -L /usr/local/pgsql/lib -lpq -R/usr/local/pgsql/lib -lsocket -o\n> libpsqlodbc.so.0.26\n\nThe libpq should definitely not be there, but if additional libraries such\nas -lsocket make you happy then look at adding a line\n\nSHLIB_LINK += $(filter ...\n\nsimilar to what's in libpq's Makefile.\n\nHowever, I don't think this is strictly necessary, since the library is\ngoing to be loaded by a driver manager which is likely to have all these\nlibraries linked in. I don't understand this architecture too well, so\nit's best resolved by trying the library.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 21:54:14 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "My question is WHY are we using -Bsymbolic and/or -z text anyway?\n\nThese options don't appear to buy us anything but grief on SVR[45] ELF \nsystems..\n\nThe -lpq is NOT needed, that was my f*** up. \n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 2:50:53 PM, Peter Eisentraut <peter_e@gmx.net> wrote regarding \nRe: [HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> Larry Rosenman writes:\n\n> > need to kill the _init too. Then we get other symbol issues, I think due\n> > to -Wl,z,text, but I'm not sure.\n\n> These look to be due to the -Bsymbolic. Note they're only warnings, but\n> you could really only tell by trying out the driver. I have a slight\n> suspicion that -z text plus -Bsymbolic is an idiosyncratic combination,\n> but I might be off.\n\n> >\n> > ar crs libpsqlodbc.a `lorder info.o bind.o columninfo.o connection.o\n> > convert.o drvconn.o environ.o execute.o lobj.o misc.o options.o pgtypes.o\n> > psqlodbc.o qresult.o results.o socket.o parse.o statement.o gpps.o\n> > tuple.o tuplelist.o dlg_specific.o | tsort`\n> > UX:tsort: WARNING: Cycle in data\n> > UX:tsort: INFO: results.o\n> > UX:tsort: INFO: parse.o\n> > UX:tsort: INFO: info.o\n> > UX:tsort: WARNING: Cycle in data\n> > UX:tsort: INFO: execute.o\n> > UX:tsort: INFO: environ.o\n> > UX:tsort: INFO: dlg_specific.o\n> > UX:tsort: INFO: convert.o\n> > UX:tsort: INFO: connection.o\n> > UX:tsort: INFO: results.o\n> > UX:tsort: INFO: parse.o\n> > UX:tsort: INFO: statement.o\n> > UX:tsort: INFO: bind.o\n> > UX:tsort: WARNING: Cycle in data\n> > UX:tsort: INFO: execute.o\n> > UX:tsort: INFO: environ.o\n> > UX:tsort: INFO: dlg_specific.o\n> > UX:tsort: INFO: convert.o\n> > UX:tsort: INFO: connection.o\n> > UX:tsort: INFO: results.o\n> > UX:tsort: INFO: qresult.o\n> > UX:tsort: INFO: columninfo.o\n> > : libpsqlodbc.a\n> > cc -G -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o\n> > columninfo.o connection.o convert.o drvconn.o environ.o execute.o 
lobj.o\n> > misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o\n> > parse.o statement.o gpps.o tuple.o tuplelist.o dlg_specific.o -lm\n> > -Wl,-R/usr/local/pgsql/lib -o libpsqlodbc.so.0.26\n> > Undefined first referenced\n> > symbol in file\n> > close socket.o\n> > strcat info.o\n> > getpwuid misc.o\n> > atof connection.o\n> > atoi info.o\n> > atol convert.o\n> > malloc info.o\n> > labs tuplelist.o\n> > strchr info.o\n> > ldexpf libm.so\n> > ldexpl libm.so\n> > fgets gpps.o\n> > strcmp info.o\n> > strstr info.o\n> > _lib_version libm.so\n> > strcasecmp convert.o\n> > _modf libm.so\n> > strcpy info.o\n> > memcpy convert.o\n> > strlen info.o\n> > __ctype convert.o\n> > strrchr results.o\n> > strtok info.o\n> > modff libm.so\n> > modfl libm.so\n> > time convert.o\n> > localtime convert.o\n> > multibyte_char_check convert.o\n> > __thr_errno libm.so\n> > getpid misc.o\n> > sprintf info.o\n> > setbuf misc.o\n> > inet_addr socket.o\n> > multibyte_strchr convert.o\n> > strdup columninfo.o\n> > strncasecmp convert.o\n> > getuid misc.o\n> > check_client_encoding connection.o\n> > ldexp libm.so\n> > realloc info.o\n> > multibyte_init convert.o\n> > fclose gpps.o\n> > fopen misc.o\n> > strncat convert.o\n> > strncpy connection.o\n> > gethostbyname socket.o\n> > strncmp info.o\n> > sscanf connection.o\n> > vfprintf misc.o\n> > free info.o\n> > _write libm.so\n> > ilogb libm.so\n> > frexpl libm.so\n> > UX:ld: WARNING: Symbol referencing errors.\n> > rm -f libpsqlodbc.so.0\n> > ln -s libpsqlodbc.so.0.26 libpsqlodbc.so.0\n> > rm -f libpsqlodbc.so\n> > ln -s libpsqlodbc.so.0.26 libpsqlodbc.so\n> >\n> > >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n> >\n> > On 3/22/01, 12:23:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re:\n> > [HACKERS] odbc/UnixWare 7.1.1: No Go. :\n> >\n> >\n> > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > Does this mean it's eligible to be fixed for 7.1?\n> >\n> > > We can talk about it anyway. 
Does removing the _fini alone make it work\n> > > for you, or do we have to remove _init too?\n> >\n> > > regards, tom lane\n> >\n> >\n\n> --\n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n",
"msg_date": "Thu, 22 Mar 2001 20:54:22 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> My question is WHY are we using -Bsymbolic and/or -z text anyway?\n> These options don't appear to buy us anything but grief on SVR[45] ELF \n> systems..\n\nI have no idea what -z text means to your linker, but if it has a\n-Bsymbolic option then it's a good bet that you need that. The ODBC\ndriver contains some function names that duplicate names in the unixODBC\ndriver manager. The driver's own references to these functions *must*\nbe resolved to its own routines and not the manager's, else havoc\nensues. But for some reason, the other way is the default on many\nplatforms.\n\nDo not assume that you have this right just because the build succeeds.\nI found in testing on HPUX that not only could you build a wrongly\nlinked driver, but it would actually load and connect. Only certain\nkinds of queries exhibited the problem. In short: better test it before\nyou claim you have it fixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 16:38:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "OK, will do. For the record:\n\n -z text\n In dynamic mode only, force a fatal error if any relocations\n against non-writable, allocatable sections remain. This is the\n default for IA-64 objects.\n\nI don't have a good way to test it yet, as the only ODBC client I have is \nStar Office which is a Linux binary, and I can't seem to get the unixODBC \nconfiguration to like it on the FreeBSD box. \n\nI guess this will have to wait for 7.2, since we are in Freeze, though. \n\nNo biggie for me, just trying to help.\n\nI do think that the _init/_fini stuff should go away for NON-GCC \nplatforms, though. \n\nAlso, docs for UnixWare are at:\nhttp://uw7doc.sco.com/\nor my local copy for my local system (accessable however):\nhttp://www.lerctr.org:457/\n\n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 3:38:59 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > My question is WHY are we using -Bsymbolic and/or -z text anyway?\n> > These options don't appear to buy us anything but grief on SVR[45] ELF\n> > systems..\n\n> I have no idea what -z text means to your linker, but if it has a\n> -Bsymbolic option then it's a good bet that you need that. The ODBC\n> driver contains some function names that duplicate names in the unixODBC\n> driver manager. The driver's own references to these functions *must*\n> be resolved to its own routines and not the manager's, else havoc\n> ensues. But for some reason, the other way is the default on many\n> platforms.\n\n> Do not assume that you have this right just because the build succeeds.\n> I found in testing on HPUX that not only could you build a wrongly\n> linked driver, but it would actually load and connect. Only certain\n> kinds of queries exhibited the problem. In short: better test it before\n> you claim you have it fixed.\n\n> regards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 21:50:34 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "OK, it *IS* just a WARNING that the symbols are undefined.\n\nSO, can we get the _fini/_init stuff commented/taken out for 7.1?\n\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 3:38:59 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > My question is WHY are we using -Bsymbolic and/or -z text anyway?\n> > These options don't appear to buy us anything but grief on SVR[45] ELF\n> > systems..\n\n> I have no idea what -z text means to your linker, but if it has a\n> -Bsymbolic option then it's a good bet that you need that. The ODBC\n> driver contains some function names that duplicate names in the unixODBC\n> driver manager. The driver's own references to these functions *must*\n> be resolved to its own routines and not the manager's, else havoc\n> ensues. But for some reason, the other way is the default on many\n> platforms.\n\n> Do not assume that you have this right just because the build succeeds.\n> I found in testing on HPUX that not only could you build a wrongly\n> linked driver, but it would actually load and connect. Only certain\n> kinds of queries exhibited the problem. In short: better test it before\n> you claim you have it fixed.\n\n> regards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 22:02:45 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "Can I get a go/nogo decision on whether these two functions can be #if'd \nout for 7.1? \n\nThanks.\n\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/22/01, 4:02:45 PM, Larry Rosenman <ler@lerctr.org> wrote regarding Re: \n[HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> OK, it *IS* just a WARNING that the symbols are undefined.\n\n> SO, can we get the _fini/_init stuff commented/taken out for 7.1?\n\n> LER\n\n\n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\n> On 3/22/01, 3:38:59 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re:\n> [HACKERS] odbc/UnixWare 7.1.1: No Go. :\n\n\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > My question is WHY are we using -Bsymbolic and/or -z text anyway?\n> > > These options don't appear to buy us anything but grief on SVR[45] ELF\n> > > systems..\n\n> > I have no idea what -z text means to your linker, but if it has a\n> > -Bsymbolic option then it's a good bet that you need that. The ODBC\n> > driver contains some function names that duplicate names in the unixODBC\n> > driver manager. The driver's own references to these functions *must*\n> > be resolved to its own routines and not the manager's, else havoc\n> > ensues. But for some reason, the other way is the default on many\n> > platforms.\n\n> > Do not assume that you have this right just because the build succeeds.\n> > I found in testing on HPUX that not only could you build a wrongly\n> > linked driver, but it would actually load and connect. Only certain\n> > kinds of queries exhibited the problem. In short: better test it before\n> > you claim you have it fixed.\n\n> > regards, tom lane\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Fri, 23 Mar 2001 22:58:56 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "I'll take the deafening silence as a NO?\n\nLER\n\n* Larry Rosenman <ler@lerctr.org> [010323 16:59]:\n> Can I get a go/nogo decision on whether these two functions can be #if'd \n> out for 7.1? \n> \n> Thanks.\n> \n> LER\n> \n> \n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n> \n> On 3/22/01, 4:02:45 PM, Larry Rosenman <ler@lerctr.org> wrote regarding Re: \n> [HACKERS] odbc/UnixWare 7.1.1: No Go. :\n> \n> \n> > OK, it *IS* just a WARNING that the symbols are undefined.\n> \n> > SO, can we get the _fini/_init stuff commented/taken out for 7.1?\n> \n> > LER\n> \n> \n> > >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n> \n> > On 3/22/01, 3:38:59 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re:\n> > [HACKERS] odbc/UnixWare 7.1.1: No Go. :\n> \n> \n> > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > My question is WHY are we using -Bsymbolic and/or -z text anyway?\n> > > > These options don't appear to buy us anything but grief on SVR[45] ELF\n> > > > systems..\n> \n> > > I have no idea what -z text means to your linker, but if it has a\n> > > -Bsymbolic option then it's a good bet that you need that. The ODBC\n> > > driver contains some function names that duplicate names in the unixODBC\n> > > driver manager. The driver's own references to these functions *must*\n> > > be resolved to its own routines and not the manager's, else havoc\n> > > ensues. But for some reason, the other way is the default on many\n> > > platforms.\n> \n> > > Do not assume that you have this right just because the build succeeds.\n> > > I found in testing on HPUX that not only could you build a wrongly\n> > > linked driver, but it would actually load and connect. Only certain\n> > > kinds of queries exhibited the problem. 
In short: better test it before\n> > > you claim you have it fixed.\n> \n> > > regards, tom lane\n> \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 24 Mar 2001 10:28:24 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go."
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I'll take the deafening silence as a NO?\n\nI was (a) waiting to see what Peter thought about it, and (b) wondering\nwhether you'd actually tested to see that the built ODBC driver does\nsomething useful. I'm not eager to risk a post-RC1 change that could\nconceivably break other platforms. Moving ODBC on UW from \"doesn't\ncompile\" to \"compiles\" isn't enough of a reason to take the risk.\nYou need to demonstrate that it moves all the way to \"works\".\n\nIt'd probably be a good idea to discuss the change on pgsql-odbc,\ntoo, just in case anyone interested is hanging out there and not here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Mar 2001 11:49:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010324 17:35]:\n> Tom Lane writes:\n> \n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > I'll take the deafening silence as a NO?\n> >\n> > I was (a) waiting to see what Peter thought about it,\n> \n> Don't ask me, I don't know what this does...\n> \n> > and (b) wondering\n> > whether you'd actually tested to see that the built ODBC driver does\n> > something useful. I'm not eager to risk a post-RC1 change that could\n> > conceivably break other platforms. Moving ODBC on UW from \"doesn't\n> > compile\" to \"compiles\" isn't enough of a reason to take the risk.\n> > You need to demonstrate that it moves all the way to \"works\".\n> \n> Methinks so too.\nthen it'll have to wait for 7.2. I can't get enough bits to work, \nand no answer on pgsql-odbc yet.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sat, 24 Mar 2001 17:36:18 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] odbc/UnixWare 7.1.1: No Go."
},
{
"msg_contents": "Tom Lane writes:\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > I'll take the deafening silence as a NO?\n>\n> I was (a) waiting to see what Peter thought about it,\n\nDon't ask me, I don't know what this does...\n\n> and (b) wondering\n> whether you'd actually tested to see that the built ODBC driver does\n> something useful. I'm not eager to risk a post-RC1 change that could\n> conceivably break other platforms. Moving ODBC on UW from \"doesn't\n> compile\" to \"compiles\" isn't enough of a reason to take the risk.\n> You need to demonstrate that it moves all the way to \"works\".\n\nMethinks so too.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 25 Mar 2001 00:45:22 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: odbc/UnixWare 7.1.1: No Go. "
}
] |
[
{
"msg_contents": "At 22:03 21/03/01 +0100, Peter Eisentraut wrote:\n>\n>This is going to be a disaster for the coder. Every time you look at an\n>elog you don't know what it does? Is the first arg a %s or a %d? What's\n>the first %s, what the second?\n\nFWIW, I did a quick scan for elog in PG and found:\n\n- 6856 calls (may include commented-out calls) \n- 2528 unique messages\n- 1248 have no parameters\n- 859 have exactly one argument\n- 285 have exactly 2 args\n- 136 have 3 or more args\n\nso 83% have one or no arguments, which is probably not going to be very\nconfusing.\n\nLooking at the actual messages, there is also a great deal of opportunity\nto standardize and simplify since many of the messages only differ by their\nprefixed function name.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 22 Mar 2001 15:47:52 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: More on elog and error codes"
}
] |
[
{
"msg_contents": "I have finished pgindent. We also had many old comments of the format:\n\n\t/* ------\n * comment\n * ------\n */\n\nThese are now the more concise:\n\n\t/*\n * comment\n */\n\nAlso, comments with dashes are not wrapped nicely by pgindent. Some\ncomments need dashes to preserver layout, but many did not need them. I\nran pgindent to re-wrap those.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 01:25:07 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pgindent completed"
},
{
"msg_contents": "\nOn Thu, Mar 22, 2001 at 01:25:07AM -0500, Bruce Momjian wrote:\n> I have finished pgindent. We also had many old comments of the format:\n> \n> \t/* ------\n> * comment\n> * ------\n> */\n> \n> These are now the more concise:\n> \n> \t/*\n> * comment\n> */\n\n Hmm, intereting. What is bad on the \"old\" format? IMHO it's more synoptical\nfor example before function, where a comment must be good visible for fast\nfile browsing. Inside functions I agree with more brief version. \n\n I have macro for this in my editor, update it? :-)\n\n\t\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 22 Mar 2001 10:54:05 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: pgindent completed"
},
{
"msg_contents": "> \n> On Thu, Mar 22, 2001 at 01:25:07AM -0500, Bruce Momjian wrote:\n> > I have finished pgindent. We also had many old comments of the format:\n> > \n> > \t/* ------\n> > * comment\n> > * ------\n> > */\n> > \n> > These are now the more concise:\n> > \n> > \t/*\n> > * comment\n> > */\n> \n> Hmm, intereting. What is bad on the \"old\" format? IMHO it's more synoptical\n> for example before function, where a comment must be good visible for fast\n> file browsing. Inside functions I agree with more brief version. \n> \n> I have macro for this in my editor, update it? :-)\n\nThese were all hand-modified. Having comments before functions was not\ntouched, only such comments in functions, and only where you didn't want\na heavy comment and there was no indenting that needed preserving.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 10:14:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent completed"
}
] |
[
{
"msg_contents": "I know pgindent has risks. Unfortunately, if we don't run pgindent, or\nrun it at a different time, we have other problems. Either the code is\nnot consistent for new developers, or patches supplied against the most\nrecent release do not patch cleanly.\n\nBoth seem worse to me than taking the risk of pgindent. I am willing to\nrun it earlier in the beta cycle next time, but usually there is someone\nworking on some patches during beta and pgindent would affect them.\n\nI feel kind of stuck because none of the options seems very good.\n\nAnyway, it is done for 7.2. I'll keep my fingers crossed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 01:29:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "run of pgindent"
}
] |
[
{
"msg_contents": "\n============== shutting down postmaster ==============\n\n======================\n All 76 tests passed.\n======================\n\nrm regress.o\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Thu, 22 Mar 2001 11:33:33 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "solaris 7/sparc good to go:"
}
] |
[
{
"msg_contents": "I'm currently concerned about these recent reports:\n\n* Joel Burton's report of disappearing files, 3/20. This is real scary,\nbut no one else has reported anything like it.\n\n* Tatsuo's weird failure in XLogFileInit (\"ZeroFill: no such file or\ndirectory\"). I'm hoping this can be explained away, but probably we\nought to alter the code so that we can detect the case where no errno\nis set by write() and avoid printing a bogus message.\n\nDo people feel comfortable putting out RC1 when we don't know the\nreasons for these reports?\n\nAnother thing I'd like to fix before RC1 is Adriaan's complaint about\nmishandling of int8-sized numeric constants on Alpha. Seems to me that\nwe want Alpha to behave like other platforms, ie T_Integer parse nodes\nshould only be generated for values that fit in int4. Otherwise Alpha\nwill have different type resolution behavior for expressions that\ncontain such constants, and that's going to be real confusing. I'm\nthinking about making scan.l do\n\n long x;\n\n errno = 0;\n x = strtol((char *)yytext, &endptr, 10);\n if (*endptr != '\\0' || errno == ERANGE\n#ifdef HAVE_LONG_INT_64\n /* if long is wider than 32 bits, check for overflow */\n || x != (long) ((int32) x)\n#endif\n )\n {\n /* integer too large, treat it as a float */\n\nObjections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 11:03:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Potential RC1-stoppers"
},
{
"msg_contents": "> I'm currently concerned about these recent reports:\n> \n> * Joel Burton's report of disappearing files, 3/20. This is real scary,\n> but no one else has reported anything like it.\n> \n> * Tatsuo's weird failure in XLogFileInit (\"ZeroFill: no such file or\n> directory\"). I'm hoping this can be explained away, but probably we\n> ought to alter the code so that we can detect the case where no errno\n> is set by write() and avoid printing a bogus message.\n> \n> Do people feel comfortable putting out RC1 when we don't know the\n> reasons for these reports?\n\nCan we keep an eye on these and address in 7.1.1? 7.1 will need fixes\nanyway.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 22 Mar 2001 11:09:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Potential RC1-stoppers"
}
] |
[
{
"msg_contents": "Is it possible to fix this before RC1?\n\nbray=# \\d price\n Table \"price\"\n Attribute | Type | Modifier \n-------------+-----------------------+----------\n product | character varying(10) | not null\n cost | numeric(12,2) | not null\n home | numeric(12,2) | not null\n export | numeric(12,2) | not null\n next_home | numeric(12,2) | \n next_export | numeric(12,2) | \nIndex: price_pkey\n\nbray=# select * from price where home = 1.50;\nERROR: Unable to identify an operator '=' for types 'numeric' and 'float8'\n\tYou will have to retype this query using an explicit cast\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Every good gift and every perfect gift is from above, \n coming down from the Father of the heavenly lights,\n who does not change like shifting shadows.\" \n James 1:17 \n\n\n",
"msg_date": "Thu, 22 Mar 2001 16:34:56 +0000",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Missing operator for numeric comparison"
},
{
"msg_contents": "\"Oliver Elphick\" <olly@lfix.co.uk> writes:\n> Is it possible to fix this before RC1?\n\nIf it were an easily fixed thing, it would have been fixed long ago.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 12:06:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing operator for numeric comparison "
}
] |
[
{
"msg_contents": "\nLooks like initdb is just a tad too strict when checking to make sure\nthe data directory is empty. Yesterday I created a new data directory\nas it's own filesystem (linux ext2), it includes a lost+found directory.\nTo initdb this means that the directory is no longer empty and it refuses\nto run. While I hope to never need lost+found, e2fsck will recreate it\nif it is missing (it is safer to create it when the filesystem is stable).\n\nMy quick fix to allow the lost+found directory:\n--- src/bin/initdb/initdb.sh 2001/03/13 21:37:15 1.122\n+++ src/bin/initdb/initdb.sh 2001/03/22 15:45:46\n@@ -402,7 +402,7 @@\n \n # find out if directory is empty\n pgdata_contents=`ls -A \"$PGDATA\" 2>/dev/null`\n-if [ x\"$pgdata_contents\" != x ]\n+if [ x\"$pgdata_contents\" != x -a \"$pgdata_contents\" != \"lost+found\" ]\n then\n (\n echo \"$CMDNAME: The directory $PGDATA exists but is not empty.\"\n\n\nThis fix works for ext2, but will (obviously) not work if the filesystem\nuses something other than \"lost+found\".\n\nSteve Stock\nsteve@technolope.org\n",
"msg_date": "Thu, 22 Mar 2001 11:41:05 -0500",
"msg_from": "Steve Stock <steve@technolope.org>",
"msg_from_op": true,
"msg_subject": "initdb and data directories with lost+found"
},
{
"msg_contents": "Steve Stock <steve@technolope.org> writes:\n> --- src/bin/initdb/initdb.sh 2001/03/13 21:37:15 1.122\n> +++ src/bin/initdb/initdb.sh 2001/03/22 15:45:46\n> @@ -402,7 +402,7 @@\n \n> # find out if directory is empty\n> pgdata_contents=`ls -A \"$PGDATA\" 2>/dev/null`\n> -if [ x\"$pgdata_contents\" != x ]\n> +if [ x\"$pgdata_contents\" != x -a \"$pgdata_contents\" != \"lost+found\" ]\n> then\n> (\n> echo \"$CMDNAME: The directory $PGDATA exists but is not empty.\"\n\n> This fix works for ext2, but will (obviously) not work if the filesystem\n> uses something other than \"lost+found\".\n\nAFAIK that name is universally used. Seems like a reasonable change to\nme; Peter, do you agree?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 12:08:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb and data directories with lost+found "
},
{
"msg_contents": "Steve Stock writes:\n\n> Looks like initdb is just a tad too strict when checking to make sure\n> the data directory is empty. Yesterday I created a new data directory\n> as it's own filesystem (linux ext2), it includes a lost+found directory.\n\nIt is never a good idea to let initdb loose on a directory that might\npossibly have some other purpose, including that of being the root\ndirectory of an ext2 partition. Initdb or the database system can do\nanything they want in that directory, so it's not good to save lost blocks\nsomewhere in the middle, even if chances are low you need them. I say,\ncreate a subdirectory.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 22 Mar 2001 18:16:03 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb and data directories with lost+found"
},
{
"msg_contents": "On Thu, Mar 22, 2001 at 06:16:03PM +0100, Peter Eisentraut wrote:\n> It is never a good idea to let initdb loose on a directory that might\n> possibly have some other purpose, including that of being the root\n> directory of an ext2 partition. Initdb or the database system can do\n> anything they want in that directory, so it's not good to save lost blocks\n> somewhere in the middle, even if chances are low you need them. I say,\n> create a subdirectory.\n\nWhile I agree that the PGDATA directory shouldn't be used by anything\nsave postgres, I don't think that lost+found constitutes another use of\nthe directory. The only way that a conflict could occur is if postgres\nused the lost+found directory, something I that don't expect will occur.\n\nSide note, the reason that I ran into this in the first place is because\nI'm toying with multiple data directories each on their own logical volume.\nFor the directory structure I'd prefer something like postgres/data[123]\nrather than postgres/data[123]/data or postgres[123]/data.\n\nSteve Stock\nsteve@technolope.org\n",
"msg_date": "Thu, 22 Mar 2001 14:30:52 -0500",
"msg_from": "Steve Stock <steve@technolope.org>",
"msg_from_op": true,
"msg_subject": "Re: initdb and data directories with lost+found"
}
] |
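The check being patched in this thread can be sketched as a stand-alone shell function. This is only an illustrative sketch: `dir_is_empty` is a made-up name (initdb.sh inlines the test rather than wrapping it in a function), and it mirrors the patched logic of treating a directory whose only entry is `lost+found` as empty.

```shell
# Sketch of the patched emptiness test from initdb.sh: a directory is
# considered empty if `ls -A` prints nothing at all, or prints only
# the single entry "lost+found".  dir_is_empty is a hypothetical name.
dir_is_empty() {
    contents=`ls -A "$1" 2>/dev/null`
    if [ x"$contents" = x -o "$contents" = "lost+found" ]; then
        return 0        # empty enough for initdb to proceed
    else
        return 1        # some other entry is in the way
    fi
}
```

As Steve notes, this only covers filesystems whose recovery directory is literally named lost+found; any other entry, even alongside lost+found, still counts as non-empty.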
[
{
"msg_contents": "> * Joel Burton's report of disappearing files, 3/20. This is \n> real scary, but no one else has reported anything like it.\n\nCan please you remind that report?\n\nVadim\n",
"msg_date": "Thu, 22 Mar 2001 08:51:32 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Potential RC1-stoppers"
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> * Joel Burton's report of disappearing files, 3/20. This is \n>> real scary, but no one else has reported anything like it.\n\n> Can please you remind that report?\n\nIt's the \"pg_inherits: not found, but visible\" thread in pghackers\non 3/20 and 3/21. Briefly, he had two separate occurrences of a table\nfile disappearing while the pg_class row remained (and he hadn't\ntried to delete it, either). The only idea I can come up with is that\na removal of some other table removed the wrong file. Ugly.\n\nJoel, can you give us any more info? Do you have a postmaster log of\nthe queries that were issued while this was happening?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 11:58:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Potential RC1-stoppers "
},
{
"msg_contents": "On Thu, 22 Mar 2001, Tom Lane wrote:\n\n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> >> * Joel Burton's report of disappearing files, 3/20. This is \n> >> real scary, but no one else has reported anything like it.\n> \n> > Can please you remind that report?\n> \n> It's the \"pg_inherits: not found, but visible\" thread in pghackers\n> on 3/20 and 3/21. Briefly, he had two separate occurrences of a table\n> file disappearing while the pg_class row remained (and he hadn't\n> tried to delete it, either). The only idea I can come up with is that\n> a removal of some other table removed the wrong file. Ugly.\n> \n> Joel, can you give us any more info? Do you have a postmaster log of\n> the queries that were issued while this was happening?\n\nSorry; I've been at client sites for the past day.\n\nI rebooted my machine, and it didn't happen again that night. Yesterday,\nmy staff reinstalled Pg straight from the CVS but without (!) tarring up\nthe old Pg install, so I'm afraid I don't have any logs. I run Pg w/debug\nswitches on my development machine; this machine did not have such.\n\nAfter rebooting, and since reinstalling Pg\nbeta-6-or-whatever-we're-at-now, it hasn't happened again. 
I'm afraid I\ncan't think of anything unusual about the PC.\n\nUnbranded, decent-quality components:\nAMD K6-III/550\n256MB RAM\nLinux-Mandrake 7.2 w/the secure version of the kernel (2.2.17, IIRC)\nPg beta4\n\n\nI don't have a log, but do have the query that was issued, multiple times,\noverlapping:\n\nSELECT * FROM zope_facinst LIMIT 1000;\n\nwhere zope_facinst is the view\n\nSELECT DISTINCT ON (t.lname, \n t.fname, \n c.fulltitle, c.classcode,\n t.trainid) \n c.classcode, \n t.trainid, \n scw_namecode(t.fname, t.lname) AS namecode,\n t.fullname, \n c.fulltitle, \n c.descrip, \n t.descripshort AS train_descripshort, \n c.descripshort AS class_descripshort \nFROM vlkpclass c, \n vlkptrain t, \n tblinst i, \n trelinsttrain it \nWHERE (((c.classcode = i.classcode) AND \n (i.instid = it.instid)) \nAND (it.trainid = t.trainid)) \nORDER BY t.lname, \n t.fname,\n c.fulltitle, \n c.classcode, \n t.trainid;\n\nSo it's pretty complicated, but not terrible.\n\nThe classes starting w/'t' are tables; those starting with 'v' are\nviews; none of the views are too complex.\n\nscw_namecode() is a simple pl/pgsql routine that just joins the strings\ntogether in a particular way.\n\nThere are about 400 records returned by the view.\n\n\n\nEXPLAIN for it looks like this:\n\nreg2=# explain select * from zope_Facinst;\nNOTICE: QUERY PLAN:\n\nSubquery Scan zope_facinst (cost=339.93..356.42 rows=132 width=141)\n  -> Unique (cost=339.93..356.42 rows=132 width=141)\n    -> Sort (cost=339.93..339.93 rows=1319 width=141)\n      -> Merge Join (cost=261.33..271.56 rows=1319 width=141)\n        -> Sort (cost=223.52..223.52 rows=597 width=92)\n          -> Merge Join (cost=131.72..195.99 rows=597 width=92)\n            -> Index Scan using tblinst_pkey on tblinst i (cost=0.00..53.69 rows=769 width=16)\n            -> Sort (cost=131.72..131.72 rows=78 width=76)\n              -> Merge Join (cost=52.15..129.28 rows=78 width=76)\n                -> Merge Join (cost=52.15..59.96 rows=976 width=68)\n                  -> Sort (cost=27.28..27.28 rows=316 width=40)\n                    -> Seq Scan on tblpers p (cost=0.00..14.16 rows=316 width=40)\n                  -> Sort (cost=24.87..24.87 rows=309 width=28)\n                    -> Seq Scan on tbltrain t (cost=0.00..12.09 rows=309 width=28)\n                -> Index Scan using trelinsttrain_trainid_idx on trelinsttrain it (cost=0.00..42.75 rows=795 width=8)\n        -> Sort (cost=37.82..37.82 rows=221 width=49)\n          -> Seq Scan on tblclass c (cost=0.00..29.21 rows=221 width=49)\n\n\n\nI can provide a dump of the database if anyone would like, or copies of\nthe Zope scripts (very, very simple: they just call the ZSQL method\n'select * from zope_facinst limit 1000')\n\n\nSorry I can't provide much more, and, yes, I know it sucks to have a\nproblem I can't replicate. Err. Computers can be like that.\n\nI hope this helps.\n\n\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Thu, 22 Mar 2001 19:40:36 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: Potential RC1-stoppers "
},
{
"msg_contents": "Joel Burton <jburton@scw.org> writes:\n> I rebooted my machine, and it didn't happen again that night. Yesterday,\n> my staff reinstalled Pg straight from the CVS but without (!) tarring up\n> the old Pg install, so I'm afraid I don't have any logs. I run Pg w/debug\n> switches on my development machine; this machine did not have such.\n\nDrat.\n\n> I don't have a log, but do have the query that was issued, multiple times,\n> overlapping:\n> SELECT * FROM zope_facinst LIMIT 1000;\n\nIt's really unlikely (I hope) that the clients running SELECTs had\nanything to do with it. You had mentioned that you were busy making\nmanual schema revisions while this went on; that process seems more\nlikely to be the guilty party. But if you don't have the logs anymore,\nI suppose there's not much chance of reconstructing what you did :-(\n\nI spent much of this afternoon groveling through the deletion-related\ncode, looking for some code path that could lead to a deletion operation\ndeleting the wrong file. I didn't find anything that looked plausible\nenough to be worth pursuing. So I'm stumped for the moment. We'll have\nto hope that if it happens again, we can gather more data.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 20:03:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Potential RC1-stoppers "
},
{
"msg_contents": "On Thu, 22 Mar 2001, Tom Lane wrote:\n\n> Joel Burton <jburton@scw.org> writes:\n> > I rebooted my machine, and it didn't happen again that night. Yesterday,\n> > my staff reinstalled Pg straight from the CVS but without (!) tarring up\n> > the old Pg install, so I'm afraid I don't have any logs. I run Pg w/debug\n> > switches on my development machine; this machine did not have such.\n> \n> Drat.\n> \n> > I don't have a log, but do have the query that was issued, multiple times,\n> > overlapping:\n> > SELECT * FROM zope_facinst LIMIT 1000;\n> \n> It's really unlikely (I hope) that the clients running SELECTs had\n> anything to do with it. You had mentioned that you were busy making\n> manual schema revisions while this went on; that process seems more\n> likely to be the guilty party. But if you don't have the logs anymore,\n> I suppose there's not much chance of reconstructing what you did :-(\n\nThe dropping and re-making were the zope_facinst view listed in my email.\nI was tinkering with various parameters, trying to see if distinct on\n(list) was faster than distinct list, etc.\n\n> I spent much of this afternoon groveling through the deletion-related\n> code, looking for some code path that could lead to a deletion operation\n> deleting the wrong file. I didn't find anything that looked plausible\n> enough to be worth pursuing. So I'm stumped for the moment. We'll have\n> to hope that if it happens again, we can gather more data.\n\nIt could be my machine; it's not a heavily used machine, so I can't vouch\nfor its stability.\n\nSorry I couldn't help more.\n\nAs always, thanks.\n-- \nJoel Burton <jburton@scw.org>\nDirector of Information Systems, Support Center of Washington\n\n",
"msg_date": "Thu, 22 Mar 2001 20:07:12 -0500 (EST)",
"msg_from": "Joel Burton <jburton@scw.org>",
"msg_from_op": false,
"msg_subject": "Re: Potential RC1-stoppers "
}
] |
[
{
"msg_contents": "I think there is a prob with regexp, which is comparing one less\ncharacter as it should. Below is an example. Result is that last\ncharacter is omitted !\n\nPh.R.\n\npostgres=# select * from pg_database where datname ~ 'ibd01*';\n datname | datdba | encoding | datpath \n---------+--------+----------+---------\n ibd00_8 | 505 | 0 | ibd00_8\n ibd00_1 | 505 | 0 | ibd00_1\n ibd00_2 | 505 | 0 | ibd00_2\n ibd00_3 | 505 | 0 | ibd00_3\n ibd00_4 | 505 | 0 | ibd00_4\n ibd00_5 | 505 | 0 | ibd00_5\n ibd00_6 | 505 | 0 | ibd00_6\n ibd00_7 | 505 | 0 | ibd00_7\n ibd00_9 | 505 | 0 | ibd00_9\n ibd01_1 | 505 | 0 | ibd01_1\n ibd01_2 | 505 | 0 | ibd01_2\n ibd01_3 | 505 | 0 | ibd01_3\n ibd01_4 | 505 | 0 | ibd01_4\n ibd01_5 | 505 | 0 | ibd01_5\n(14 rows)\n\npostgres=# select * from pg_database where datname ~ 'ibd01_*';\n datname | datdba | encoding | datpath \n---------+--------+----------+---------\n ibd01_1 | 505 | 0 | ibd01_1\n ibd01_2 | 505 | 0 | ibd01_2\n ibd01_3 | 505 | 0 | ibd01_3\n ibd01_4 | 505 | 0 | ibd01_4\n ibd01_5 | 505 | 0 | ibd01_5\n(5 rows)\n",
"msg_date": "Thu, 22 Mar 2001 19:08:13 +0100",
"msg_from": "Philippe Rochat <mlrochat@lbdsun.epfl.ch>",
"msg_from_op": true,
"msg_subject": "Prob with regexp"
},
{
"msg_contents": "Philippe Rochat <mlrochat@lbdsun.epfl.ch> writes:\n> I think there is a prob with regexp, which is comparing one less\n> character as it should. Below is an example. Result is that last\n> character is omitted !\n\nPossibly there is a problem with your understanding of regexp patterns\n... but all those matches look valid to me. See\nhttp://www.postgresql.org/devel-corner/docs/postgres/functions-matching.html\nfor more info.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 25 Mar 2001 14:51:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prob with regexp "
},
{
"msg_contents": "Philippe Rochat writes:\n\n> I think there is a prob with regexp, which is comparing one less\n> character as it should. Below is an example. Result is that last\n> character is omitted !\n\nA '*' means \"zero or more of the preceeding character\". You probably want\na '+'.\n\n>\n> Ph.R.\n>\n> postgres=# select * from pg_database where datname ~ 'ibd01*';\n> datname | datdba | encoding | datpath\n> ---------+--------+----------+---------\n> ibd00_8 | 505 | 0 | ibd00_8\n> ibd00_1 | 505 | 0 | ibd00_1\n> ibd00_2 | 505 | 0 | ibd00_2\n> ibd00_3 | 505 | 0 | ibd00_3\n> ibd00_4 | 505 | 0 | ibd00_4\n> ibd00_5 | 505 | 0 | ibd00_5\n> ibd00_6 | 505 | 0 | ibd00_6\n> ibd00_7 | 505 | 0 | ibd00_7\n> ibd00_9 | 505 | 0 | ibd00_9\n> ibd01_1 | 505 | 0 | ibd01_1\n> ibd01_2 | 505 | 0 | ibd01_2\n> ibd01_3 | 505 | 0 | ibd01_3\n> ibd01_4 | 505 | 0 | ibd01_4\n> ibd01_5 | 505 | 0 | ibd01_5\n> (14 rows)\n>\n> postgres=# select * from pg_database where datname ~ 'ibd01_*';\n> datname | datdba | encoding | datpath\n> ---------+--------+----------+---------\n> ibd01_1 | 505 | 0 | ibd01_1\n> ibd01_2 | 505 | 0 | ibd01_2\n> ibd01_3 | 505 | 0 | ibd01_3\n> ibd01_4 | 505 | 0 | ibd01_4\n> ibd01_5 | 505 | 0 | ibd01_5\n> (5 rows)\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sun, 25 Mar 2001 22:19:07 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Prob with regexp"
}
] |
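Peter's point can be demonstrated with any POSIX ERE engine; `grep -E` is used here as a stand-in, on the assumption that it shares the `*`/`+` repetition semantics of the `~` operator. In `ibd01*` the `*` binds only to the `1`, so the pattern also matches any name merely containing `ibd0`:

```shell
names='ibd00_8
ibd01_1
other'

# "ibd0" followed by zero or more "1"s -- ibd00_8 matches as well
printf '%s\n' "$names" | grep -cE 'ibd01*'    # prints 2

# "ibd0" followed by one or more "1"s -- only the ibd01_* names match
printf '%s\n' "$names" | grep -cE 'ibd01+'    # prints 1
```

Since `~` performs unanchored substring matching, a plain `~ 'ibd01'` (or an anchored `~ '^ibd01'`) expresses the intended prefix test without any repetition operator at all.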
[
{
"msg_contents": "FYI.....\n\n\n----- Forwarded message from Jordan Hubbard <jkh@osd.bsdi.com> -----\n\nSender: owner-freebsd-stable@FreeBSD.ORG\nFrom: Jordan Hubbard <jkh@osd.bsdi.com>\nSubject: Revised schedule for 4.3-RELEASE\nDate: Thu, 22 Mar 2001 12:04:09 -0800\nMessage-Id: <20010322120409J.jkh@osd.bsdi.com>\nX-Mailer: Mew version 1.94.1 on Emacs 20.7 / Mule 4.0 (HANANOEN)\nTo: developers@FreeBSD.ORG\nCc: stable@FreeBSD.ORG, qa@FreeBSD.ORG\nDelivered-To: freebsd-stable@freebsd.org\n\nHi folks,\n\nI've thought long and hard about this over the past week and I think\nwe're going to need a bit more time to get 4-stable into the kind of\nshape I think we all *want* it to be in come release time. Now that\npeople are really starting to ramp up and test the bits, we're getting\nsome interesting reports and some of them may be indicative of issues\nwhich are rather severe.\n\nIt's still too early to tell, but with the release date set for March\n30th we also don't really have much time to find out or to fix any of\nthe issues people are current identifying so, without further ado,\nhere's my provisional revised schedule:\n\nMarch 26th:\t 4.3-RC1\nApril 2nd:\t 4.3-RC2\nApril 10th:\t 4.3-RC3\nApril 15th:\t 4.3-RELEASE\n\nWhy three RC (Release Candidate) builds rather than just one? Because\nit seems that people really tend to test the RCs more than the -stable\nsnapshots, which lack ISO images and any real sense of collective\neffort behind them. If we break this into three checkpoints, each on\na Monday, we have a full week for people to syncronize themselves on\nthat RC and make sure that: a) Any bugs they reported in the previous\nweek are gone and b) There are no new bugs they can find.\n\nBy April 15th, tax day** here in the US, I think we'll be ready to\ncreate a very good 4.3-RELEASE!\n\nThanks,\n\n- Jordan\n\n** And I already filed mine, so nyah nyah nyah! 
:-) :-)\n\nTo Unsubscribe: send mail to majordomo@FreeBSD.org\nwith \"unsubscribe freebsd-stable\" in the body of the message\n----- End forwarded message -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Thu, 22 Mar 2001 14:10:29 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "(forw) Revised schedule for 4.3-RELEASE"
}
] |
[
{
"msg_contents": "We have pgsql-7.0.2 running on a production platform doing nightly dumps. I\ntried to import this dump using psql to a pgsql server running from a cvs\nupdate of 7.1 I did today. \n\nAll the data was imported OK except for the data in one table where I got\nthe following message on import :\n\nERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'\nPQendcopy: resetting connection\n\n\nThe result was that this table turned up with no rows at all after the\nimport when it should have contained more than 900000 rows.\n\n\nI get exactly the same error trying to import into a 7.0.2 database on\nmy laptop as well, so I guess the problem might have been around for a\nwhile. \n\nThe production platform creating the dump file is Solaris 7 on an Ultra\nSparc, while the laptop I'm importing the file on is Redhat Linux 6.1 on an\nx86 processor. \n\nThe actual table \"access_log\" got these columns :\na_accesstime timestamp 8 \na_locid int4 4 \na_catid int4 4 \na_searchterm varchar 256 \na_host varchar 64 \na_requesturl varchar 128 \na_action varchar 16 \na_uid int4 4 \na_pt_id int4 4 \n\n\nDo anybody have suggestions to where I should look for the error or what\nother data I need to supply to help somebody look into it ? \n\nIf you look at the seconds part of the time above you notice 60, which make\nme wonder how that could get in there in the first place. \n\nSo to me there seems to bugs, it is possible to get invalid times into the\ndatabase and dump/restore breaks if you manage this.\n\n\nregards, \n\n\tGunnar\n",
"msg_date": "22 Mar 2001 21:48:13 +0100",
"msg_from": "Gunnar R|nning <gunnar@candleweb.no>",
"msg_from_op": true,
"msg_subject": "Problem migrating dump to latest CVS snapshot."
},
{
"msg_contents": "Gunnar R|nning <gunnar@candleweb.no> wrote:\n>All the data was imported OK except for the data in one table where I got\n>the following message on import :\n>\n>ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'\n>PQendcopy: resetting connection\n\nLooks like the \"ISO\" datestyle to me.\n\n>I get exactly the same error trying to import into a 7.0.2 database on my\n>laptop as well, so I guess the problem might have been around for a while. \n\nYou'll need to set the default PGDATESTYLE to ISO prior to importing.\n(I don't recall what the proper way to do this is, but it's definitely\ndocumented).\n\nHTH,\nRay\n-- \nWhere do you want to go today? \n\nConfutatis maledictis, flammis acribus addictis.\n\n",
"msg_date": "Thu, 22 Mar 2001 21:18:16 +0000 (UTC)",
"msg_from": "jdassen@cistron.nl (J.H.M. Dassen (Ray))",
"msg_from_op": false,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot."
},
{
"msg_contents": "Gunnar R|nning <gunnar@candleweb.no> writes:\n> ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'\n\nI'll venture it doesn't like the \"60\" for seconds.\n\n> The production platform creating the dump file is Solaris 7 on an Ultra\n> Sparc, while the laptop I'm importing the file on is Redhat Linux 6.1 on an\n> x86 processor. \n\nSeems Mandrake Linux is not the only platform where roundoff behavior is\nless IEEE-perfect than Thomas would like it to be. Perhaps we need a\nslightly more robust approach to controlling roundoff error.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 16:19:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot. "
},
{
"msg_contents": "Gunnar R|nning <gunnar@candleweb.no> writes:\n> ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'\n\nBTW, did your original data contain any fractional-second timestamps?\nI'm wondering if the original value might have been something like\n\t2000-10-24 15:14:59.999\nin which case sprintf's roundoff of the seconds field to %.2f format\nwould've been enough to do the damage.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 18:24:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot. "
},
{
"msg_contents": "> Seems Mandrake Linux is not the only platform where roundoff behavior is\n> less IEEE-perfect than Thomas would like it to be. Perhaps we need a\n> slightly more robust approach to controlling roundoff error.\n\nGo ahead. istm that asking modulo, trunc, etc to Do The Right Thing is\nnot a big deal, and it would be better to understand how to build\nexecutables that can do math.\n\nCertainly better than writing a bunch of extra checking code to work\naround the inability of a compiler (or compiler options) to do IEEE\nmath. It *is* a standard, ya know ;)\n\n - Thomas\n",
"msg_date": "Fri, 23 Mar 2001 00:14:29 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot."
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Gunnar R|nning <gunnar@candleweb.no> writes:\n> > ERROR: copy: line 154391, Bad timestamp external representation '2000-10-24 15:14:60.00+02'\n> \n> BTW, did your original data contain any fractional-second timestamps?\n> I'm wondering if the original value might have been something like\n> \t2000-10-24 15:14:59.999\n> in which case sprintf's roundoff of the seconds field to %.2f format\n> would've been enough to do the damage.\n\nWhat do you mean by original value ? The value we have in the production\ndatabase ? If so, that shows up as 2000-10-24 15:14:60.00+02 independent of\nwhat platform my client is running on. The production platform was as I\nmentioned Solaris 2.7. \n\nThe value was generated at the time of a given web request by a Java\nservlet and inserted into the database using JDBC. The timestamp in Java is\nthe number of milliseconds since epoch, so yes it is quite probable that it\ncontained a fractional second timestamp ;-) \n\nBut the problem here then might be with the Solaris 2.7 platform and not\nRedhat Linux 6.1 if I am interpreting this right ???\n\nRegards, \n\n\tGunnar \n",
"msg_date": "23 Mar 2001 02:17:03 +0100",
"msg_from": "Gunnar R|nning <gunnar@candleweb.no>",
"msg_from_op": true,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot."
},
{
"msg_contents": "Gunnar R|nning <gunnar@candleweb.no> writes:\n>> BTW, did your original data contain any fractional-second timestamps?\n>> I'm wondering if the original value might have been something like\n>> 2000-10-24 15:14:59.999\n>> in which case sprintf's roundoff of the seconds field to %.2f format\n>> would've been enough to do the damage.\n\n> What do you mean by original value ? The value we have in the production\n> database ? If so, that shows up as 2000-10-24 15:14:60.00+02 independent of\n> what platform my client is running on. The production platform was as I\n> mentioned Solaris 2.7. \n\nIf you still have the value stored in the original database, please try\n\tselect date_part('seconds', ...)\nto see what that reports as the true seconds part of the value.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 20:41:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot. "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> \n> If you still have the value stored in the original database, please try\n> \tselect date_part('seconds', ...)\n> to see what that reports as the true seconds part of the value.\n> \n\nSeems you hit the nail with your theory :\n\nsf-ng=# select date_part('seconds', a_accesstime) from access_log where\na_accesstime > '2000-10-24 15:14:59' limit 1;\n date_part\n-----------\n 59.997\n(1 row)\n\nsf-ng=#\n\n\nregards, \n\n\tGunnar\n",
"msg_date": "23 Mar 2001 03:07:16 +0100",
"msg_from": "Gunnar R|nning <gunnar@candleweb.no>",
"msg_from_op": true,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot."
},
{
"msg_contents": "Gunnar R|nning <gunnar@candleweb.no> writes:\n> Seems you hit the nail with your theory :\n\n> sf-ng=# select date_part('seconds', a_accesstime) from access_log where\n> a_accesstime > '2000-10-24 15:14:59' limit 1;\n> date_part\n> -----------\n> 59.997\n> (1 row)\n\nAh-hah. And we print that with a \"%.2f\" sort of format, so it rounds\noff to 60.00. Even in IEEE arithmetic ;-)\n\nI've suggested before that timestamp output should round the timestamp\nvalue to two fractional digits *before* breaking it down into year/\nmonth/etc. Seems like this is a perfect example of the necessity\nfor that. Thomas, what say you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 21:52:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot. "
},
{
"msg_contents": "(moved to -hackers, since I don't have posting privileges on -general)\n\n> I've suggested before that timestamp output should round the timestamp\n> value to two fractional digits *before* breaking it down into year/\n> month/etc. Seems like this is a perfect example of the necessity\n> for that. Thomas, what say you?\n\nWell, that is a good idea to solve the \"hidden digits problem\",\nintroducing instead a new \"lost digits feature\". But I've been hoping to\nhear a suggestion on how to allow a variable number of digits, without\ncluttering things up with output values ending up with a bunch of 9's at\nthe end.\n\nWhen I first implemented the non-Unix-time date/time types, I was\nworried that the floating point math libraries on *some* of the two\ndozen platforms we support would tend to print out .9999... values\n(having seen this behavior *way* too often on older Unix boxes). But\nI've never actually asked folks to run tests, since I was just happy\nthat the floating point implementation worked well enough to go into\nproduction.\n\nThoughts?\n\n - Thomas\n",
"msg_date": "Fri, 23 Mar 2001 03:05:30 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot."
},
{
"msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n>> I've suggested before that timestamp output should round the timestamp\n>> value to two fractional digits *before* breaking it down into year/\n>> month/etc. Seems like this is a perfect example of the necessity\n>> for that. Thomas, what say you?\n\n> Well, that is a good idea to solve the \"hidden digits problem\",\n> introducing instead a new \"lost digits feature\".\n\nWe already have the \"lost digits feature\", since you cannot get\ntimestamp_out to display anything after the second digit. Now,\nif you want to improve on that ...\n\n> But I've been hoping to\n> hear a suggestion on how to allow a variable number of digits, without\n> cluttering things up with output values ending up with a bunch of 9's at\n> the end.\n\nWell, we could print the seconds part with, say, %.6f format and then\nmanually delete trailing zeroes (and the trailing dot if we find all the\nfractional digits are zero, which would be a nice improvement anyway).\nI'd still think it a good idea to round to the intended number of digits\nbefore we break down the date, however.\n\nThe real question here is how far away from the Epoch do you wish to\nproduce reasonable display of fractional seconds? We have 6-digit\naccuracy out to around 200 years before and after Y2K, which is probably\nfar enough, though maybe we should make it 5 digits to allow some\nmore margin for error.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 22:27:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem migrating dump to latest CVS snapshot. "
}
] |
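The failure mode Tom identifies is reproducible with plain printf-style formatting; the helper names below are hypothetical sketches, not PostgreSQL code. Formatting the stored seconds value 59.997 to two digits rounds up to an illegal seconds field, while rounding to hundredths *before* splitting the value into fields, as Tom suggests, lets the excess carry into the minutes (the `%f` conversions assume a printf and awk that support floating point, as bash and GNU tools do):

```shell
# Naive formatting: round-off happens after the fields are split, so a
# seconds value of 59.997 is printed as the impossible "60.00".
fmt_seconds() {
    LC_ALL=C printf '%.2f' "$1"
}

fmt_seconds 59.997; echo      # prints 60.00

# Round to two digits first, then carry anything >= 60 into the minutes
# field; prints "<minutes carried> <seconds>".
fmt_seconds_carry() {
    LC_ALL=C awk -v s="$1" 'BEGIN {
        s = int(s * 100 + 0.5) / 100        # round *before* splitting
        carry = 0
        if (s >= 60) { s -= 60; carry = 1 }
        printf "%d %05.2f", carry, s
    }'
}

fmt_seconds_carry 59.997; echo   # prints "1 00.00"
```

The same carry logic would need to propagate through minutes, hours, and so on in a real implementation; the sketch only shows the seconds/minutes boundary that produced the bad dump value.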
[
{
"msg_contents": "I've successfully tested release 7.1beta4 on Irix 6.5.11. I'll run another\ntest on the most recent beta or RC next week.\n\n+----------------------------------+------------------------------------+\n| Robert E. Bruccoleri, Ph.D. | Phone: 609 737 6383 |\n| President, Congenomics, Inc. | Fax: 609 737 7528 |\n| 114 W Franklin Ave, Suite K1,4,5 | email: bruc@acm.org |\n| P.O. Box 314 | URL: http://www.congen.com/~bruc |\n| Pennington, NJ 08534 | |\n+----------------------------------+------------------------------------+\n",
"msg_date": "Thu, 22 Mar 2001 20:16:05 -0500 (EST)",
"msg_from": "bruc@stone.congenomics.com (Robert E. Bruccoleri)",
"msg_from_op": true,
"msg_subject": "Regression testing on Irix"
},
{
"msg_contents": "> I've successfully tested release 7.1beta4 on Irix 6.5.11. I'll run another\n> test on the most recent beta or RC next week.\n\nGreat Robert! I'll mark that as tested (if it works for you on beta4, it\nwill work on RC1). Looking forward to confirmation about RC1 when it\nappears :)\n\n - Thomas\n",
"msg_date": "Fri, 23 Mar 2001 02:59:24 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: Regression testing on Irix"
}
] |
[
{
"msg_contents": "Folks,\n\nI am learning and using SPI. In PostgreSQL documentation chapter \"Server\nProgramming Interface,\" there is a small example name \"execq(text,\nint)\".\n\nThis example works as the document says on 7.0.3 and earlier version,\nbut this example DOES NOT work on my 7.1 beta4.\n\nThis is the error message I got. I wonder does anyone test your SPI\nfunctions on 7.1 beta? Does anyone notice the problem I got here?\nI know I should test this under beta6 before I post this message, but my\nco-worker's anti-virus s/w gave him a virus warning message after him\nfinished the download of 7.1beta6 .gz! :-)\n\n------------\ndb1=# select execq('create table a (x int4)', 0);\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!#\n------------\n\nThanks for your help\n\n--\nLM Liu\n\n\n",
"msg_date": "Thu, 22 Mar 2001 17:16:50 -0800",
"msg_from": "Limin Liu <limin@pumpkinnet.com>",
"msg_from_op": true,
"msg_subject": "SPI example does not work for 7.1beta4"
},
{
"msg_contents": "Limin Liu <limin@pumpkinnet.com> writes:\n> I am learning and using SPI. In PostgreSQL documentation chapter \"Server\n> Programming Interface,\" there is a small example name \"execq(text,\n> int)\".\n> This example works as the document says on 7.0.3 and earlier version,\n> but this example DOES NOT work on my 7.1 beta4.\n\nHm. textout() can't be called that way anymore --- as indeed your compiler\nshould have told you, if it's any good at all. I get\n\nexecq.c: In function `execq':\nexecq.c:13: warning: passing arg 1 of `textout' from incompatible pointer type\nexecq.c:13: warning: passing arg 1 of `SPI_exec' makes pointer from integer without a cast\n\nLooks like the example is in need of update. Thanks for the report.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Mar 2001 20:24:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SPI example does not work for 7.1beta4 "
},
{
"msg_contents": "Thanks Tom,\n\nBy the way, did you change the whole archetecture of SPI invocation?\n\nI put\n---\n SPI_connect();\n SPI_exec(\"create temp table tbl_tmp (n int);\",0);\n SPI_exec(\"insert into tbl_tmp values (1);\",0);\n SPI_finish();\n---\nafter InitPostgres and before setsigjmp().\nThis works perfectly in my 7.03 and earlier version. That means, after authentication, we\nwill have a temp table per each db connection. User can use the client program to \"select\"\nor \"insert\" upon this temp table.\n\nHowever, the same code does not work in 7.1. Actually, I got some messages contradicting to\neach other:\n---\ndb1=> select * from tbl_tmp;\nERROR: Relation 'tbl_tmp' does not exist\ndb1=> create temp table tbl_tmp (n int);\nERROR: Relation 'tbl_tmp' already exists\ndb1=>\n---\n\nCan you please give us some hints on what's going on here?\n\nThanks\n\nTom Lane wrote:\n\n> Limin Liu <limin@pumpkinnet.com> writes:\n> > I am learning and using SPI. In PostgreSQL documentation chapter \"Server\n> > Programming Interface,\" there is a small example name \"execq(text,\n> > int)\".\n> > This example works as the document says on 7.0.3 and earlier version,\n> > but this example DOES NOT work on my 7.1 beta4.\n>\n> Hm. textout() can't be called that way anymore --- as indeed your compiler\n> should have told you, if it's any good at all. I get\n>\n> execq.c: In function `execq':\n> execq.c:13: warning: passing arg 1 of `textout' from incompatible pointer type\n> execq.c:13: warning: passing arg 1 of `SPI_exec' makes pointer from integer without a cast\n>\n> Looks like the example is in need of update. Thanks for the report.\n>\n> regards, tom lane\n\n--\nLM Liu\n",
"msg_date": "Thu, 22 Mar 2001 18:00:10 -0800",
"msg_from": "Limin Liu <limin@pumpkinnet.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SPI example does not work for 7.1beta4"
},
{
"msg_contents": "Limin Liu <limin@pumpkinnet.com> writes:\n> By the way, did you change the whole archetecture of SPI invocation?\n\nNo, I told you: what's broken here is the textout() call. Attached\nis the updated example.\n\n> I put\n> ---\n> SPI_connect();\n> SPI_exec(\"create temp table tbl_tmp (n int);\",0);\n> SPI_exec(\"insert into tbl_tmp values (1);\",0);\n> SPI_finish();\n> ---\n> after InitPostgres and before setsigjmp().\n\nI doubt this will work correctly without a transaction around it ...\n\n\t\t\tregards, tom lane\n\n\n#include \"executor/spi.h\" /* this is what you need to work with SPI */\n\nint execq(text *sql, int cnt);\n\nint\nexecq(text *sql, int cnt)\n{\n char *query;\n int ret;\n int proc;\n\n /* Convert given TEXT object to a C string */\n query = DatumGetCString(DirectFunctionCall1(textout,\n PointerGetDatum(sql)));\n\n SPI_connect();\n \n ret = SPI_exec(query, cnt);\n \n proc = SPI_processed;\n /*\n * If this is SELECT and some tuple(s) fetched -\n * returns tuples to the caller via elog (NOTICE).\n */\n if ( ret == SPI_OK_SELECT && SPI_processed > 0 )\n {\n TupleDesc tupdesc = SPI_tuptable->tupdesc;\n SPITupleTable *tuptable = SPI_tuptable;\n char buf[8192];\n int i,j;\n \n for (j = 0; j < proc; j++)\n {\n HeapTuple tuple = tuptable->vals[j];\n \n for (i = 1, buf[0] = 0; i <= tupdesc->natts; i++)\n sprintf(buf + strlen (buf), \" %s%s\",\n SPI_getvalue(tuple, tupdesc, i),\n (i == tupdesc->natts) ? \" \" : \" |\");\n elog (NOTICE, \"EXECQ: %s\", buf);\n }\n }\n\n SPI_finish();\n\n pfree(query);\n\n return (proc);\n}\n",
"msg_date": "Thu, 22 Mar 2001 21:07:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] SPI example does not work for 7.1beta4 "
},
{
"msg_contents": "> > ---\n> > SPI_connect();\n> > SPI_exec(\"create temp table tbl_tmp (n int);\",0);\n> > SPI_exec(\"insert into tbl_tmp values (1);\",0);\n> > SPI_finish();\n> > ---\n> > after InitPostgres and before setsigjmp().\n>\n> I doubt this will work correctly without a transaction around it ...\n\nThanks for the hint. It works fine now between start/finish_xact_command.\n\n--\nLM Liu\n\n\n",
"msg_date": "Fri, 23 Mar 2001 10:30:29 -0800",
"msg_from": "Limin Liu <limin@pumpkinnet.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] SPI example does not work for 7.1beta4"
}
] |
[
{
"msg_contents": "I have a plan to translate 7.1 docs into Japanases and I looked around\ncurrent docs. I noticed that contacts.sgml and ref/current*.sgml are\nnot used anywhere in the result html nor in man pages.\nDoes anybody know the reason? Or am I missing something?\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 23 Mar 2001 17:50:44 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "7.1 docs"
},
{
"msg_contents": "> I have a plan to translate 7.1 docs into Japanases and I looked around\n> current docs. I noticed that contacts.sgml and ref/current*.sgml are\n> not used anywhere in the result html nor in man pages.\n> Does anybody know the reason? Or am I missing something?\n\nI put in contacts.sgml a *long* time ago, thinking that something like\nit should be in the docs. But it was not complete enough to consider\nincluding at the time (now that I look, it only has my name ;), so it\nwas only a placeholder for the future.\n\nWe could take it out altogether if we want. Peter?\n\nThe ref/current_{date,time...}.sgml files are there because (a)\nfunctions should be documented, and (b) someone documented them. But we\nnever documented enough functions to justify setting up an entire\nsection of a manual to cover them. I think that these are more likely to\nbe used in the future, but we do need additional pages written covering\nother functions.\n\n - Thomas\n",
"msg_date": "Fri, 23 Mar 2001 14:51:33 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 docs"
},
{
"msg_contents": "> > I have a plan to translate 7.1 docs into Japanases and I looked around\n> > current docs. I noticed that contacts.sgml and ref/current*.sgml are\n> > not used anywhere in the result html nor in man pages.\n> > Does anybody know the reason? Or am I missing something?\n> \n> I put in contacts.sgml a *long* time ago, thinking that something like\n> it should be in the docs. But it was not complete enough to consider\n> including at the time (now that I look, it only has my name ;), so it\n> was only a placeholder for the future.\n> \n> We could take it out altogether if we want. Peter?\n> \n> The ref/current_{date,time...}.sgml files are there because (a)\n> functions should be documented, and (b) someone documented them. But we\n> never documented enough functions to justify setting up an entire\n> section of a manual to cover them. I think that these are more likely to\n> be used in the future, but we do need additional pages written covering\n> other functions.\n\nOh I see. Thanks for the explanation.\n--\nTatsuo Ishii\n",
"msg_date": "Sat, 24 Mar 2001 00:06:05 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Re: 7.1 docs"
},
{
"msg_contents": "On Fri, Mar 23, 2001 at 02:51:33PM +0000, Thomas Lockhart wrote:\n> \n> The ref/current_{date,time...}.sgml files are there because (a)\n> functions should be documented, and (b) someone documented them. But we\n> never documented enough functions to justify setting up an entire\n> section of a manual to cover them. I think that these are more likely to\n\n\tThis is one of the reasons why I created the PostgreSQL CookBook\nproject. The documentatin of PG function is really small, with barely any\nexamples.\n\tIt looks like either the PG community does not write/use functions or\nvery few people are willing to take 5 minutes and contribute an example\nfor a function. So far (after over a week) the CookBook has 8 recipes \nposted.\n\tFor those wondering, the cookbook project is at\nhttp://www.brasileiro.net/postgres. I plan to post a slew of functions \nfrom the OpenACS project sometime next week.\n\n\t-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \nWHeRe is ThaT DArN ShIfT keY?\n",
"msg_date": "Fri, 23 Mar 2001 08:25:22 -0700",
"msg_from": "Roberto Mello <rmello@cc.usu.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: 7.1 docs"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> I have a plan to translate 7.1 docs into Japanases\n\nBtw...\n\nOne thing I am thinking about doing for the 7.2 cycle is set up the\ndoc/src/ directory in a way to keep translations in tree. It would\nprobably look something like:\n\ndoc/src/\n sgml/\t-- original (implicitly en_US)\n en_GB/\t-- translation\n de_DE/\t-- translation\n ...\n\nThis way (or at least the way I'm imagining it) you would make a directory\nfor you language, copy the file over that you want to translate, and edit\nit there. When you build, all the files that you haven't done will be\npicked up from the original.\n\nHow does that sound?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 24 Mar 2001 12:01:43 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 docs"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> I put in contacts.sgml a *long* time ago, thinking that something like\n> it should be in the docs. But it was not complete enough to consider\n> including at the time (now that I look, it only has my name ;), so it\n> was only a placeholder for the future.\n\nThe \"Resources\" prefix section has contact info. The rest is on the web\nsite.\n\n> The ref/current_{date,time...}.sgml files are there because (a)\n> functions should be documented, and (b) someone documented them. But we\n> never documented enough functions to justify setting up an entire\n> section of a manual to cover them. I think that these are more likely to\n> be used in the future, but we do need additional pages written covering\n> other functions.\n\nAll functions are documented (for appropriate values of \"all\") in the\nUser's Guide, chapter 4. There was probably once the idea of setting up a\nreference page set for the functions, but I don't know if this is\nparticularly better than what we have now. In fact, I would argue it's\nworse.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 24 Mar 2001 13:48:35 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 docs"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> All functions are documented (for appropriate values of \"all\") in the\n> User's Guide, chapter 4. There was probably once the idea of setting up a\n> reference page set for the functions, but I don't know if this is\n> particularly better than what we have now. In fact, I would argue it's\n> worse.\n\nA \"page per function\" approach is clearly overkill for the vast majority\nof our functions. I think that's not unrelated to the fact that no one's\never bothered to prepare such documentation ;-)\n\nOn the other hand, the existing layout of the User's Guide encourages a\n\"line per function\" approach, which is insufficient for at least some\nfunctions. We've worked around that by adding paragraphs below the main\ntable on each page, but that seems a little awkward in many cases.\n\nA reference section in the style of typical Unix section-3 man pages\n(multiple related functions per page, with text discussion and examples)\nwould be a useful compromise, maybe. Needs more thought.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Mar 2001 11:32:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: 7.1 docs "
},
{
"msg_contents": "> Tatsuo Ishii writes:\n> \n> > I have a plan to translate 7.1 docs into Japanases\n> \n> Btw...\n> \n> One thing I am thinking about doing for the 7.2 cycle is set up the\n> doc/src/ directory in a way to keep translations in tree. It would\n> probably look something like:\n> \n> doc/src/\n> sgml/\t-- original (implicitly en_US)\n> en_GB/\t-- translation\n> de_DE/\t-- translation\n> ...\n> \n> This way (or at least the way I'm imagining it) you would make a directory\n> for you language, copy the file over that you want to translate, and edit\n> it there. When you build, all the files that you haven't done will be\n> picked up from the original.\n> \n> How does that sound?\n\nThat is almost what I am thinking:-) Sounds like a good idea. \n\nAnother thing what we should care about is man pages. Is there any\nstandard way to coexist multiple languages under /usr/man? I see \"ja\"\nsubdirectory under it in my localized version of Linux.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 25 Mar 2001 09:48:43 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: 7.1 docs"
},
{
"msg_contents": "On Sun, 25 Mar 2001, Tatsuo Ishii wrote:\n\n> > Tatsuo Ishii writes:\n> >\n> > > I have a plan to translate 7.1 docs into Japanases\n> >\n> > Btw...\n> >\n> > One thing I am thinking about doing for the 7.2 cycle is set up the\n> > doc/src/ directory in a way to keep translations in tree. It would\n> > probably look something like:\n> >\n> > doc/src/\n> > sgml/\t-- original (implicitly en_US)\n> > en_GB/\t-- translation\n> > de_DE/\t-- translation\n> > ...\n> >\n> > This way (or at least the way I'm imagining it) you would make a directory\n> > for you language, copy the file over that you want to translate, and edit\n> > it there. When you build, all the files that you haven't done will be\n> > picked up from the original.\n> >\n> > How does that sound?\n>\n> That is almost what I am thinking:-) Sounds like a good idea.\n>\n> Another thing what we should care about is man pages. Is there any\n> standard way to coexist multiple languages under /usr/man? I see \"ja\"\n> subdirectory under it in my localized version of Linux.\n\nwe have similar on FreeBSD ... /usr/share/man/ja ...\n\n\n",
"msg_date": "Sat, 24 Mar 2001 21:02:38 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 docs"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> Tatsuo Ishii writes:\n>\n> > I have a plan to translate 7.1 docs into Japanases\n>\n> Btw...\n>\n> One thing I am thinking about doing for the 7.2 cycle is set up the\n> doc/src/ directory in a way to keep translations in tree. It would\n> probably look something like:\n>\n> doc/src/\n> sgml/ -- original (implicitly en_US)\n> en_GB/ -- translation\n> de_DE/ -- translation\n> ...\n>\n> This way (or at least the way I'm imagining it) you would make a directory\n> for you language, copy the file over that you want to translate, and edit\n> it there. When you build, all the files that you haven't done will be\n> picked up from the original.\n>\n> How does that sound?\n>\n\nWonderful! I think I can provide en_GB encoding translations.\nMay be because there are lack of some software package, I still can't\ngenerate other format (html, ps etc.) on my machine after upgrade to 7.1,\nbut the old Makefile is ok. don't know why, if put up there, then you can\ngenerate other format without problem.\n\nThanks & Regards\n\nLaser Henry\n\n\n",
"msg_date": "Sun, 25 Mar 2001 10:08:14 +0800",
"msg_from": "\"He Weiping(Laser Henry)\" <laser@zhengmai.com.cn>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 docs"
},
{
"msg_contents": "On Sat, Mar 24, 2001 at 11:32:02AM -0500, Tom Lane wrote:\n> \n> A \"page per function\" approach is clearly overkill for the vast majority\n> of our functions. I think that's not unrelated to the fact that no one's\n> ever bothered to prepare such documentation ;-)\n\n\tAgreed.\n \n> On the other hand, the existing layout of the User's Guide encourages a\n> \"line per function\" approach, which is insufficient for at least some\n> functions. We've worked around that by adding paragraphs below the main\n> table on each page, but that seems a little awkward in many cases.\n\n\tAgain I agree. The functions docs are insufficient for most functions\nI would say.\n\tI like the way the Oracle functions are documented, except for the\nfact that they have one huge page for all functions, which is hard on\nthose on slow connections reading docs online. \n\tThey have functions in tables grouped per functionality (e.g. character\nfunctions that returning character values, character functions returning\nnumber values) and with each function name (which is all that is in the\ntable) is linked to a larger explanation of the function with the complete\nsyntax and examples (usually two).\n\thttp://oradoc.photo.net/ora81/DOC/server.815/a67779/function.htm#1028572\n\n\t-Roberto\t\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \nIf it wasn't for C, we would be using BASI, PASAL and OBOL!\n",
"msg_date": "Sun, 25 Mar 2001 09:23:10 -0700",
"msg_from": "Roberto Mello <rmello@cc.usu.edu>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Re: 7.1 docs"
},
{
"msg_contents": "Roberto Mello <rmello@cc.usu.edu> writes:\n> \tI like the way the Oracle functions are documented, except for the\n> fact that they have one huge page for all functions, which is hard on\n> those on slow connections reading docs online. \n> \tThey have functions in tables grouped per functionality (e.g. character\n> functions that returning character values, character functions returning\n> number values) and with each function name (which is all that is in the\n> table) is linked to a larger explanation of the function with the complete\n> syntax and examples (usually two).\n\nYes, it'd be cool to have the User's Guide contain the existing function\ntables with each entry hotlinked to a more extensive reference entry.\nWe could eliminate some of the nitty-gritty details from the User's\nGuide that way, which I think is good. I don't want to reduce the\nfunction tables to just names a la Oracle --- I think the tables are\ngood as they are. But there are places, such as in the discussion of\nthe pattern-match functions, where we have reference-page-like material\nthat doesn't fit very well in the U.G.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 25 Mar 2001 11:38:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Re: 7.1 docs "
},
{
"msg_contents": "He Weiping(Laser Henry) writes:\n\n> Wonderful! I think I can provide en_GB encoding translations.\n\nen_GB would be a \"British English\" translation. I don't think this is\nwhat you wanted to do.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Mon, 26 Mar 2001 19:31:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 docs"
},
{
"msg_contents": ">\n> en_GB would be a \"British English\" translation. I don't think this is\n> what you wanted to do.\n>\n\ncn_GB, sorry. :-D\n\nRegards\n\nLaser Henry\n\n",
"msg_date": "Tue, 27 Mar 2001 22:54:53 +0800",
"msg_from": "\"He Weiping(Laser Henry)\" <laser@zhengmai.com.cn>",
"msg_from_op": false,
"msg_subject": "Re: 7.1 docs"
}
] |
[
{
"msg_contents": "> > Recent changes in pg_crc.c (64 bit CRC) introduced non \n> portable constants of the form:\n> \n> > -c -o pg_crc.o pg_crc.c\n> > 287 | 0x0000000000000000, 0x42F0E1EBA9EA3693,\n> > ............................a..................\n> > a - 1506-207 (W) Integer constant 0x42F0E1EBA9EA3693 out of range.\n> \n> Please observe that this is a warning, not an error. Your proposed\n> fix is considerably worse than the disease, because it will break on\n> compilers that do not recognize \"LL\" constants, to say nothing of\n> machines where L is correct and LL is some yet wider datatype.\n> \n> I'm aware that some compilers will produce warnings about these\n> constants, but there should not be any that fail completely, since\n> (a) we won't be compiling this code unless we've proven that the\n> compiler supports a 64-bit-int datatype, and \n\nUnfortunately configure does not check the use of 64 bit integer \nconstants. A little check on AIX shows, that it indeed DOES NOT work !!!!!\n\n$ ./a.out\nhex: 0x012345670ABCDEF0LL==12345670abcdef0, 0x012345670ABCDEF0==abcdef02ff22ff8\n\n> (b) the C standard\n> forbids a compiler from requiring width suffixes (cf. 6.4.4.1 in C99).\n\nMaybe that standard is somewhat too recent to rely upon 100%.\n\n> I don't think it's a good tradeoff to risk breaking some platforms in\n> order to suppress warnings from overly anal-retentive compilers.\n\nI really do expect other compilers to break on this too. Thus I think\na workaround is needed. As it stands the code does not compute \na valid CRC64 on all platforms.\n\nDo you want me to supply an AIX specific patch with #if defined (_AIX) ?\n\nAndreas",
"msg_date": "Fri, 23 Mar 2001 17:01:41 +0100",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: RELEASE STOPPER? nonportable int64 constants in\n\t pg_crc.c"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> I'm aware that some compilers will produce warnings about these\n>> constants, but there should not be any that fail completely, since\n>> (a) we won't be compiling this code unless we've proven that the\n>> compiler supports a 64-bit-int datatype, and \n\n> Unfortunately configure does not check the use of 64 bit integer \n> constants. A little check on AIX shows, that it indeed DOES NOT work !!!!!\n\nGrumble...\n\n>> (b) the C standard\n>> forbids a compiler from requiring width suffixes (cf. 6.4.4.1 in C99).\n\n> Maybe that standard is somewhat too recent to rely upon 100%.\n\nANSI C says the same thing, although of course it only discusses int and\nlong. But the spec has always been clear that the implied type of an\ninteger constant is whatever it takes to hold it; you do not need an\nexplicit \"L\" suffix to make a valid constant. AIX's compiler is broken.\n\n> Do you want me to supply an AIX specific patch with #if defined (_AIX) ?\n\nI'll do something about it. Would you check to see whether a macro like\n#define SIXTYFOUR(x) x##LL\nworks?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Mar 2001 11:12:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: RELEASE STOPPER? nonportable int64 constants in pg_crc.c "
}
] |
[
{
"msg_contents": "Postgre 7.0.3, on RedHat Linux 6.2 stock 2.2.16 kernel. Nothing special I\ncan think of, this server has been up and in use for the last 128 days with\nno problem. Last night while cron was performing the nightly vacuuming of\nall databases on one of our servers, I got this from cron.\n\nVacuuming cms\nFATAL 1: _bt_restscan: my bits moved right off the end of the world!\n\tRecreate index history_id_key.\npqReadData() -- backend closed the channel unexpectedly.\n\tThis probably means the backend terminated abnormally\n\tbefore or while processing the request.\nconnection to server was lost\n\nSo this morning I did the following:\n\ncms=# drop index history_id_key;\nDROP\ncms=# create unique index history_id_key on history(id);\nCREATE\ncms=# vacuum;\nNOTICE: Index locks_operator_id_ndx: pointer to EmptyPage (blk 44 off 2) -\nfixing\nNOTICE: Index locks_operator_id_ndx: pointer to EmptyPage (blk 44 off 1) -\nfixing\nNOTICE: Index locks_operator_id_ndx: pointer to EmptyPage (blk 44 off 4) -\nfixing\nNOTICE: Index locks_operator_id_ndx: pointer to EmptyPage (blk 44 off 3) -\nfixing\nNOTICE: Index locks_operator_id_ndx: pointer to EmptyPage (blk 44 off 5) -\nfixing\nNOTICE: Index locks_id_key: pointer to EmptyPage (blk 44 off 2) - fixing\nNOTICE: Index locks_id_key: pointer to EmptyPage (blk 44 off 1) - fixing\nNOTICE: Index locks_id_key: pointer to EmptyPage (blk 44 off 4) - fixing\nNOTICE: Index locks_id_key: pointer to EmptyPage (blk 44 off 3) - fixing\nNOTICE: Index locks_id_key: pointer to EmptyPage (blk 44 off 5) - fixing\nNOTICE: Index locks_case_id_ndx: pointer to EmptyPage (blk 44 off 5) -\nfixing\nNOTICE: Index locks_case_id_ndx: pointer to EmptyPage (blk 44 off 2) -\nfixing\nNOTICE: Index locks_case_id_ndx: pointer to EmptyPage (blk 44 off 1) -\nfixing\nNOTICE: Index locks_case_id_ndx: pointer to EmptyPage (blk 44 off 4) -\nfixing\nNOTICE: Index locks_case_id_ndx: pointer to EmptyPage (blk 44 off 3) -\nfixing\nNOTICE: Message from PostgreSQL 
backend:\n The Postmaster has informed me that some other backend died\nabnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate\nyour database system connection and exit.\n Please reconnect to the database system and repeat your query.\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nI have since stopped the database server and all my users are dead in the\nwater at the moment. I took postgres down to single user mode and I'm doing\na vacuum and was considering doing an ipcclean. Any other suggestions?\ndump & restore? Any idea what happened?\n\nThank you,\n\nMatthew O'Connor\n\n\n",
"msg_date": "Fri, 23 Mar 2001 10:01:47 -0600",
"msg_from": "Matthew <matt@ctlno.com>",
"msg_from_op": true,
"msg_subject": "7.0.3 _bt_restscan: my bits moved right off the end of the world!"
},
{
"msg_contents": "Matthew <matt@ctlno.com> writes:\n> [ a tale of woe ]\n\nIt looks like dropping and rebuilding *all* the indexes on your history\ntable would be a good move (possibly with a vacuum of the table while\nthe indexes are removed). You might want to do a COPY out to try to\nsave the table data before the vacuum, in case there is corruption in\nthe table as well as the indexes.\n\nBefore you do all that, though, how big is the database? Would you be\nable/willing to tar up the whole $PGDATA tree and let some of us analyze\nit?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Mar 2001 11:58:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
}
] |
[
{
"msg_contents": "> Postgre 7.0.3, on RedHat Linux 6.2 stock 2.2.16 kernel. Nothing special I\n> can think of, this server has been up and in use for the last 128 days\n> with\n> no problem. Last night while cron was performing the nightly vacuuming of\n> all databases on one of our servers, I got this from cron.\n> \n> Vacuuming cms\n> FATAL 1: _bt_restscan: my bits moved right off the end of the world!\n> \tRecreate index history_id_key.\n> pqReadData() -- backend closed the channel unexpectedly.\n\t[snip ] \n\n> I have since stopped the database server and all my users are dead in the\n> water at the moment. I took postgres down to single user mode and I'm\n> doing\n> a vacuum and was considering doing an iccpclean. Any other suggestions?\n> dump & restore? Any Idea what happened?\n> \nThe vacuum I tried in single user mode failed (froze on a table, for over 20\nminutes) so I killed it. I cleaned up shared memory (there were some things\nleft over). I started in single user mode and reindex database cms force,\nalso reindexed a few tables, then tried the vacuum again with the same\nresult. Do I need to dump resore? We should have a valid backup from the\nnight before.\n\n\n",
"msg_date": "Fri, 23 Mar 2001 10:43:11 -0600",
"msg_from": "Matthew <matt@ctlno.com>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
}
] |
[
{
"msg_contents": "> I have since stopped the database server and all my users are \n> dead in the water at the moment. I took postgres down to single\n> user mode and I'm doing a vacuum and was considering doing an\n> iccpclean. Any other suggestions? dump & restore?\n> Any Idea what happened?\n\nDrop indices; vacuum; create indices.\n\nVadim\n",
"msg_date": "Fri, 23 Mar 2001 09:01:49 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
}
] |
[
{
"msg_contents": "\n> >> I'm aware that some compilers will produce warnings about these\n> >> constants, but there should not be any that fail completely, since\n> >> (a) we won't be compiling this code unless we've proven that the\n> >> compiler supports a 64-bit-int datatype, and \n> \n> > Unfortunately configure does not check the use of 64 bit integer \n> > constants. A little check on AIX shows, that it indeed DOES \n> NOT work !!!!!\n> \n> Grumble...\n> \n> >> (b) the C standard\n> >> forbids a compiler from requiring width suffixes (cf. \n> 6.4.4.1 in C99).\n> \n> > Maybe that standard is somewhat too recent to rely upon 100%.\n> \n> ANSI C says the same thing, although of course it only discusses int and\n> long. But the spec has always been clear that the implied type of an\n> integer constant is whatever it takes to hold it; you do not need an\n> explicit \"L\" suffix to make a valid constant. AIX's compiler \n> is broken.\n\nReading your above note I do not see, how you map this statement\nto a long long int (64 bits) on a platform where int is 32 bits. On this platform\nwe are definitely not talking about an integer constant.\n\n> > Do you want me to supply an AIX specific patch with #if defined (_AIX) ?\n> \n> I'll do something about it. Would you check to see whether a > macro like\n> #define SIXTYFOUR(x) x##LL works?\n\nYes, that works. Unfortunately I will only be able to test on Monday.\n\nCould you use something like the following in configure, to test it ?\n\n#define SIXTYFOUR(x) x\nif (sizeof(SIXTYFOUR(0x0)) == 8)\n printf (\"0x0 is 64 bits\\n\");\nelse return(-1);\n\n#define SIXTYFOUR(x) x##LL\nif (sizeof(SIXTYFOUR(0x0)) == 8)\n printf (\"0x0LL is 64 bits\\n\");\nelse return (-1);\n\nAndreas\n",
"msg_date": "Fri, 23 Mar 2001 18:10:23 +0100",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: Re: RELEASE STOPPER? nonportable int64 constant s in pg_crc.c "
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> ANSI C says the same thing, although of course it only discusses int and\n>> long. But the spec has always been clear that the implied type of an\n>> integer constant is whatever it takes to hold it; you do not need an\n>> explicit \"L\" suffix to make a valid constant. AIX's compiler \n>> is broken.\n\n> Reading your above note I do not see, how you map this statement to a\n> long long int (64 bits) on a platform where int is 32 bits. On this\n> platform we are definitely not talking about an integer constant.\n\nSorry, perhaps I should have said \"integral\" constant, or something like\nthat. But if you read the spec you will find it calls all these things\ninteger constants. The relevant part of C99 says\n\n [#5] The type of an integer constant is the first of the\n corresponding list in which its value can be represented.\n\n || |\n || | Octal or Hexadecimal\n Suffix || Decimal Constant | Constant\n -------------++-----------------------+------------------------\n none ||int | int\n ||long int | unsigned int\n ||long long int | long int\n || | unsigned long int\n || | long long int\n || | unsigned long long int\n\nand I'm quite sure that ANSI C says exactly the same thing except for\nnot listing the long long types. This behavior is not some weird new\ninvention of C99 --- it has been part of the language definition since\nK&R's first edition (see K&R ref section 2.4.1, if you have a copy).\nApparently the AIX compiler writers' memories do not go back to times\nwhen int was commonly narrower than long and so this part of the spec\nwas really significant. Otherwise they'd not have had any difficulty\nin extrapolating the correct handling of long long constants.\n\n>>>> Do you want me to supply an AIX specific patch with #if defined (_AIX) ?\n>> \n>> I'll do something about it. 
>> Would you check to see whether a macro like\n>> #define SIXTYFOUR(x) x##LL works?\n\n> Yes, that works.\n\nOkay. I've committed a configure check that tests to see whether a\nmacro defined as above compiles, and if so it will be used (if we are\nusing \"long long\" for int64). Hopefully the check will prevent breakage\non machines where LL is not appropriate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Mar 2001 14:56:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: RELEASE STOPPER? nonportable int64 constant s in\n\tpg_crc.c"
},
{
"msg_contents": "Tom Lane writes:\n\n> Okay. I've committed a configure check that tests to see whether a\n> macro defined as above compiles, and if so it will be used (if we are\n> using \"long long\" for int64). Hopefully the check will prevent breakage\n> on machines where LL is not appropriate.\n\nI don't see what this configure check buys us, since it does not check for\nanything that's ever been reported not working. Do you think there are\nplatforms that have 'long long int' but no 'LL' suffix? That seems more\nthan unlikely.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 24 Mar 2001 13:20:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: RELEASE STOPPER? nonportable int64 constant\n\ts in pg_crc.c"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I don't see what this configure check buys us, since it does not check for\n> anything that's ever been reported not working. Do you think there are\n> platforms that have 'long long int' but no 'LL' suffix? That seems more\n> than unlikely.\n\nWell, I don't know. Up till yesterday I would have said that there were\nno compilers that violated the C specification (not to mention twenty\nyears of traditional practice) in the handling of integral constants\nof varying widths.\n\nIf you look closely, the configure test is not simply checking whether\nLL is accepted, it is checking whether we can construct an acceptable\nconstant by macroized token-pasting. That's a slightly larger\nassumption, but it's the one the code must actually make to cover\nAIX's problem. As I remarked before, I think that ## is more typically\nused to paste identifiers and strings together; I don't really want to\nbet that pasting 0xNNN ## LL will work on every compiler.\n\nMainly it's a schedule-driven thing. I don't want to take any risk that\na last-minute patch to work around AIX's broken compiler will break any\nother platforms. If we had found this problem before beta cycle\nstarted, I would be more willing to say \"let's try it and find out\nwhether it works everywhere\".\n\nYeah, it's paranoia, but considering that the whole thing is an exercise\nin covering up a \"shouldn't happen\" compiler bug, I think paranoia is\nnot so unreasonable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Mar 2001 11:12:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: RELEASE STOPPER? nonportable int64 constant s in\n\tpg_crc.c"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> If you look closely, the configure test is not simply checking whether\n> LL is accepted, it is checking whether we can construct an acceptable\n> constant by macroized token-pasting. That's a slightly larger\n> assumption, but it's the one the code must actually make to cover\n> AIX's problem. As I remarked before, I think that ## is more typically\n> used to paste identifiers and strings together; I don't really want to\n> bet that pasting 0xNNN ## LL will work on every compiler.\n> \n> Mainly it's a schedule-driven thing. I don't want to take any risk that\n> a last-minute patch to work around AIX's broken compiler will break any\n> other platforms. If we had found this problem before beta cycle\n> started, I would be more willing to say \"let's try it and find out\n> whether it works everywhere\".\n\nA safe way to construct a long long constant is to do it using an\nexpression:\n ((((uint64) 0xdeadbeef) << 32) | (uint64) 0xfeedface)\nIt's awkward, obviously, but it works with any compiler.\n\nIan\n",
"msg_date": "24 Mar 2001 13:27:56 -0800",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: RELEASE STOPPER? nonportable int64 constant s in\n\tpg_crc.c"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> A safe way to construct a long long constant is to do it using an\n> expression:\n> ((((uint64) 0xdeadbeef) << 32) | (uint64) 0xfeedface)\n> It's awkward, obviously, but it works with any compiler.\n\nAn interesting example. That will work as intended if and only if the\ncompiler regards 0xfeedface as unsigned --- if the constant is initially\ntreated as a signed int, then extension to 64 bits will propagate the\nwrong bit value into the high-order bits.\n\nIndeed, according to the ANSI C spec, 0xfeedface *should* be regarded as\nunsigned in a machine whose ints are 32 bits. However, this conclusion\ncomes from the exact same paragraph that AIX got wrong to begin with.\nI'm not sure that doing it this way actually affords any extra protection\nagainst compilers that can't be trusted to handle integral constants\nper-spec...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Mar 2001 16:43:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: RELEASE STOPPER? nonportable int64 constant s in\n\tpg_crc.c"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ian Lance Taylor <ian@airs.com> writes:\n> > A safe way to construct a long long constant is to do it using an\n> > expression:\n> > ((((uint64) 0xdeadbeef) << 32) | (uint64) 0xfeedface)\n> > It's awkward, obviously, but it works with any compiler.\n> \n> An interesting example. That will work as intended if and only if the\n> compiler regards 0xfeedface as unsigned --- if the constant is initially\n> treated as a signed int, then extension to 64 bits will propagate the\n> wrong bit value into the high-order bits.\n> \n> Indeed, according to the ANSI C spec, 0xfeedface *should* be regarded as\n> unsigned in a machine whose ints are 32 bits. However, this conclusion\n> comes from the exact same paragraph that AIX got wrong to begin with.\n> I'm not sure that doing it this way actually affords any extra protection\n> against compilers that can't be trusted to handle integral constants\n> per-spec...\n\nTrue, for additional safety, do this:\n ((((uint64) (unsigned long) 0xdeadbeef) << 32) | (uint64) (unsigned long) 0xfeedface)\n\nOf course, this won't work if long is 64 bits and int is 32 bits and\nthe compiler erroneously makes a hexidecimal constant signed int\nrather than unsigned int, signed long (incorrect but possible), or\nunsigned long.\n\nI seem to recall that even K&R compilers support an L suffix to\nindicate a long value. If that is the case, then I think this is\nalways safe for any compiler:\n ((((uint64) (unsigned long) 0xdeadbeefL) << 32) | (uint64) (unsigned long) 0xfeedfaceL)\n\n(In reality that should certainly be safe, because there are no K&R\ncompilers which support a 64 bit integer type.)\n\nIan\n",
"msg_date": "24 Mar 2001 14:05:05 -0800",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: RELEASE STOPPER? nonportable int64 constant s in\n\tpg_crc.c"
},
{
"msg_contents": "On Sat, Mar 24, 2001 at 02:05:05PM -0800, Ian Lance Taylor wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > Ian Lance Taylor <ian@airs.com> writes:\n> > > A safe way to construct a long long constant is to do it using an\n> > > expression:\n> > > ((((uint64) 0xdeadbeef) << 32) | (uint64) 0xfeedface)\n> > > It's awkward, obviously, but it works with any compiler.\n> > \n> > An interesting example. That will work as intended if and only if the\n> > compiler regards 0xfeedface as unsigned ...\n> \n> True, for additional safety, do this:\n> ((((uint64) (unsigned long) 0xdeadbeef) << 32) |\n> (uint64) (unsigned long) 0xfeedface)\n\nFor the paranoid,\n\n ((((uint64) 0xdead) << 48) | (((uint64) 0xbeef) << 32) | \\\n (((uint64) 0xfeed) << 16) | ((uint64) 0xface))\n\nOr, better\n\n #define FRAG64(bits,shift) (((uint64)(bits)) << (shift))\n #define LITERAL64(a,b,c,d) \\\n FRAG64(a,48) | FRAG64(b,32) | FRAG64(c,16) | FRAG64(d,0)\n LITERAL64(0xdead,0xbeef,0xfeed,0xface)\n\nThat might be overkill for just a single literal...\n\nNathan Myers\nncm\n",
"msg_date": "Sat, 24 Mar 2001 16:46:32 -0800",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: RELEASE STOPPER? nonportable int64 constant s in pg_crc.c"
}
] |
[
{
"msg_contents": "> Matthew <matt@ctlno.com> writes:\n> > [ a tale of woe ]\n> \n> It looks like dropping and rebuilding *all* the indexes on your history\n> table would be a good move (possibly with a vacuum of the table while\n> the indexes are removed). You might want to do a COPY out to try to\n> save the table data before the vacuum, in case there is corruption in\n> the table as well as the indexes.\n> \n> Before you do all that, though, how big is the database? Would you be\n> able/willing to tar up the whole $PGDATA tree and let some of us analyze\n> it?\n> \n> \t\t\tregards, tom lane\n> \n\tI am going to tar up the $PGDATA directory so I have a backup of it\nin case of bigger problems. I will send you a copy if you like but I have\nalready done some of the things you suggested but not all of them. The\ndatabase in question is (cms) 450 Meg, but we have a lot of databases in the\nPGDATA directory, so the whole PGDATA directory totals to 3.1G. I don't\nknow if you want the whole thing or not. Let me know. \n\n\tI dropped all the indexes on the history table did a vacuum then\nrecreated the index and vacuumed that table. That went fine, when I tried\nto vacuum the entire database it hung on the cases table. I tried\nreindexing that to no avail, and then tried dropping the index it appeared\nto be hanging on then vacuuming and I got a different error, something about\nmemory being exhausted, which should not be the case, I don't have the exact\nerror in front of me any more :-( . \n\n\tI'm still trying to get things back up and running. I believe I\nhave a successful pg_dump of the data in the cms database so I am going to\ntry to drop the cms database and restore from the dump. Unless you think\nthis is a bad idea. \n\n\tThank you very much for you help.\n",
"msg_date": "Fri, 23 Mar 2001 11:11:39 -0600",
"msg_from": "Matthew <matt@ctlno.com>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world! "
}
] |
[
{
"msg_contents": "> The vacuum I tried in single user mode failed (froze on a\n> table, for over 20 minutes) so I killed it.\n\nDid you destroy indices before vacuum?\n\nVadim\n",
"msg_date": "Fri, 23 Mar 2001 10:30:13 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
}
] |
[
{
"msg_contents": "> > The vacuum I tried in single user mode failed (froze on a\n> > table, for over 20 minutes) so I killed it.\n> \n> Did you destroy indices before vacuum?\n> \n\tNot all of them, it's a large database and I am trying to get it up\nand running asap.\n\n\tFYI now when I try to use psql to connect to the database I get this\nerror:\n\n\tbash$ psql cms\n\tpsql: FATAL 1: cannot find attribute 1 of relation pg_trigger\n\n\n",
"msg_date": "Fri, 23 Mar 2001 12:38:38 -0600",
"msg_from": "Matthew <matt@ctlno.com>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
},
{
"msg_contents": "Matthew <matt@ctlno.com> writes:\n> \tFYI now when I try to use psql to connect to the database I get this\n> error:\n> \tbash$ psql cms\n> \tpsql: FATAL 1: cannot find attribute 1 of relation pg_trigger\n\nSo the indexes on pg_attribute are hosed too. I wonder whether that was\nthe original source of the problem, and the rest of this is\nside-effects?\n\nI am starting to think that you'd best initdb and reload, but there is\none more thing to try: run REINDEX on the whole database in standalone\nmode. See the documentation for the procedure; I'm not too clear on it\nsince I've never had to do it myself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Mar 2001 13:51:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
},
{
"msg_contents": "> -----Original Message-----\n> From: Matthew\n>\n> > > The vacuum I tried in single user mode failed (froze on a\n> > > table, for over 20 minutes) so I killed it.\n> >\n> > Did you destroy indices before vacuum?\n> >\n> \tNot all of them, it's a large database and I am trying to get it up\n> and running asap.\n>\n> \tFYI now when I try to use psql to connect to the database I get this\n> error:\n>\n> \tbash$ psql cms\n> \tpsql: FATAL 1: cannot find attribute 1 of relation pg_trigger\n>\n\nPlease try the following query under standalone postgres(with -P and -O\noptions).\n\nselect * from pg_attribute where attrelid=1219 and attnum=1;\n\nIf you would get no result, your pg_attribute is corrupted unfortunately.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Sat, 24 Mar 2001 07:56:17 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
}
] |
[
{
"msg_contents": "> Matthew <matt@ctlno.com> writes:\n> > \tFYI now when I try to use psql to connect to the database I get this\n> > error:\n> > \tbash$ psql cms\n> > \tpsql: FATAL 1: cannot find attribute 1 of relation pg_trigger\n> \n> So the indexes on pg_attribute are hosed too. I wonder whether that was\n> the original source of the problem, and the rest of this is\n> side-effects?\n> \n> I am starting to think that you'd best initdb and reload, but there is\n> one more thing to try: run REINDEX on the whole database in standalone\n> mode. See the documentation for the procedure; I'm not too clear on it\n> since I've never had to do it myself.\n> \n> \t\t\tregards, tom lane\n> \n\tWhat do you mean by the whole database? I have already executed:\n\n\treindex database cms force\n\treindex table cases force\n\treindex table cases force\n\treindex table hits force\n\treindex table history force (and a few more)\n\n\tHow do I get it do reindex the system tables? One table at a time?\n\n",
"msg_date": "Fri, 23 Mar 2001 12:51:43 -0600",
"msg_from": "Matthew <matt@ctlno.com>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world! "
},
{
"msg_contents": "Matthew <matt@ctlno.com> writes:\n> \tWhat do you mean by the whole database? I have already executed:\n\n> \treindex database cms force\n\n(checks manual...) That appears to be the right syntax. If you did\nthat in a standalone backend with the appropriate command line options\n(-O and -P) then I think you've done all you can. Time for a reload :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Mar 2001 14:00:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
}
] |
[
{
"msg_contents": "> > > The vacuum I tried in single user mode failed (froze on a\n> > > table, for over 20 minutes) so I killed it.\n> > \n> > Did you destroy indices before vacuum?\n> > \n> Not all of them, it's a large database and I am trying \n> to get it up and running asap.\n\nDid you destroy *all* indices of table vacuum hung on?\n\nVadim\n",
"msg_date": "Fri, 23 Mar 2001 10:57:59 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world!"
}
] |
[
{
"msg_contents": "> \tWhat do you mean by the whole database? I have already \n> executed:\n> \n> \treindex database cms force\n> \treindex table cases force\n> \treindex table cases force\n> \treindex table hits force\n> \treindex table history force (and a few more)\n> \n> \tHow do I get it do reindex the system tables? One \n> table at a time?\n\n\"reindex database cms force\" is supposed to reindex all system\ntables (and *system ones only*), but from my recollection it\ndidn't help sometime, so it's better reindex pg_attributes\n& pg_class with separate command. Maybe after vacuuming them.\n\nVadim\n",
"msg_date": "Fri, 23 Mar 2001 11:03:29 -0800",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: 7.0.3 _bt_restscan: my bits moved right off the end of the world! "
}
] |
[
{
"msg_contents": "\nWell, its been a hard, arduous journey for this one, with several delays\ncaused by the massive amount of changes that have gone into v7.1 ... but,\ntonight I've finally wrapped up Release Candidate 1 ...\n\nI'm going to hold off on a formal announcement to -announce until tomorrow\nevening, to give the mirrors a chance to update, but if anyone would like\nto download and run through the package, make sure all looks okay, its\navailable in the dev directory ...\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 23 Mar 2001 21:05:53 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Release Candidate 1 ..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> I'm going to hold off on a formal announcement to -announce until tomorrow\n> evening, to give the mirrors a chance to update, but if anyone would like\n> to download and run through the package, make sure all looks okay, its\n> available in the dev directory ...\n\nTar packages look okay here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Mar 2001 21:00:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release Candidate 1 ... "
},
{
"msg_contents": "In article <Pine.BSF.4.33.0103232102250.41105-100000@mobile.hub.org>, \"The\nHermit Hacker\" <scrappy@hub.org> wrote:\n\n\n> Well, its been a hard, arduous journey for this one, with several delays\n> caused by the massive amount of changes that have gone into v7.1 ...\n> but, tonight I've finally wrapped up Release Candidate 1 ...\n\nWell, done, too! I've been banging on Beta6 with data and\nqueries from DB2, and it's been the easiest transition I've\never had between RDBMSs.\n\nWill upgrading from Beta6 to RC1 require dumping and restoring\ndatabases?\n\nThanks again for a great product!\n\nGordon.\n-- \nIt doesn't get any easier, you just go faster.\n -- Greg LeMond\n",
"msg_date": "Fri, 23 Mar 2001 23:25:45 -0500",
"msg_from": "\"Gordon A. Runkle\" <gar@no-spam-integrated-dynamics.com>",
"msg_from_op": false,
"msg_subject": "Re: Release Candidate 1 ..."
},
{
"msg_contents": "\"Gordon A. Runkle\" <gar@no-spam-integrated-dynamics.com> writes:\n> Will upgrading from Beta6 to RC1 require dumping and restoring\n> databases?\n\nNo, just compile and install. If you'd been on beta5 or earlier,\nyou'd need to run contrib/pg_resetxlog to update pg_control format,\nbut still no initdb.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 25 Mar 2001 14:45:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release Candidate 1 ... "
},
{
"msg_contents": "On Fri, 23 Mar 2001, Gordon A. Runkle wrote:\n\n> In article <Pine.BSF.4.33.0103232102250.41105-100000@mobile.hub.org>, \"The\n> Hermit Hacker\" <scrappy@hub.org> wrote:\n>\n>\n> > Well, its been a hard, arduous journey for this one, with several delays\n> > caused by the massive amount of changes that have gone into v7.1 ...\n> > but, tonight I've finally wrapped up Release Candidate 1 ...\n>\n> Well, done, too! I've been banging on Beta6 with data and\n> queries from DB2, and it's been the easiest transition I've\n> ever had between RDBMSs.\n>\n> Will upgrading from Beta6 to RC1 require dumping and restoring\n> databases?\n\nMy understanding was that originally, there was a dump/reload required,\nbut I believe that requirement was negated by a 'resetxlog' utility that\nTom Lane created ...\n\n\n",
"msg_date": "Sun, 25 Mar 2001 16:15:29 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Release Candidate 1 ..."
},
{
"msg_contents": "In article <18508.985549534@sss.pgh.pa.us>, \"Tom Lane\" <tgl@sss.pgh.pa.us>\nwrote:\n> No, just compile and install. If you'd been on beta5 or earlier, you'd\n> need to run contrib/pg_resetxlog to update pg_control format, but still\n> no initdb.\n\nThanks, Tom!\n\nJust so you know:� I've built a test database in PostgreSQL which is a\nfull copy of a large database I have in DB2.� On the same physical\nmachine, no less.� Many of my large, rather intensive queries are\nrunning 4 times faster than in DB2!� I stand in awe.� You guys have\ndone a real bang-up job!\n\nThanks again,\n\nGordon.\n\n-- \nIt doesn't get any easier, you just go faster.\n -- Greg LeMond\n",
"msg_date": "Sun, 25 Mar 2001 21:23:57 -0500",
"msg_from": "\"Gordon A. Runkle\" <gar@no-spam-integrated-dynamics.com>",
"msg_from_op": false,
"msg_subject": "Re: Release Candidate 1 ..."
}
] |
[
{
"msg_contents": "Are we ready to start freezing docs? I'll assume that the tutorial\nsections can freeze first, and that the admin sections will freeze last\n(to get the latest platform support info).\n\nAny preference on order, and does anyone have more docs changes in the\npipe? If so, we had better plan on doing them soon (in the next two or\nthree days?).\n\nPeter, what would you suggest for schedule?\n\n - Thomas\n",
"msg_date": "Sat, 24 Mar 2001 01:44:02 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": true,
"msg_subject": "Docs freeze?"
},
{
"msg_contents": "I would like to add to the release.sgml file. Seems that should be near\nthe end like the platform list.\n\n> Are we ready to start freezing docs? I'll assume that the tutorial\n> sections can freeze first, and that the admin sections will freeze last\n> (to get the latest platform support info).\n> \n> Any preference on order, and does anyone have more docs changes in the\n> pipe? If so, we had better plan on doing them soon (in the next two or\n> three days?).\n> \n> Peter, what would you suggest for schedule?\n> \n> - Thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 23 Mar 2001 23:40:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Docs freeze?"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Peter, what would you suggest for schedule?\n\nWe freeze everything now and move on with our lives. There's always a\nnext release.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n",
"msg_date": "Sat, 24 Mar 2001 11:39:52 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Docs freeze?"
}
] |
[
{
"msg_contents": "\nI did a cvs pull of the head on 3/24/01 and used Sun's cc\ncompiler 5.0 (with all patches as of 2/1/01) to build.\n\nI ran into a problem building pg_backup_null.c rev 1.5.\nThe following patch lets me build:\n\nIndex: src/bin/pg_dump/pg_backup_null.c\n===================================================================\nRCS file: \n/home/projects/pgsql/cvsroot/pgsql/src/bin/pg_dump/pg_backup_null.c,v\nretrieving revision 1.5\ndiff -u -r1.5 pg_backup_null.c\n--- src/bin/pg_dump/pg_backup_null.c 2001/03/22 04:00:13 1.5\n+++ src/bin/pg_dump/pg_backup_null.c 2001/03/24 22:00:07\n@@ -98,7 +98,7 @@\n static void\n _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)\n {\n- if (*te->dataDumper)\n+ if (te->dataDumper)\n {\n AH->currToc = te;\n (*te->dataDumper) ((Archive *) AH, te->oid, te->dataDumperArg);\n\nSun's cc complains that *te->dataDumper is not a scalar type.\n\nSteve Nicolai \n\n",
"msg_date": "Sat, 24 Mar 2001 16:21:40 -0600",
"msg_from": "Steve Nicolai <snicolai@mac.com>",
"msg_from_op": true,
"msg_subject": "Build problem and patch with Sun cc"
},
{
"msg_contents": "Steve Nicolai <snicolai@mac.com> writes:\n> diff -u -r1.5 pg_backup_null.c\n> --- src/bin/pg_dump/pg_backup_null.c 2001/03/22 04:00:13 1.5\n> +++ src/bin/pg_dump/pg_backup_null.c 2001/03/24 22:00:07\n> @@ -98,7 +98,7 @@\n> static void\n> _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)\n> {\n> - if (*te->dataDumper)\n> + if (te->dataDumper)\n> {\n> AH->currToc = te;\n> (*te->dataDumper) ((Archive *) AH, te->oid, te->dataDumperArg);\n\n> Sun's cc complains that *te->dataDumper is not a scalar type.\n\nApplied. Thanks!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Mar 2001 18:11:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Build problem and patch with Sun cc "
}
] |
[
{
"msg_contents": "\nI recently got sent a survey to fill out that is meant to compare various\nObject databases ... there are ~20 sections to this thing, asking\nquestions ranging from General Architecture to interactions with External\nDBMSs ... and *alot* of questions that I've no experience in, and,\ntherefore, no answers to ...\n\nI've HTMLized it as best as I can and put the resultant sections up at\nhttp://www.postgresql.org/survey ...\n\nI'm just starting to go through the sections, so right now, none of them\nhave answers yet ... if ppl could help by reading through and providing\nanswers so that I can provide as accurate of information as possible, it\nshould give for a good initial showing for PgSQL on the Object stage ...\n\nI don't need the whole section answered ... for instance, there is a\nsection on Mapping_Objects_To_External_DBMS, that have sub-sections like:\n\n===============\n C++ Map Generation Processing\n\n This table describes the process for generating the mapping.\n\n\n\n processing .h files\n\n processing generated C++ files\n\n external run-time via reflection\n===============\n\ngetting an email back with the section name and the sub-section\ncut-n-paste with an appropriate 'Yes' or 'No' after each 'question' would\nbe great, and I'll merge that back into the HTML itself ...\n\nThanks ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Sat, 24 Mar 2001 18:44:51 -0400 (AST)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Onject Database Survey ... Help needed ..."
},
{
"msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n\n> I'm just starting to go through the sections, so right now, none of them\n> have answers yet ... if ppl could help by reading through and providing\n> answers so that I can provide as accurate of information as possible, it\n> should give for a good initial showing for PgSQL on the Object stage ...\n\nI honestly don't make sense of some of the survey questions. On the Java\nMap Generation for instance, I feel the possible are answers is not close\nto reality at all - and there is not very much pgsql specific work in this\nprocess either... I usually have a XML representation of the schema that is\ntransformed by XSL into SQL schema and docs. The same XML schema is then\nused to create the basic object model for the schema by using straight\nclass to relation mapping. \n\nregards, \n\n\tGunnar\n",
"msg_date": "25 Mar 2001 15:20:36 +0200",
"msg_from": "Gunnar R|nning <gunnar@candleweb.no>",
"msg_from_op": false,
"msg_subject": "Re: Onject Database Survey ... Help needed ..."
}
] |
[
{
"msg_contents": "Greetings,\n\n I have been following along with the thread and would just like to say a \nfew paragraphs.\n\n Can't the contributors themselves run pgindent on the files which they \nhave changed _just_ before creating the patch which is to be contributed?\n\n Thus, patching a \"pure\" pgindented file from cvs with another \n\"pure\" file created loacally will allow for the contributor to work with \ntheir own ideosyncratic indentation style, yet at the same time keep the \npgindentification of the cvs tree \"pure\".\n\n Otherwise the alternative is to insist that people code according to the \nproject's standards, which is really just another application of the \nslogans; \"When in Rome do as Rome does.\" and \"There is no \"I\" in \"TEAM.\"\n\n If there is one thing the Mozilla project has done well, it is the \ncreation of all the tools to make cooperative coding simpler. As the \nPostgreSQL grows in complexity and following you might find that these \ntools are helpful.\n\n-- \nSincerely etc.,\n\n NAME Christopher Sawtell\n CELL PHONE 021 257 4451\n ICQ UIN 45863470\n EMAIL csawtell @ xtra . co . nz\n CNOTES ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz\n\n -->> Please refrain from using HTML or WORD attachments in e-mails to me \n<<--\n\n",
"msg_date": "Sun, 25 Mar 2001 11:36:35 +1200",
"msg_from": "Christopher Sawtell <csawtell@xtra.co.nz>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run?"
},
{
"msg_contents": "Christopher Sawtell <csawtell@xtra.co.nz> writes:\n> Can't the contributors themselves run pgindent on the files which they \n> have changed _just_ before creating the patch which is to be contributed?\n\nThat would require everyone to have a working copy of BSD indent (gnu\nindent does not behave the same, btw). Doesn't seem real workable.\n\nIt'd be nice if we had a more portable/widespread tool, but that's not\na very high priority, at least not for yours truly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Mar 2001 19:07:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run? "
}
] |
[
{
"msg_contents": "\nContinuing on my quest to get 7.1 to build on Solaris 8 with\nSun's cc 5.0, I found an alignment problem in\nbackend/access/heap/tuptoaster.c\n\n(/opt/SUNWspro/bin/../WS5.0/bin/sparcv9/dbx) where\n=>[1] toast_save_datum(rel = 0x5fb4c8, mainoid = 18714U, attno = 7, value =\n6548664U), line 816 in \"tuptoaster.c\"\n [2] toast_insert_or_update(rel = 0x5fb4c8, newtup = 0x653c98, oldtup =\n(nil)), line 493 in \"tuptoaster.c\"\n [3] heap_tuple_toast_attrs(rel = 0x5fb4c8, newtup = 0x653c98, oldtup =\n(nil)), line 66 in \"tuptoaster.c\"\n [4] heap_insert(relation = 0x5fb4c8, tup = 0x653c98), line 1316 in\n\"heapam.c\"\n [5] update_attstats(relid = 17058U, natts = 7, vacattrstats = 0x627920),\nline 619 in \"analyze.c\"\n [6] analyze_rel(relid = 17058U, anal_cols2 = (nil), MESSAGE_LEVEL = -2),\nline 242 in \"analyze.c\"\n [7] vac_vacuum(VacRelP = (nil), analyze = '\\001', anal_cols2 = (nil)),\nline 248 in \"vacuum.c\"\n [8] vacuum(vacrel = (nil), verbose = '\\0', analyze = '\\001', anal_cols =\n(nil)), line 163 in \"vacuum.c\"\n [9] ProcessUtility(parsetree = 0x5a8358, dest = Debug), line 711 in\n\"utility.c\"\n [10] pg_exec_query_string(query_string = 0x5a81b8 \"VACUUM ANALYZE\\n\", dest\n= Debug, parse_context = 0x599a60), line 773 in \"postgres.c\"\n [11] PostgresMain(argc = 7, argv = 0xffbef93c, real_argc = 7, real_argv =\n0xffbef93c, username = 0x490e28 \"steven\"), line 1904 in \"postgres.c\"\n [12] main(argc = 7, argv = 0xffbef93c), line 171 in \"main.c\"\n\n(/opt/SUNWspro/bin/../WS5.0/bin/sparcv9/dbx) list 813 +7\n 813 * Build a tuple\n 814 */\n 815 t_values[1] = Int32GetDatum(chunk_seq++);\n 816 VARATT_SIZEP(chunk_data) = chunk_size + VARHDRSZ;\n 817 memcpy(VARATT_DATA(chunk_data), data_p, chunk_size);\n 818 toasttup = heap_formtuple(toasttupDesc, t_values,\nt_nulls);\n 819 if (!HeapTupleIsValid(toasttup))\n 820 elog(ERROR, \"Failed to build TOAST tuple\");\n\n(/opt/SUNWspro/bin/../WS5.0/bin/sparcv9/dbx) list 737 +14\n 737 static Datum\n 738 
toast_save_datum(Relation rel, Oid mainoid, int16 attno, Datum\nvalue)\n 739 {\n 740 Relation toastrel;\n 741 Relation toastidx;\n 742 HeapTuple toasttup;\n 743 InsertIndexResult idxres;\n 744 TupleDesc toasttupDesc;\n 745 Datum t_values[3];\n 746 char t_nulls[3];\n 747 varattrib *result;\n 748 char chunk_data[VARHDRSZ + TOAST_MAX_CHUNK_SIZE];\n 749 int32 chunk_size;\n 750 int32 chunk_seq = 0;\n 751 char *data_p;\n\nVARATT_SIZEP casts the char pointer to a varattrib pointer which must\nbe int32 aligned. Nothing requires the char pointer to be so aligned.\n\nThe following cheap and ugly patch gets around that problem. I have\nnot checked the source for other places where this would need to be\nfixed. If you know of a better way to fix this, go ahead and use it.\n\nIndex: src/backend/access/heap/tuptoaster.c\n===================================================================\nRCS file: \n/home/projects/pgsql/cvsroot/pgsql/src/backend/access/heap/tuptoaster.c,v\nretrieving revision 1.20\ndiff -u -r1.20 tuptoaster.c\n--- src/backend/access/heap/tuptoaster.c 2001/03/23 04:49:51 1.20\n+++ src/backend/access/heap/tuptoaster.c 2001/03/24 23:44:07\n@@ -745,7 +745,10 @@\n Datum t_values[3];\n char t_nulls[3];\n varattrib *result;\n- char chunk_data[VARHDRSZ + TOAST_MAX_CHUNK_SIZE];\n+ union {\n+ varattrib a;\n+ char d[VARHDRSZ + TOAST_MAX_CHUNK_SIZE];\n+ } chunk_data;\n int32 chunk_size;\n int32 chunk_seq = 0;\n char *data_p;\n@@ -780,7 +783,7 @@\n * Initialize constant parts of the tuple data\n */\n t_values[0] =\nObjectIdGetDatum(result->va_content.va_external.va_valueid);\n- t_values[2] = PointerGetDatum(chunk_data);\n+ t_values[2] = PointerGetDatum(&chunk_data);\n t_nulls[0] = ' ';\n t_nulls[1] = ' ';\n t_nulls[2] = ' ';\n@@ -813,8 +816,8 @@\n * Build a tuple\n */\n t_values[1] = Int32GetDatum(chunk_seq++);\n- VARATT_SIZEP(chunk_data) = chunk_size + VARHDRSZ;\n- memcpy(VARATT_DATA(chunk_data), data_p, chunk_size);\n+ VARATT_SIZEP(&chunk_data) = chunk_size + VARHDRSZ;\n+ 
memcpy(VARATT_DATA(&chunk_data), data_p, chunk_size);\n toasttup = heap_formtuple(toasttupDesc, t_values, t_nulls);\n if (!HeapTupleIsValid(toasttup))\n elog(ERROR, \"Failed to build TOAST tuple\");\n\nWith this patch applied, make check gets through database initialization.\n\nSteve Nicolai\n\n",
"msg_date": "Sat, 24 Mar 2001 17:54:01 -0600",
"msg_from": "Steve Nicolai <snicolai@mac.com>",
"msg_from_op": true,
"msg_subject": "gmake check fails on Solaris 8 with Sun cc"
},
{
"msg_contents": "Steve Nicolai <snicolai@mac.com> writes:\n> Continuing on my quest to get 7.1 to build on Solaris 8 with\n> Sun's cc 5.0, I found an alignment problem in\n> backend/access/heap/tuptoaster.c\n\nGood catch. I fixed a couple similar problems (assuming that a local\n\"char buffer[N]\" object would be aligned on better-than-char boundaries)\nin xlog.c not long ago. I wonder if any others are lurking?\n\n> - char chunk_data[VARHDRSZ + TOAST_MAX_CHUNK_SIZE];\n> + union {\n> + varattrib a;\n> + char d[VARHDRSZ + TOAST_MAX_CHUNK_SIZE];\n> + } chunk_data;\n\nThis is pretty ugly, it'd be better to use a struct of a struct-varlena\nheader followed by a char[TOAST_MAX_CHUNK_SIZE] data area. Will fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Mar 2001 19:17:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: gmake check fails on Solaris 8 with Sun cc "
}
] |
[
{
"msg_contents": "[Cced: to hackers list]\n\nCan you turn off --enable-locale or set locale to C and try again? I\nguess upper() does bad thing in that it thinks the letter is Polish\nbut actually it is unicode(UTF-8).\n--\nTatsuo Ishii\n\n> I have encountered some problems while trying to create a database that \n> would use the Polish locale. \n> After compiling with --enable-multibyte --enable-locale \n> --enable-unicode-conversion, I initdb-ed postgres with -E UNICODE and \n> started the database (the correct locale was set - pl_PL).\n> \n> However, upon connecting to psql, using \\encoding LATIN2 (as suggested by \n> the iso8859-2 locale) and doing a test :\n> SELECT upper('acelnoszx'); (these are Polish national chars (0x81 etc.), \n> not the ASCII \n> ones), I keep getting the message:\n> \n> utf_to_latin: could not convert UTF-8 (0xc3a3) ignored\n> (repeated 3x for different chars).\n> \n> The letters are not converted to uppercase, either.\n> \n> When using LATIN2 at initdb it works fine (Unfortunately I need an UTF-8 \n> database for the i18n issues with Tcl8.x), so Unicode support is a must.\n> \n> Any hints? (That was tried with PG7.1RC1 and Beta 5)\n> \n> -- \n> Grzegorz Mucha <mucher@tigana.pl> ICQ #91619595, tel.(502)261417\n> ----------------------------------------------------------------\n> An optimist is a man who looks forward to marriage.\n> A pessimist is a married optimist\n",
"msg_date": "Sun, 25 Mar 2001 14:51:05 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Problems with Polish locale"
}
] |
[
{
"msg_contents": "Following up on the recent bug report from Steve Nicolai, I spent a\ntedious hour groveling through all the warnings emitted by gcc with\n-Wcast-align. (We ought to try to reduce the number of them, but that's\na task for another day.)\n\nI found seven places, in addition to the tuptoaster.c error originally\nidentified by Steve, in which the code is assuming that a \"char foo[N]\"\nlocal variable will be aligned on better-than-char boundaries by the\ncompiler. All were inserted since 7.0. All but one were inserted by\nVadim in the new WAL code; the other one is in large-object support\nand is my fault :-(\n\nI will fix these shortly, but I wanted to raise a flag to people:\ndon't do that. An array of X is not guaranteed to be aligned any\nbetter than an X is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 25 Mar 2001 16:27:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "More bogus alignment assumptions"
},
{
"msg_contents": "> Following up on the recent bug report from Steve Nicolai, I spent a\n> tedious hour groveling through all the warnings emitted by gcc with\n> -Wcast-align. (We ought to try to reduce the number of them, but that's\n> a task for another day.)\n> \n> I found seven places, in addition to the tuptoaster.c error originally\n> identified by Steve, in which the code is assuming that a \"char foo[N]\"\n> local variable will be aligned on better-than-char boundaries by the\n> compiler. All were inserted since 7.0. All but one were inserted by\n> Vadim in the new WAL code; the other one is in large-object support\n> and is my fault :-(\n> \n> I will fix these shortly, but I wanted to raise a flag to people:\n> don't do that. An array of X is not guaranteed to be aligned any\n> better than an X is.\n\nAdded to TODO:\n\n\t* Remove warnings created by -Wcast-align\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 25 Mar 2001 16:41:40 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More bogus alignment assumptions"
}
] |
[
{
"msg_contents": "Was just trying to recreate my docs from CVS, using the same tools\nI've been using for months, and got the following:\n\ngmake -C sgml clean\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\nrm -f catalog\nrm -f HTML.manifest *.html\nrm -rf *.1 *.l man1 manl manpage.refs manpage.links manpage.log\nrm -f *.rtf *.tex *.dvi *.aux *.log *.ps *.pdf\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ngmake -C sgml programmer.html\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n(echo \"PUBLIC \\\"-//Norman Walsh//DOCUMENT DocBook HTML\nStylesheet//EN\\\"\n\\\"/usr/local/share/sgml/docbook/dsssl/modular/html/docbook.dsl\\\"\"; \\\necho \"PUBLIC \\\"-//Norman Walsh//DOCUMENT DocBook Print\nStylesheet//EN\\\"\n\\\"/usr/local/share/sgml/docbook/dsssl/modular/print/docbook.dsl\\\"\") >\ncatalog\njade -D . -D ./ref -D ./../graphics -d stylesheet.dsl -i output-html\n-t sgml book-decl.sgml programmer.sgml\nln -sf programmer.html index.html\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\ncd sgml && tar -cf ../programmer.tar --exclude=Makefile\n--exclude='*.sgml' --exclude=ref *.html -C `cd . && pwd`/graphics\ncatalogs.gif connections.gif\ntar: can't add file catalogs.gif : No such file or directory\ntar: can't add file connections.gif : No such file or directory\ngmake: *** [programmer.tar] Error 1\ngmake: *** Deleting file `programmer.tar'\n$ \n\nthis was after making a graphics directory.\n\nDid someone forget to add some files to CVS?\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Sun, 25 Mar 2001 18:07:35 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "docs toolchain appears broke?"
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> cd sgml && tar -cf ../programmer.tar --exclude=Makefile\n> --exclude='*.sgml' --exclude=ref *.html -C `cd . && pwd`/graphics\n> catalogs.gif connections.gif\n> tar: can't add file catalogs.gif : No such file or directory\n> tar: can't add file connections.gif : No such file or directory\n> gmake: *** [programmer.tar] Error 1\n\nKinda looks like Ian broke the compile-in-source-dir case while\nmaking the compile-in-separate-dir case work. Tut tut.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Mar 2001 01:15:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: docs toolchain appears broke? "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Larry Rosenman <ler@lerctr.org> writes:\n> > gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> > cd sgml && tar -cf ../programmer.tar --exclude=Makefile\n> > --exclude='*.sgml' --exclude=ref *.html -C `cd . && pwd`/graphics\n> > catalogs.gif connections.gif\n> > tar: can't add file catalogs.gif : No such file or directory\n> > tar: can't add file connections.gif : No such file or directory\n> > gmake: *** [programmer.tar] Error 1\n> \n> Kinda looks like Ian broke the compile-in-source-dir case while\n> making the compile-in-separate-dir case work. Tut tut.\n\nYes. My apologies. This patch is one way to fix things.\n\nIan\n\nIndex: Makefile\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/Makefile,v\nretrieving revision 1.17\ndiff -u -r1.17 Makefile\n--- Makefile\t2001/03/25 08:32:24\t1.17\n+++ Makefile\t2001/03/26 07:03:23\n@@ -43,17 +43,20 @@\n programmer.tar:\n \t$(MAKE) -C sgml clean\n \t$(MAKE) -C sgml programmer.html\n-\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C `cd $(srcdir) && pwd`/graphics catalogs.gif connections.gif\n+\tabssrcdir=`cd $(srcdir) && pwd`; \\\n+\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C $$abssrcdir/graphics catalogs.gif connections.gif\n \n tutorial.tar:\n \t$(MAKE) -C sgml clean\n \t$(MAKE) -C sgml tutorial.html\n-\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C `cd $(srcdir) && pwd`/graphics clientserver.gif\n+\tabssrcdir=`cd $(srcdir) && pwd`; \\\n+\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C $$abssrcdir/graphics clientserver.gif\n \n postgres.tar:\n \t$(MAKE) -C sgml clean\n \t$(MAKE) -C sgml postgres.html\n-\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C `cd $(srcdir) && pwd`/graphics catalogs.gif clientserver.gif connections.gif\n+\tabssrcdir=`cd $(srcdir) && pwd`; \\\n+\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C $$abssrcdir/graphics 
catalogs.gif clientserver.gif connections.gif\n \n man.tar:\n \t$(MAKE) -C sgml man\n",
"msg_date": "25 Mar 2001 23:04:53 -0800",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs toolchain appears broke?"
},
{
"msg_contents": "* Ian Lance Taylor <ian@airs.com> [010326 01:14]:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> > > cd sgml && tar -cf ../programmer.tar --exclude=Makefile\n> > > --exclude='*.sgml' --exclude=ref *.html -C `cd . && pwd`/graphics\n> > > catalogs.gif connections.gif\n> > > tar: can't add file catalogs.gif : No such file or directory\n> > > tar: can't add file connections.gif : No such file or directory\n> > > gmake: *** [programmer.tar] Error 1\n> > \n> > Kinda looks like Ian broke the compile-in-source-dir case while\n> > making the compile-in-separate-dir case work. Tut tut.\n> \n> Yes. My apologies. This patch is one way to fix things.\n> \nThis patch seems to work. Thanks. \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 26 Mar 2001 05:13:39 -0600",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] docs toolchain appears broke?"
},
{
"msg_contents": "Can Ian's patch be committed, please?\n\nThanks.\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/26/01, 5:13:39 AM, Larry Rosenman <ler@lerctr.org> wrote regarding Re: \n[HACKERS] docs toolchain appears broke?:\n\n\n> * Ian Lance Taylor <ian@airs.com> [010326 01:14]:\n> > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> >\n> > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> > > > cd sgml && tar -cf ../programmer.tar --exclude=Makefile\n> > > > --exclude='*.sgml' --exclude=ref *.html -C `cd . && pwd`/graphics\n> > > > catalogs.gif connections.gif\n> > > > tar: can't add file catalogs.gif : No such file or directory\n> > > > tar: can't add file connections.gif : No such file or directory\n> > > > gmake: *** [programmer.tar] Error 1\n> > >\n> > > Kinda looks like Ian broke the compile-in-source-dir case while\n> > > making the compile-in-separate-dir case work. Tut tut.\n> >\n> > Yes. My apologies. This patch is one way to fix things.\n> >\n> This patch seems to work. Thanks.\n\n> LER\n\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Tue, 27 Mar 2001 15:56:01 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] docs toolchain appears broke?"
},
{
"msg_contents": "\nI still have it. I am waiting for someone to comment on it. Seems you\nare the one to comment. Applying now.\n\n\n[ Charset ISO-8859-1 unsupported, converting... ]\n> Can Ian's patch be committed, please?\n> \n> Thanks.\n> LER\n> \n> \n> >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n> \n> On 3/26/01, 5:13:39 AM, Larry Rosenman <ler@lerctr.org> wrote regarding Re: \n> [HACKERS] docs toolchain appears broke?:\n> \n> \n> > * Ian Lance Taylor <ian@airs.com> [010326 01:14]:\n> > > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > >\n> > > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > > gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> > > > > cd sgml && tar -cf ../programmer.tar --exclude=Makefile\n> > > > > --exclude='*.sgml' --exclude=ref *.html -C `cd . && pwd`/graphics\n> > > > > catalogs.gif connections.gif\n> > > > > tar: can't add file catalogs.gif : No such file or directory\n> > > > > tar: can't add file connections.gif : No such file or directory\n> > > > > gmake: *** [programmer.tar] Error 1\n> > > >\n> > > > Kinda looks like Ian broke the compile-in-source-dir case while\n> > > > making the compile-in-separate-dir case work. Tut tut.\n> > >\n> > > Yes. My apologies. This patch is one way to fix things.\n> > >\n> > This patch seems to work. 
Thanks.\n> \n> > LER\n> \n> > --\n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 27 Mar 2001 11:33:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs toolchain appears broke?"
},
{
"msg_contents": "\nApplied. Thanks.\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc/src/sgml'\n> > > cd sgml && tar -cf ../programmer.tar --exclude=Makefile\n> > > --exclude='*.sgml' --exclude=ref *.html -C `cd . && pwd`/graphics\n> > > catalogs.gif connections.gif\n> > > tar: can't add file catalogs.gif : No such file or directory\n> > > tar: can't add file connections.gif : No such file or directory\n> > > gmake: *** [programmer.tar] Error 1\n> > \n> > Kinda looks like Ian broke the compile-in-source-dir case while\n> > making the compile-in-separate-dir case work. Tut tut.\n> \n> Yes. My apologies. This patch is one way to fix things.\n> \n> Ian\n> \n> Index: Makefile\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/Makefile,v\n> retrieving revision 1.17\n> diff -u -r1.17 Makefile\n> --- Makefile\t2001/03/25 08:32:24\t1.17\n> +++ Makefile\t2001/03/26 07:03:23\n> @@ -43,17 +43,20 @@\n> programmer.tar:\n> \t$(MAKE) -C sgml clean\n> \t$(MAKE) -C sgml programmer.html\n> -\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C `cd $(srcdir) && pwd`/graphics catalogs.gif connections.gif\n> +\tabssrcdir=`cd $(srcdir) && pwd`; \\\n> +\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C $$abssrcdir/graphics catalogs.gif connections.gif\n> \n> tutorial.tar:\n> \t$(MAKE) -C sgml clean\n> \t$(MAKE) -C sgml tutorial.html\n> -\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C `cd $(srcdir) && pwd`/graphics clientserver.gif\n> +\tabssrcdir=`cd $(srcdir) && pwd`; \\\n> +\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C $$abssrcdir/graphics clientserver.gif\n> \n> postgres.tar:\n> \t$(MAKE) -C sgml clean\n> \t$(MAKE) -C sgml postgres.html\n> -\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C `cd $(srcdir) && pwd`/graphics catalogs.gif clientserver.gif connections.gif\n> +\tabssrcdir=`cd 
$(srcdir) && pwd`; \\\n> +\tcd sgml && $(TAR) -cf ../$@ $(TAREXCLUDE) *.html -C $$abssrcdir/graphics catalogs.gif clientserver.gif connections.gif\n> \n> man.tar:\n> \t$(MAKE) -C sgml man\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 27 Mar 2001 11:34:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs toolchain appears broke?"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I still have it. I am waiting for someone to comment on it. Seems you\n> are the one to comment. Applying now.\n\nI was waiting for Peter E. to comment ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Mar 2001 11:35:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs toolchain appears broke? "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I still have it. I am waiting for someone to comment on it. Seems you\n> > are the one to comment. Applying now.\n> \n> I was waiting for Peter E. to comment ...\n\nYea, me too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 27 Mar 2001 11:36:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] docs toolchain appears broke?"
}
] |
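The failure mode Ian's patch addresses can be reproduced in a few lines of shell (directory names here are hypothetical). With `srcdir = .`, the original rule resolved the path *after* `cd sgml`, so the relative `srcdir` pointed into `sgml/` instead of the source directory and `tar` could not find the `.gif` files; capturing the absolute path before changing directory fixes it:

```shell
# Minimal reproduction of the doc/src/Makefile breakage (paths hypothetical).
demo=$(mktemp -d)
mkdir -p "$demo/src/graphics" "$demo/src/sgml"
touch "$demo/src/graphics/catalogs.gif"
cd "$demo/src"
srcdir=.

# Broken rule: `cd sgml && tar ... -C `cd $(srcdir) && pwd`/graphics ...`
# resolves srcdir only after entering sgml/, landing in the wrong place.
cd sgml
broken=$(cd "$srcdir" && pwd)     # .../src/sgml, not .../src
cd ..

# Ian's fix: capture the absolute source dir before changing directory.
abssrcdir=$(cd "$srcdir" && pwd)
cd sgml
result=fail
test -f "$abssrcdir/graphics/catalogs.gif" && result=ok
echo "$result"
```

This is also why the separate-build-dir case worked all along: there `srcdir` is an absolute or differently-rooted path, so the late resolution happened to yield the right directory.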