[ { "msg_contents": "> > Was the following bug already fixed ?\n> \n> Dunno. I've changed the WAL ReadRecord code so that it fails soft (no\n> Asserts or elog(STOP)s) for all failure cases, so the particular crash\n> mode exhibited here should be gone. But I'm not sure why the code\n> appears to be trying to open the wrong log segment, as Vadim comments.\n> That bug might still be there. Need to try to reproduce the problem\n> with new code.\n\nDid you try to start up with wal-debug?\n\nVadim\n", "msg_date": "Thu, 8 Mar 2001 09:31:43 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL does not recover gracefully from out-of-disk-sp\n\tace" } ]
[ { "msg_contents": "> > I see that seek+write was changed to write-s in XLogFileInit\n> > (that was induced by subj, right?), but what about problem\n> > itself?\n> \n> > BTW, were performance tests run after seek+write --> write-s\n> > change?\n> \n> That change was for safety, not for performance. It might be a\n> performance win on systems that support fdatasync properly (because it\n> lets us use fdatasync), otherwise it's probably not a performance win.\n\nEven with true fdatasync it's not obviously good for performance - it takes\ntoo long time to write 16Mb files and fills OS buffer cache with trash-:(\nProbably, we need in separate process like LGWR (log writer) in Oracle.\nI also like the Andreas idea about re-using log files.\n\n> But we need it regardless --- if you didn't want a fully-allocated WAL\n> file, why'd you bother with the original seek-and-write-1-byte code?\n\nI considered this mostly as hint for OS about how log file should be\nallocated (to decrease fragmentation). Not sure how OSes use such hints\nbut seek+write costs nothing.\n\nVadim\n", "msg_date": "Thu, 8 Mar 2001 09:56:24 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL does not recover gracefully from out-of-disk-sp\n\tace" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n\n> > But we need it regardless --- if you didn't want a fully-allocated WAL\n> > file, why'd you bother with the original seek-and-write-1-byte code?\n> \n> I considered this mostly as hint for OS about how log file should be\n> allocated (to decrease fragmentation). Not sure how OSes use such hints\n> but seek+write costs nothing.\n\nDoing a seek to a large value and doing a write is not a hint to a\nUnix system that you are going to write a large sequential file. If\nanything, it's a hint that you are going to write a sparse file. A\nUnix kernel will optimize by not allocating blocks you aren't going to\nwrite to.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 97: Oh this age! How tasteless and ill-bred it is.\n\t\t-- Gaius Valerius Catullus\n", "msg_date": "08 Mar 2001 10:51:38 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: WAL does not recover gracefully from out-of-disk-sp ace" } ]
[ { "msg_contents": "I am currently looking at a frozen system: a backend crashed during XLOG\nwrite (which I was deliberately provoking, via running it out of disk\nspace), and now the postmaster is unable to recover because it's waiting\naround for a checkpoint process that it had launched milliseconds before\nthe crash. The checkpoint process, unfortunately, is not going to quit\nanytime soon because it's hung up trying to get a spinlock that the\ncrashing backend left locked.\n\nEventually the checkpoint process will time out the spinlock and abort\n(but please note that this is true only because I insisted --- Vadim\nwanted to have infinite timeouts on the WAL spinlocks. I think this is\ngood evidence that that's a bad idea). However, while sitting here\nlooking at it I can't help wondering whether the checkpoint process\nshouldn't have responded to the SIGTERM that the postmaster sent it\nwhen the other backend crashed.\n\nIs it really such a good idea for the checkpoint process to ignore\nSIGTERM?\n\nWhile we're at it: is it really such a good idea to use elog(STOP)\nall over the place in the WAL stuff? If XLogFileInit had chosen\nto exit with elog(FATAL), then we would have released the spinlock\non the way out of the failing backend, and the checkpointer wouldn't\nbe stuck.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 13:34:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Checkpoint process signal handling seems wrong" } ]
[ { "msg_contents": "Greetings,\n\nI have a real simple table with a timestamp field. The timestamp field has \nan index on it. But, the index does not seem to be taken into account for \nselects that return rows:\n\npglog=# explain select time_stamp from history_entries where time_stamp < \n'03-01-2000';\nNOTICE: QUERY PLAN:\n\nIndex Scan using hist_entries_timestamp on \nhistory_entries (cost=0.00..12810.36 rows=3246 width=8)\n\nEXPLAIN\npglog=# explain select time_stamp from history_entries where time_stamp < \n'04-01-2000';\nNOTICE: QUERY PLAN:\n\nSeq Scan on history_entries (cost=0.00..160289.71 rows=138215 width=8)\n\nEXPLAIN\npglog=# set enable_seqscan to off;\nSET VARIABLE\npglog=# explain select time_stamp from history_entries where time_stamp < \n'04-01-2000';\nNOTICE: QUERY PLAN:\n\nIndex Scan using hist_entries_timestamp on \nhistory_entries (cost=0.00..368241.51 rows=138215 width=8)\n\nEXPLAIN\npglog=# set enable_seqscan to on;\nSET VARIABLE\npglog=#\n\nThe query where the time_stamp < '03-01-2000' does not return any rows, the \n04-01-2000 date does return rows. When I disable seqscan the query is \nalmost instant, but with it on, it takes about 3 or 4 minutes. Why can't \nthe query planner use the index in the later case?\n\nThanks,\nMatthew\n\n", "msg_date": "Thu, 08 Mar 2001 13:49:42 -0500", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Query not using index, please explain." }, { "msg_contents": "Richard,\n\nThanks for the response, I guess I should have included a little more \ninformation. The table contains 3.5 million rows. The indexes were \ncreated after the data was imported into the table and I had just run \nvacuum and vacuum analyze on the database before trying the queries and \nsending this question to hackers.\n\nWhen I turned the seqscan variable off and ran the query with the \n'04-01-2000' date the results were literally instantaneous. Turn the \nseqscan back on and it takes right around 3 minutes. Also, the query for \nany date older than the '04-01-2000' returns zero rows. The actual number \nof rows for the '04-01-2000' select is right around 8300.\n\nHere is the table for more information:\n\npglog=# \\d history_entries\n Table \"history_entries\"\n Attribute | Type | Modifier\n------------+-------------+----------\n domain | varchar(80) |\n time_stamp | timestamp |\n response | integer |\n transfered | integer |\n reqtime | integer |\n entry | text |\nIndices: hist_entries_domain,\n hist_entries_timestamp\n\nI'm also having problems with this query:\n\nselect domain from history_entries group by domain;\n\nTo me, since there is an index on domain, it seems like this should be a \nrather fast thing to do? It takes a *very* long time, no matter if I turn \nseqscan on or off.\n\npglog=# select version();\n version\n-------------------------------------------------------------------------\n PostgreSQL 7.0.3 on i386-unknown-freebsdelf3.4, compiled by gcc 2.7.2.3\n(1 row)\n\nThanks,\nMatthew\n\n\nAt 07:18 PM 3/8/2001 +0000, you wrote:\n>On Thu, Mar 08, 2001 at 01:49:42PM -0500, Matthew Hagerty wrote:\n> > Greetings,\n> >\n> > I have a real simple table with a timestamp field. The timestamp field \n> has\n> > an index on it. But, the index does not seem to be taken into account for\n> > selects that return rows:\n> >\n> > pglog=# explain select time_stamp from history_entries where time_stamp <\n> > '03-01-2000';\n> > NOTICE: QUERY PLAN:\n> >\n> > Index Scan using hist_entries_timestamp on\n> > history_entries (cost=0.00..12810.36 rows=3246 width=8)\n> >\n> > EXPLAIN\n> > pglog=# explain select time_stamp from history_entries where time_stamp <\n> > '04-01-2000';\n> > NOTICE: QUERY PLAN:\n> >\n> > Seq Scan on history_entries (cost=0.00..160289.71 rows=138215 width=8)\n> >\n> > EXPLAIN\n> > pglog=# set enable_seqscan to off;\n> > SET VARIABLE\n> > pglog=# explain select time_stamp from history_entries where time_stamp <\n> > '04-01-2000';\n> > NOTICE: QUERY PLAN:\n> >\n> > Index Scan using hist_entries_timestamp on\n> > history_entries (cost=0.00..368241.51 rows=138215 width=8)\n> >\n> > EXPLAIN\n> > pglog=# set enable_seqscan to on;\n> > SET VARIABLE\n> > pglog=#\n> >\n> > The query where the time_stamp < '03-01-2000' does not return any rows, \n> the\n> > 04-01-2000 date does return rows. When I disable seqscan the query is\n> > almost instant, but with it on, it takes about 3 or 4 minutes. Why can't\n> > the query planner use the index in the later case?\n>\n>Well, it can, it just chooses not to. Your second EXPLAIN shows that\n>it thinks it's going to get 138215 rows from that select; it then\n>calculates that it would be more expensive to use the index than simply\n>to scan the table. Presumably it actually returns many fewer rows than\n>that. Have you done a VACUUM ANALYZE recently? If you get plans this\n>badly wrong immediately after a VACUUM ANALYZE, *then*'s the time to\n>ask -hackers about it (FAQ item 4.9).\n>\n>Richard\n\n", "msg_date": "Thu, 08 Mar 2001 14:43:54 -0500", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Re: Query not using index, please explain." }, { "msg_contents": "Matthew Hagerty <mhagerty@voyager.net> writes:\n> The query where the time_stamp < '03-01-2000' does not return any rows, the \n> 04-01-2000 date does return rows. When I disable seqscan the query is \n> almost instant, but with it on, it takes about 3 or 4 minutes. Why can't \n> the query planner use the index in the later case?\n\nIt *can* (and did, in two of the three examples you gave). It just\ndoesn't think the indexscan is faster --- note the cost estimates.\nEvidently the cost estimates are way off, probably because the estimated\nnumber of selected rows is way off.\n\nHave you done a VACUUM ANALYZE lately? Not that that will help if the\ndistribution of timestamps is highly irregular :-(. See the many past\ndiscussions of these issues.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 15:19:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query not using index, please explain. " }, { "msg_contents": "On Thu, Mar 08, 2001 at 02:43:54PM -0500, Matthew Hagerty wrote:\n> Richard,\n> \n> Thanks for the response, I guess I should have included a little more \n> information. The table contains 3.5 million rows. The indexes were \n> created after the data was imported into the table and I had just run \n> vacuum and vacuum analyze on the database before trying the queries and \n> sending this question to hackers.\n> \n> When I turned the seqscan variable off and ran the query with the \n> '04-01-2000' date the results were literally instantaneous. Turn the \n> seqscan back on and it takes right around 3 minutes. Also, the query for \n> any date older than the '04-01-2000' returns zero rows. The actual number \n> of rows for the '04-01-2000' select is right around 8300.\n\nThis is where you need an expert. :) But I'll have a go and someone\nwill correct me if I'm wrong...\n\nThe statistics which are kept aren't fine-grained enough to be right\nhere. All the optimiser knows are the highest and lowest values of\nthe attribute, the most common value (not really useful here), the\nnumber of nulls in the column, and the \"dispersion\" (a sort of\nhandwavy measure of how bunched-together the values are). So in a\ncase like this, where effectively the values are all different over\na certain range, all it can do is (more or less) linearly interpolate\nin the range to guess how many tuples are going to be returned. Which\nmeans it's liable to be completely wrong if your values aren't\nevenly distributed over their whole range, which it seems they aren't.\nIt thinks you're going to hit around 1/28 of the tuples in this table,\npresumably because '04/01/2000' is about 1/28 of the way from your\nminimum value to your maximum.\n\nThis sort of thing will all become much better one fine day when\nwe have much better statistics available, and so many of us want\nsuch things that that fine day will surely come. Until then, I think\nyou're best off turning off seqscans from your client code when\nyou know they'll be wrong. (That's what we do here in several similar\ncases).\n\nCan someone who really knows this stuff (Tom?) step in if what I've\njust said is completely wrong?\n\n> select domain from history_entries group by domain;\n> \n> To me, since there is an index on domain, it seems like this should be a \n> rather fast thing to do? It takes a *very* long time, no matter if I turn \n> seqscan on or off.\n\nThe reason this is slow is that Postgres always has to look at heap\ntuples, even when it's been sent there by indexes. This in turn is\nbecause of the way the storage manager works (only by looking in the\nheap can you tell for sure whether a tuple is valid for the current\ntransaction). So a \"group by\" always has to look at every heap tuple\n(that hasn't been eliminated by a where clause). \"select distinct\"\nhas the same problem. I don't think there's a way to do what you\nwant here with your existing schema without a sequential scan over\nthe table.\n\n\nRichard\n", "msg_date": "Thu, 8 Mar 2001 20:38:33 +0000", "msg_from": "Richard Poole <richard.poole@vi.net>", "msg_from_op": false, "msg_subject": "Re: Query not using index, please explain." }, { "msg_contents": "Richard Poole <richard.poole@vi.net> writes:\n> [ snip a bunch of commentary about optimizer statistics ]\n> Can someone who really knows this stuff (Tom?) step in if what I've\n> just said is completely wrong?\n\nLooked good to me.\n\n>> select domain from history_entries group by domain;\n>> \n>> To me, since there is an index on domain, it seems like this should be a \n>> rather fast thing to do? It takes a *very* long time, no matter if I turn \n>> seqscan on or off.\n\n> The reason this is slow is that Postgres always has to look at heap\n> tuples, even when it's been sent there by indexes. This in turn is\n> because of the way the storage manager works (only by looking in the\n> heap can you tell for sure whether a tuple is valid for the current\n> transaction). So a \"group by\" always has to look at every heap tuple\n> (that hasn't been eliminated by a where clause). \"select distinct\"\n> has the same problem. I don't think there's a way to do what you\n> want here with your existing schema without a sequential scan over\n> the table.\n\nBut this last I object to. You certainly could use an index scan here,\nit's just that it's not very likely to be faster. The way that Postgres\npresently does GROUP BY and SELECT DISTINCT is to sort the tuples into\norder and then combine/eliminate adjacent duplicates. (If you've ever\nused a \"sort | uniq\" pipeline in Unix then you know the principle.)\nSo the initial requirement for this query is to scan the history_entries\ntuples in order by domain. We can do this either by a sequential scan\nand explicit sort, or by an index scan using an ordered index on domain.\nIt turns out that unless the physical order of the tuples is pretty\nclose to their index order, the index-scan method is actually slower,\nbecause it results in a lot of random-access thrashing. But neither way\nis exactly speedy.\n\nOne thing that's on the to-do list is to look at reimplementing these\noperations using in-memory hash tables, with one hash entry per distinct\nvalue of the GROUP/DISTINCT columns. Then you can just do a sequential\nscan, with no sort, and as long as there aren't so many distinct values\nas to make the hash table overflow memory you're going to win. However\nuntil we have statistics that can give us some idea how many distinct\nvalues there might be, there's no way for the planner to make an\nintelligent choice between this way and the sort/uniq way...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 12:34:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query not using index, please explain. " } ]
[ { "msg_contents": "> > It seems that you want guarantee more than me, Tom -:)\n> \n> No, but I want a system that's not brittle. You seem to be content to\n> design a system that is reliable as long as the WAL log is OK \n> but loses the entire database unrecoverably as soon as one bit goes bad\n> in the log.\n\nI don't see how absence of old checkpoint forces losing entire database.\nYou probably will get better consistency by re-applying modifications\nwhich supposed to be in data files already but it seems questionable\nto me.\n\n> > BTW, in some my tests size of on-line logs was ~ 200Mb with default\n> > checkpoint interval. So, it's worth to care about on-line logs size.\n> \n> Okay, but to me that suggests we need a smarter log management strategy,\n> not a management strategy that throws away data we might wish we still\n> had (for manual analysis if nothing else).\n\nThis is what should be covered by archiving log files. Unimplemented -:(\n\n> Perhaps the checkpoint creation rule should be \"every M seconds *or*\n> every N megabytes of log, whichever comes first\".\n\nI like this! Regardless usability of keeping older checkpoint (especially\nin future, with log archiving) your rule is worth in any case.\n(Nevertheless, keeping two checkpoints still increases disk requirements -:)\nBut seems I have to waive my objection if I didn't convince you - it's\nreally simplest way to get WAL restart-ability and I personally have no\nability to implement log scanning now).\n\n> > Please convince me that NEXTXID is necessary.\n> > Why add anything that is not useful?\n> \n> I'm not convinced that it's not necessary. In particular, \n> consider the case where we are trying to recover from a crash using\n> an on-line checkpoint as our last readable WAL entry. In the pre-NEXTXID\n> code, this checkpoint would contain the current XID counter and an\n> advanced-beyond-current OID counter. I think both of those numbers should\n> be advanced beyond current, so that there's some safety margin against\n> reusing XIDs/OIDs that were allocated by now-lost XLOG entries.\n> The OID code is doing this right, but the XID code wasn't.\n> \n> Again, it's a question of brittleness. Yes, as long as everything\n> operates as designed and the WAL log never drops a bit, we don't need\n> it. But I want a safety margin for when things aren't perfect.\n\nOnce again - my point is that in the event of lost log record one shouldn't\ntry to use existent database but just dump it, etc - ie no reason to keep\ninfo about allocated XIDs. But keeping NEXTXID costs nothing, at least -:)\n\nVadim\n", "msg_date": "Thu, 8 Mar 2001 11:00:47 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Proposed WAL changes " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> No, but I want a system that's not brittle. You seem to be content to\n>> design a system that is reliable as long as the WAL log is OK \n>> but loses the entire database unrecoverably as soon as one bit goes bad\n>> in the log.\n\n> I don't see how absence of old checkpoint forces losing entire database.\n\nAs the code stood last week that's what would happen, because the system\nwould not restart unless pg_control pointed to a valid checkpoint\nrecord. I addressed that in a way that seemed good to me.\n\nNow, from what you've said in this conversation you would rather have\nthe system scan XLOG to decide where to replay from if it cannot read\nthe last checkpoint record. That would be okay with me, but even with\nthat approach I do not think it's safe to truncate the log to nothing\nas soon as we've written a checkpoint record. I want to see a reasonable\namount of log data there at all times. I don't insist that \"reasonable\namount\" necessarily means \"back to the prior checkpoint\" --- but that's\na simple and easy-to-implement interpretation.\n\n> You probably will get better consistency by re-applying modifications\n> which supposed to be in data files already but it seems questionable\n> to me.\n\nIt's not a guarantee, no, but it gives you a better probability of\nrecovering recent changes when things are hosed.\n\nBTW, can we really trust checkpoint to mean that all data file changes\nare down on disk? I see that the actual implementation of checkpoint is\n\n\twrite out all dirty shmem buffers;\n\tsync();\n\tif (IsUnderPostmaster)\n\t\tsleep(2);\n\tsync();\n\twrite checkpoint record to XLOG;\n\tfsync XLOG;\n\nNow HP's man page for sync() says\n\n The writing, although scheduled, is not necessarily complete upon\n return from sync.\n\nI can assure you that 2 seconds is nowhere near enough to ensure that a\nsync is complete on my workstation... and I doubt that \"scheduled\" means\n\"guaranteed to complete before any subsequently-requested I/O is done\".\nI think it's entirely possible that the checkpoint record will hit the\ndisk before the last heap buffer does.\n\nTherefore, even without considering disk drive write reordering, I do\nnot believe that a checkpoint guarantees very much, and so I think it's\npretty foolish to delete the preceding XLOG data immediately afterwards.\n\n\n>> Perhaps the checkpoint creation rule should be \"every M seconds *or*\n>> every N megabytes of log, whichever comes first\".\n\n> I like this! Regardless usability of keeping older checkpoint (especially\n> in future, with log archiving) your rule is worth in any case.\n\nOkay, I'll see if I can do something with this idea.\n\nOther than what we've discussed, do you have any comments/objections to\nmy proposed patch? I've been holding off committing it so that you have\ntime to review it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 14:30:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes " } ]
[ { "msg_contents": "> Eventually the checkpoint process will time out the spinlock and abort\n> (but please note that this is true only because I insisted --- Vadim\n> wanted to have infinite timeouts on the WAL spinlocks. I think this is\n> good evidence that that's a bad idea).\n\nI disagree - this is evidence of bug in implementation -:)\nTimeout will take too long time - so it's not solution.\nYears ago we used timeouts for deadlock detection.\nSpin timeouts are yesterday too -:)\n\n> However, while sitting here looking at it I can't help wondering whether\n> the checkpoint process shouldn't have responded to the SIGTERM that the\n> postmaster sent it when the other backend crashed.\n> \n> Is it really such a good idea for the checkpoint process to ignore\n> SIGTERM?\n\nSeems not, SIGTERM --> elog(STOP) should be Ok here.\n\n> While we're at it: is it really such a good idea to use elog(STOP)\n> all over the place in the WAL stuff? If XLogFileInit had chosen\n\nI just hadn't time to consider each particular case.\nIt's better to restart than to break WAL rules (only bogus disks\nare allowed to do this -:)).\n\n> to exit with elog(FATAL), then we would have released the spinlock\n> on the way out of the failing backend, and the checkpointer wouldn't\n> be stuck.\n\nI didn't use elog(FATAL) exactly because of it releases spins!\nWho knows what and how much other backend will have time to do\nif we'll release spins when things are bad. Each particular case\nmust be carefully considered.\n\nExample: one backend failed to insert log record for modified data\npage, releases that page write spin lock, concurrent checkpoint maker\ntakes read spin lock on that buffer and write it to disk before\npostmaster kill it...\n\nVadim\n", "msg_date": "Thu, 8 Mar 2001 11:30:34 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Checkpoint process signal handling seems wrong" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> However, while sitting here looking at it I can't help wondering whether\n>> the checkpoint process shouldn't have responded to the SIGTERM that the\n>> postmaster sent it when the other backend crashed.\n>> \n>> Is it really such a good idea for the checkpoint process to ignore\n>> SIGTERM?\n\n> Seems not, SIGTERM --> elog(STOP) should be Ok here.\n\nYes, after further thought this seems not only desirable but\n*necessary*. Else the checkpoint maker might be writing bad data\nfrom corrupted shmem structures, which is exactly what the system-wide\nrestart mechanism is supposed to prevent.\n\nI'll fix the checkpoint process to accept SIGTERM and SIGUSR1 (but\nnot SIGINT) from the postmaster.\n\n\n>> While we're at it: is it really such a good idea to use elog(STOP)\n>> all over the place in the WAL stuff? If XLogFileInit had chosen\n\n> I just hadn't time to consider each particular case.\n\nOkay. You're right, that probably needs case-by-case thought that\nwe haven't time for right now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 14:45:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Checkpoint process signal handling seems wrong " } ]
[ { "msg_contents": "> Other than what we've discussed, do you have any \n> comments/objections to my proposed patch?\n> I've been holding off committing it so that you have\n> time to review it...\n\nSorry - I'm heading to meet Marc, Thomas & Geoff right now,\nwill try to comment asap.\n\nVadim\n", "msg_date": "Thu, 8 Mar 2001 11:34:47 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Proposed WAL changes " } ]
[ { "msg_contents": "To implement the idea of performing a checkpoint after every so many\nXLOG megabytes (as well as after every so many seconds), I need to pick\nan additional signal number for the postmaster to accept. Seems like\nthe most appropriate choice for this is SIGUSR1, which isn't currently\nbeing used at the postmaster level.\n\nHowever, if I just do that, then SIGUSR1 and SIGQUIT will have\ncompletely different meanings for the postmaster and for the backends,\nin fact SIGQUIT to the postmaster means send SIGUSR1 to the backends.\nThis seems hopelessly confusing.\n\nI think it'd be a good idea to change the code so that SIGQUIT is the\nper-backend quickdie() signal, not SIGUSR1, to bring the postmaster and\nbackend signals back into some semblance of agreement.\n\nFor the moment we could leave the backends also accepting SIGUSR1 as\nquickdie, just in case someone out there is in the habit of sending\nthat signal manually to individual backends. Eventually backend SIGUSR1\nmight be reassigned to mean something else. (I suspect Bruce is\ncoveting it already ;-).)\n\nAny objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 16:06:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Use SIGQUIT instead of SIGUSR1?" }, { "msg_contents": "On Thu, Mar 08, 2001 at 04:06:16PM -0500, Tom Lane wrote:\n> To implement the idea of performing a checkpoint after every so many\n> XLOG megabytes (as well as after every so many seconds), I need to pick\n> an additional signal number for the postmaster to accept. Seems like\n> the most appropriate choice for this is SIGUSR1, which isn't currently\n> being used at the postmaster level.\n> \n> However, if I just do that, then SIGUSR1 and SIGQUIT will have\n> completely different meanings for the postmaster and for the backends,\n> in fact SIGQUIT to the postmaster means send SIGUSR1 to the backends.\n> This seems hopelessly confusing.\n> \n> I think it'd be a good idea to change the code so that SIGQUIT is the\n> per-backend quickdie() signal, not SIGUSR1, to bring the postmaster and\n> backend signals back into some semblance of agreement.\n> \n> For the moment we could leave the backends also accepting SIGUSR1 as\n> quickdie, just in case someone out there is in the habit of sending\n> that signal manually to individual backends. Eventually backend SIGUSR1\n> might be reassigned to mean something else. (I suspect Bruce is\n> coveting it already ;-).)\n\nThe number and variety of signals used in PG is already terrifying.\n\nAttaching a specific meaning to SIGQUIT may be dangerous if the OS and \nits daemons also send SIGQUIT to mean something subtly different. I'd \nrather see a reduction in the use of signals, and a movement toward more \nmodern, better behaved interprocess communication mechanisms. Still, \n\"if it were done when 'tis done, then 'twere well It were done\" cleanly.\n\n--\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 8 Mar 2001 13:33:50 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Use SIGQUIT instead of SIGUSR1?" }, { "msg_contents": "Tom Lane writes:\n\n> I think it'd be a good idea to change the code so that SIGQUIT is the\n> per-backend quickdie() signal, not SIGUSR1, to bring the postmaster and\n> backend signals back into some semblance of agreement.\n\nI think we agreed on this already when someone wanted to use a signal for\nsynchronizing \"near-committers\". Still seems like a good idea.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Thu, 8 Mar 2001 23:18:58 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Use SIGQUIT instead of SIGUSR1?" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> On Thu, Mar 08, 2001 at 04:06:16PM -0500, Tom Lane wrote:\n>> I think it'd be a good idea to change the code so that SIGQUIT is the\n>> per-backend quickdie() signal, not SIGUSR1, to bring the postmaster and\n>> backend signals back into some semblance of agreement.\n\n> The number and variety of signals used in PG is already terrifying.\n\n> Attaching a specific meaning to SIGQUIT may be dangerous if the OS and \n> its daemons also send SIGQUIT to mean something subtly different.\n\nQuite true. One additional reason for this change is to make SIGQUIT\ndo something a little closer to what one would expect, ie, force-quit\nthe backend, and in particular to ensure that SIGQUIT'ing the whole\npostmaster-and-backends process group produces a reasonable result.\n\nWe've been gradually rationalizing the signal usage over the last few\nreleases, and this is another step in the process.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 17:54:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Use SIGQUIT instead of SIGUSR1? " } ]
[ { "msg_contents": "Greetings,\n\nSorry about all the posts lately, but things seem to be running *really* \nslow on my database. I have two tables, both are identical and one is used \nto hold entries older than a certain date, i.e. the history table. I use \nthis query to move the old records from one to the other. In this case, is \neach insert part of a big transaction that commits when it is done, or is \neach insert its own transaction? Is there anything I can do to make this \nfaster? On average the entries table has about 50,000 records and the \nhistory_entries table has about 3.5 million.\n\ninsert into history_entries\nselect * from entries where domain='somevalue' and time_stamp between \n'date1' and 'date2'\n\nThanks,\nMatthew\n\n", "msg_date": "Thu, 08 Mar 2001 16:14:08 -0500", "msg_from": "Matthew Hagerty <matthew@venux.net>", "msg_from_op": true, "msg_subject": "Is INSERT FROM considered a transaction?" } ]
[ { "msg_contents": "hi,\nI am using postgresql-7.1beta4 and am trying to use the large text fields. \nI have heard of TOAST. There is little documentation. \n I found one section about creating a data type,\nthen creating two functions to convert the data types.\nIs this how TOAST is implemented?\nAm I on the right track? If so, what do\nthe conversion functions look like? I am using plpgsql.\nThanks, Pam\n\n", "msg_date": "Fri, 9 Mar 2001 13:54:40 +1100 ", "msg_from": "Pam Withnall <Pamw@zoom.com.au>", "msg_from_op": true, "msg_subject": "TOAST" }, { "msg_contents": "On Fri, Mar 09, 2001 at 01:54:40PM +1100, Pam Withnall wrote:\n> hi,\n> I am using postgresql-7.1beta4 and am trying to use the large text fields. \n> I have heard of TOAST. There is little documentation. \n> I found one section about creating a data type,\n> then creating two functions to convert the data types.\n> Is this how TOAST is implemented?\n> Am I on the right track? If so, what do\n> the conversion functions look like? I am using plpgsql.\n> Thanks, Pam\n\nTOAST works transparently for the user. It just means that the old\npre-7.1 8k (actually 32k) row length limit is obsolete. In order to use\nlarge text fields all you need to do is install 7.1 (which you already\nhave, as you say). No function creation etc. is required.\n\nRegards, Frank\n", "msg_date": "Tue, 13 Mar 2001 09:11:15 +0100", "msg_from": "Frank Joerdens <frank@joerdens.de>", "msg_from_op": false, "msg_subject": "Re: TOAST" } ]
[ { "msg_contents": "In case anyone cares, there is a bug in pg_dump in 7.0.3 when using the bit\nfields.\n\neg: (This is a dump)\n\nINSERT INTO \"menu_plans\" VALUES (7,'B''100000000''');\nINSERT INTO \"menu_plans\" VALUES (6,'B''100000000''');\nINSERT INTO \"menu_plans\" VALUES (8,'B''100000000''');\n\nI think what's happening is that pg_dump is automatically putting quotes\naround the fields, and escaping quotes in the string. Strangely enough, if\nthis dump is restored then you get your original bitset back, with a zero\nappended to each end! Has this been fixed in 7.1?\n\nActually, I think that in 7.0.3's parser implementation they should be\ndumped as :\n\nINSERT INTO \"menu_plans\" VALUES (7,'b100000000');\nINSERT INTO \"menu_plans\" VALUES (6,'b100000000');\nINSERT INTO \"menu_plans\" VALUES (8,'b100000000');\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n", "msg_date": "Fri, 9 Mar 2001 12:37:18 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "7.0.3 Bitset dumping" }, { "msg_contents": "Christopher Kings-Lynne writes:\n\n> In case anyone cares, there is a bug in pg_dump in 7.0.3 when using the bit\n> fields.\n\nThere are no bit fields in 7.0, at least no officially. 7.1 does support\nthem.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 17:17:12 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.0.3 Bitset dumping" } ]
[ { "msg_contents": "\n> > Even with true fdatasync it's not obviously good for performance - it takes\n> > too long time to write 16Mb files and fills OS buffer cache with trash-:(\n> \n> True. But at least the write is (hopefully) being done at a\n> non-performance-critical time.\n\nSo you have non critical time every five minutes ?\nThose platforms that don't have fdatasync won't profit anyway.\nEven if the IO is done by postmaster the write to the disk has a severe\nimpact on concurrent other disk activity.\nIn a real 5 minutes checkpoint setup we are seriously talking about\n48 Mb at least, or you risc foreground log creation. On systems I know,\nthat means 100% busying the disk for at least 8 seconds.\n\nAndreas\n", "msg_date": "Fri, 9 Mar 2001 10:28:14 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: WAL does not recover gracefully from out-of-disk-sp\n\t ace" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Even with true fdatasync it's not obviously good for performance - it takes\n> too long time to write 16Mb files and fills OS buffer cache with trash-:(\n>> \n>> True. But at least the write is (hopefully) being done at a\n>> non-performance-critical time.\n\n> So you have non critical time every five minutes ?\n> Those platforms that don't have fdatasync won't profit anyway.\n\nYes they will; you're forgetting the cost of updating filesystem overhead.\n\nSuppose that we do not preallocate the log files. Each WAL fsync will\nrequire a write of the added data block(s), plus a write of at least one\nindirect block to record the allocation of new blocks to the file, plus\na write of the file's inode, plus a write of the cylinder group's\nfree-space bitmap. 
It takes extremely lucky placement of the file and\nindirect blocks to achieve less than four seeks per WAL block written.\nTotal cost to write a 16MB file: roughly eight thousand seeks, assuming\n8K block size. Even if we consider it safe to use fdatasync in this\nscenario, it will save only one of the four seeks, since the indirect\nblock and freespace map *must* be updated regardless.\n\nNow consider the preallocation approach. In the preallocation phase,\nwe write like mad and then fsync the file ONCE. This means *one* write\nof each affected data block, indirect block, freespace map block, and\nthe inode, versus one write of each data block and circa two thousand\nwrites of the others. Furthermore the kernel is free to schedule these\nwrites in some reasonable fashion, and so we may hope that something\nless than two thousand seeks will be used to do it.\n\nThen we come to the phase of actually writing the file. No indirect\nblock or freespace bitmap updates will occur. On a machine that\nimplements fdatasync, we write data blocks and nothing else. One\nseek per block written, possibly no seeks if the layout is good.\nEven if we don't have fdatasync, it's only two seeks per block written\n(the block and the inode only). So, at worst four thousand seeks in\nthis phase, at best much less than two thousand.\n\nBottom line is that it should take fewer seeks overall to do it this\nway, even on a machine without fdatasync, and even if we don't get to\ncount any benefit from doing a large part of the work outside the\ncritical path of transaction commit.\n\nAlso, given that modern systems *do* have fdatasync, I do not see why\nwe should not optimize for that case.\n\nIt is true that prezeroing the file will tend to fill the kernel's disk\ncache with entirely useless blocks. I don't know of any portable way\naround that, but even an unportable way might be worth #ifdefing in on\nplatforms where it works. 
Does anyone know a way of suppressing caching\nof outgoing blocks, or flushing them from the kernel's cache right away?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 10:07:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: WAL does not recover gracefully from out-of-disk-sp ace " } ]
[ { "msg_contents": "Hi,\n\nI'm wondering how safe it is to pass an uninitialized conn to\nPQfinish....\n\nI have set a SIGALRM handler that terminates my daemon and closes the\nconnection\nto Postgres if a time-out happens. However, I'm setting the alarm a few times\nbefore the connection to Postgres was established... which means that the \nconn pointer is not initialized yet and it will be passed to PQfinish in\nmy\nSIGALRM handler. I'm not sure how Postgres will handle that.... so far\nthat\ndoesn't seem to be causing any errors but I'm not sure what's actually\nhappening\nbehind the scenes...\n\nAny input would be appreciated.\n\nRegards,\nBoulat Khakimov\n\n-- \nNothing Like the Sun\n", "msg_date": "Fri, 09 Mar 2001 12:37:36 -0500", "msg_from": "Boulat Khakimov <boulat@inet-interactif.com>", "msg_from_op": true, "msg_subject": "PQfinish(const PGconn *conn) question" }, { "msg_contents": "Boulat Khakimov <boulat@inet-interactif.com> writes:\n> I'm wondering how safe it is to pass an uninitialized conn to\n> PQfinish....\n\nYou can pass a NULL pointer to PQfinish safely, if that's what you\nmeant. Passing a pointer to uninitialized memory seems like a bad\nidea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 12:50:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PQfinish(const PGconn *conn) question " } ]
[ { "msg_contents": "> > Even with true fdatasync it's not obviously good for performance - it takes\n> > too long time to write 16Mb files and fills OS buffer cache \n> with trash-:(\n> >> \n> >> True. But at least the write is (hopefully) being done at a\n> >> non-performance-critical time.\n> \n> > So you have non critical time every five minutes ?\n> > Those platforms that don't have fdatasync won't profit anyway.\n> \n> Yes they will; you're forgetting the cost of updating \n> filesystem overhead.\n\nI did have that in mind, but I thought that in effect the OS would \noptimize sparse file allocation somehow.\nDoing some tests however showed that while your variant is really good\nand saves 12 seconds, the performance is *very* poor for either variant.\n\nA short test shows that opening the file O_SYNC, and thus avoiding fsync()\nwould cut the effective time needed to sync write the xlog more than in half.\nOf course we would need to buffer >= 1 xlog page before write (or commit)\nto gain the full advantage.\n\nprewrite 0 + write and fsync:\t\t60.4 sec\nsparse file + write with O_SYNC:\t\t37.5 sec\nno prewrite + write with O_SYNC:\t\t36.8 sec\nprewrite 0 + write with O_SYNC:\t\t24.0 sec\n\nThese times include the prewrite when applicable on AIX with jfs.\nTestprogram attached. 
I may be overlooking something, though.\n\nAndreas", "msg_date": "Fri, 9 Mar 2001 18:41:15 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: WAL does not recover gracefully from out-of-dis\n\tk-sp ace" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> A short test shows, that opening the file O_SYNC, and thus avoiding fsync()\n> would cut the effective time needed to sync write the xlog more than in half.\n> Of course we would need to buffer >= 1 xlog page before write (or commit)\n> to gain the full advantage.\n\n> prewrite 0 + write and fsync:\t\t60.4 sec\n> sparse file + write with O_SYNC:\t\t37.5 sec\n> no prewrite + write with O_SYNC:\t\t36.8 sec\n> prewrite 0 + write with O_SYNC:\t\t24.0 sec\n\nThis seems odd. As near as I can tell, O_SYNC is simply a command to do\nfsync implicitly during each write call. It cannot save any I/O unless\nI'm missing something significant. Where is the performance difference\ncoming from?\n\nThe reason I'm inclined to question this is that what we want is not an\nfsync per write but an fsync per transaction, and we can't easily buffer\nall of a transaction's XLOG writes...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 12:48:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: WAL does not recover gracefully from out-of-dis k-sp ace " } ]
[ { "msg_contents": "\n> > A short test shows, that opening the file O_SYNC, and thus avoiding fsync()\n> > would cut the effective time needed to sync write the xlog more than in half.\n> > Of course we would need to buffer >= 1 xlog page before write (or commit)\n> > to gain the full advantage.\n> \n> > prewrite 0 + write and fsync:\t\t60.4 sec\n> > sparse file + write with O_SYNC:\t\t37.5 sec\n> > no prewrite + write with O_SYNC:\t\t36.8 sec\n> > prewrite 0 + write with O_SYNC:\t\t24.0 sec\n> \n> This seems odd. As near as I can tell, O_SYNC is simply a command to do\n> fsync implicitly during each write call. It cannot save any I/O unless\n> I'm missing something significant. Where is the performance difference\n> coming from?\n\nYes, odd, but sure very reproducible here.\n\n> The reason I'm inclined to question this is that what we want is not an\n> fsync per write but an fsync per transaction, and we can't easily buffer\n> all of a transaction's XLOG writes...\n\nYes, that is something to consider, but it would probably be sufficient to buffer \n1-3 optimal IO blocks (32-256k here).\nI assumed that with a few busy clients the fsyncs would come close to \none xlog page, but that is probably too few. \n\nAndreas\n", "msg_date": "Fri, 9 Mar 2001 19:00:04 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: WAL does not recover gracefully from out-of -dis k-sp ace " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> This seems odd. As near as I can tell, O_SYNC is simply a command to do\n>> fsync implicitly during each write call. It cannot save any I/O unless\n>> I'm missing something significant. Where is the performance difference\n>> coming from?\n\n> Yes, odd, but sure very reproducible here.\n\nI tried this on HPUX 10.20, which has not only O_SYNC but also O_DSYNC\n(defined to do the equivalent of fdatasync()), and got truly fascinating\nresults. 
Apparently, on this platform these flags change the kernel's\nbuffering behavior! Observe:\n\n$ gcc -Wall -O tfsync.c\n$ time a.out\n\nreal 1m0.32s\nuser 0m0.02s\nsys 0m16.16s\n$ gcc -Wall -O -DINIT_WRITE tfsync.c\n$ time a.out\n\nreal 1m15.11s\nuser 0m0.04s\nsys 0m32.76s\n\nNote the large amount of system time here, and the fact that the extra\ntime in INIT_WRITE is all system time. I have previously observed that\nfsync() on HPUX 10.20 appears to iterate through every kernel disk\nbuffer belonging to the file, presumably checking their dirtybits one by\none. The INIT_WRITE form loses because each fsync in the second loop\nhas to iterate through a full 16Mb worth of buffers, whereas without\nINIT_WRITE there will only be as many buffers as the amount of file\nwe've filled so far. (On this platform, it'd probably be a win to use\nlog segments smaller than 16Mb...) It's interesting that there's no\nvisible I/O cost here for the extra write pass --- the extra I/O must be\ncompletely overlapped with the extra system time.\n\n$ gcc -Wall -O -DINIT_WRITE -DUSE_OSYNC tfsync.c\n$ time a.out\n\nreal 0m45.04s\nuser 0m0.02s\nsys 0m0.83s\n\nWe just bought back almost all the system time. The only possible\nexplanation is that this way either doesn't keep the buffers from prior\nblocks, or does not scan them for dirtybits. I note that the open(2)\nman page is phrased so that O_SYNC is actually defined not to fsync the\nwhole file, but only the part you just wrote --- I wonder if it's\nactually implemented that way?\n\n$ gcc -Wall -O -DINIT_WRITE -DUSE_SPARSE tfsync.c\n$ time a.out\n\nreal 1m2.96s\nuser 0m0.02s\nsys 0m27.11s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_OSYNC -DUSE_SPARSE tfsync.c\n$ time a.out\n\nreal 1m1.34s\nuser 0m0.01s\nsys 0m0.59s\n\nSparse initialization wins a little in the non-O_SYNC case, but loses\nwhen compared with O_SYNC on. 
Not sure why; perhaps it alters the\namount of I/O that has to be done for indirect blocks?\n\n$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC tfsync.c\n$ time a.out\n\nreal 0m21.40s\nuser 0m0.02s\nsys 0m0.60s\n\nAnd the piece de resistance: O_DSYNC *actually works* here, even though\nI previously found that the fdatasync() call is stubbed to fsync() in\nlibc! This old HP box is built like a tank and has a similar lack of\nattention to noise level ;-) so I can very easily tell by ear that I am\nnot getting back-and-forth seeks in this last case, even if the timing\ndidn't prove it to be true.\n\n$ gcc -Wall -O -DUSE_ODSYNC tfsync.c\n$ time a.out\n\nreal 1m1.56s\nuser 0m0.02s\nsys 0m0.67s\n\nWithout INIT_WRITE, we are back to essentially the performance of fsync\neven though we use DSYNC. This is expected since the inode must be\nwritten to change the EOF value. Interestingly, the system time is\nsmall, whereas in my first example it was large; but the elapsed time\nis the same. Evidently the system time is nearly all overlapped with\nI/O in the first example.\n\nAt least on this platform, it would be definitely worthwhile to use\nO_DSYNC even if that meant fsync per write rather than per transaction.\nCan anyone else reproduce these results?\n\nI attach my modified version of Andreas' program. Note I do not believe\nhis assertion that close() implies fsync() --- on the machines I've\nused, it demonstrably does not sync. You'll also note that I made the\nlseek optional in the second loop. 
This appears to make no real\ndifference, so I didn't include timings with the lseek enabled.\n\n\t\t\tregards, tom lane\n\n\n\n#include <stdio.h>\n#include <fcntl.h>\n#include <unistd.h>\n/* #define USE_SPARSE */\n// #define INIT_WRITE\n// #define USE_OSYNC\n\nint main()\n{\n\tchar zbuffer[8192];\n\tint nbytes;\n\tint fd;\n\tchar *tpath=\"xlog_testfile\";\n\n\tfd = open(tpath, O_RDWR | O_CREAT | O_EXCL, S_IRUSR | S_IWUSR);\n\tif (fd < 0)\n\t\texit(1);\n#ifdef INIT_WRITE\n#ifdef USE_SPARSE\n/* the sparse file makes things worse here */\n\tlseek(fd, 16*1024*1024 - 1, SEEK_SET);\n\twrite(fd, 0, 1);\n#else\n\tmemset(zbuffer, '\\0', sizeof(zbuffer));\n\tfor (nbytes = 0; nbytes < 16*1024*1024; nbytes += sizeof(zbuffer))\n\t{\n\t\twrite(fd, zbuffer, sizeof(zbuffer));\n\t}\n#endif\n\tfsync(fd);\n#endif\n\t/* no fsync here, since close does it for us */\n\t/* You think so? */\n\tclose (fd);\n\n#ifdef USE_OSYNC\n\tfd = open(tpath, O_RDWR | O_SYNC, S_IRUSR | S_IWUSR);\n#else\n#ifdef USE_ODSYNC\n\tfd = open(tpath, O_RDWR | O_DSYNC, S_IRUSR | S_IWUSR);\n#else\n\tfd = open(tpath, O_RDWR , S_IRUSR | S_IWUSR);\n#endif\n#endif\n\tif (fd < 0)\n\t\texit(1);\n\n\tmemset(zbuffer, 'X', sizeof(zbuffer));\n\tnbytes = 0;\n\tdo \n\t{\n#ifdef USE_LSEEK\n\t\tlseek(fd, nbytes, SEEK_SET); \n#endif\n\t\twrite(fd, zbuffer, sizeof(zbuffer));\n#ifndef USE_OSYNC\n#ifndef USE_ODSYNC\n\t\tfsync(fd);\n#endif\n#endif\n\t\tnbytes += sizeof(zbuffer);\n\t} while (nbytes < 16*1024*1024);\n\n\tclose(fd);\n\n\treturn 0;\n\n}", "msg_date": "Fri, 09 Mar 2001 13:42:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from out-of -dis k-sp\n\tace" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> We just bought back almost all the system time. The only possible\n> explanation is that this way either doesn't keep the buffers from prior\n> blocks, or does not scan them for dirtybits. 
I note that the open(2)\n> man page is phrased so that O_SYNC is actually defined not to fsync the\n> whole file, but only the part you just wrote --- I wonder if it's\n> actually implemented that way?\n\nSure, why not? That's how it is implemented in the Linux kernel. If\nyou do a write with O_SYNC set, the write simply flushes out the\nbuffers it just modified. If you call fsync, the kernel has to walk\nthrough all the buffers looking for ones associated with the file in\nquestion.\n\nIan\n", "msg_date": "09 Mar 2001 13:30:38 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from out-of -dis k-sp\n\tace" } ]
[ { "msg_contents": "OK, I have finished the first version of my performance monitor. You\ncan get the TCL code at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/pgtop.tcl\n\nOnce people like it, I will add it to CVS /contrib. It will probably\nwork only on *BSD and Linux right now. It tries to find the USER and\nCOMMAND columns of ps, and gets the PostgreSQL username by looking at\nthe owner of the /tmp socket file or the owner of the postmaster\nprocess.\n\nI have added many comments, so hopefully people can make suggestions to\nget it working on more platforms. Right now, it doesn't show anything\nexcept 'ps' output. I also want to add the ability for it to sort on\ncertain columns.\n\nAttached is a screenshot for the curious. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Fri, 9 Mar 2001 13:03:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Performance monitor ready" } ]
[ { "msg_contents": "\n> > > Of course we would need to buffer >= 1 xlog page before write (or commit)\n> > > to gain the full advantage.\n> > \n> > > prewrite 0 + write and fsync:\t\t60.4 sec\n> > > sparse file + write with O_SYNC:\t\t37.5 sec\n> > > no prewrite + write with O_SYNC:\t\t36.8 sec\n> > > prewrite 0 + write with O_SYNC:\t\t24.0 sec\n\n> > The reason I'm inclined to question this is that what we want is not an\n> > fsync per write but an fsync per transaction, and we can't easily buffer\n> > all of a transaction's XLOG writes...\n> \n> Yes, that is something to consider, but it would probably be sufficient to buffer \n> 1-3 optimal IO blocks (32-256k here).\n> I assumed that with a few busy clients the fsyncs would come close to \n> one xlog page, but that is probably too few. \n\nI get best performance with either:\nprewrite + 16k write with O_SYNC:\t\t15.5 sec\nprewrite + 32k write with O_SYNC:\t\t11.5 sec\nno prewrite + 256k write with O_SYNC:\t\t 5.4 sec\n\nBut this 256k per tx would probably be very unrealistic, thus\nbest overall performance would probably be achieved with\na 32k (or tuneable) xlog buffer O_SYNC and prewrite. \n\nMaybe a good thing for 7.1.1 :-) \n\nAndreas\n", "msg_date": "Fri, 9 Mar 2001 19:44:49 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: WAL does not recover gracefully from out-of -dis k-sp ace " } ]
[ { "msg_contents": "\n> >> This seems odd. As near as I can tell, O_SYNC is simply a command to do\n> >> fsync implicitly during each write call. It cannot save any I/O unless\n> >> I'm missing something significant. Where is the performance difference\n> >> coming from?\n> \n> > Yes, odd, but sure very reproducible here.\n> \n> I tried this on HPUX 10.20, which has not only O_SYNC but also O_DSYNC\n\nAIX has O_DSYNC (which is _FDATASYNC) too, but I assumed O_SYNC \nwould be more portable. Now we have two, maybe it is more widespread\nthan I thought.\n\n> I attach my modified version of Andreas' program. Note I do \n> not believe his assertion that close() implies fsync() --- on the machines I've\n> used, it demonstrably does not sync.\n\nOk, I am not sure, but essentially do we need it to sync ? The OS sure isn't\nsupposed to notice after closing the file, that it ran out of disk space.\n\nAndreas\n", "msg_date": "Fri, 9 Mar 2001 20:01:28 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: WAL does not recover gracefully from ou\n\tt-of -dis k-sp ace" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> I attach my modified version of Andreas' program. Note I do not\n>> believe his assertion that close() implies fsync() --- on the\n>> machines I've used, it demonstrably does not sync.\n\n> Ok, I am not sure, but essentially do we need it to sync? The OS sure isn't\n> supposed to notice after closing the file, that it ran out of disk space.\n\nI believe that out-of-space would be reported during the writes, anyway,\nso that's not the issue.\n\nThe point of fsync'ing after the prewrite is to ensure that the indirect\nblocks are down on disk. 
If you trust fdatasync (or O_DSYNC) to write\nindirect blocks then it's not necessary --- but I'm pretty sure I heard\nsomewhere that some versions of fdatasync fail to guarantee that.\n\nIn any case, the real point of the prewrite is to move work out of the\ntransaction commit path, and so we're better off if we can sync the\nindirect blocks during prewrite.\n\n>> I tried this on HPUX 10.20, which has not only O_SYNC but also O_DSYNC\n\n> AIX has O_DSYNC (which is _FDATASYNC) too, but I assumed O_SYNC \n\nOh? What speeds do you get if you use that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 14:20:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" } ]
[ { "msg_contents": "Hi pgsql-hackers,\n\nI'm currently porting 7.0.3 to the HP MPE/iX OS to join my other ports of\nApache, BIND, sendmail, Perl, and others. I'm at the point where I'm trying to\nrun the \"make runcheck\" regression tests, and I've just run into a problem\nwhere I need to seek the advice of psql-hackers.\n\nMPE is a proprietary OS with a POSIX layer on top. The concept of POSIX uids\nand gids has been mapped to the concept of MPE usernames and MPE accountnames. \nAn example MPE username would be \"MGR.BIXBY\", and if you do a POSIX\ngetpwuid(getuid()), the contents of pw_name will be the same \"MGR.BIXBY\".\n\nThe fact that pw_name contains a period on MPE has been confusing to some\nprevious ports I've done, and it now appears PostgreSQL is being confused too. \nMake runcheck is dying in the initdb phase:\n\nCreating global relations in /blah/blah/blah\nERROR: pg_atoi: error in \"BIXBY\": can't parse \"BIXBY\"\nERROR: pg_atoi: error in \"BIXBY\": can't parse \"BIXBY\"\nsyntax error 25 : -> .\n\nI'm guessing that something tried to parse \"MGR.BIXBY\", saw the decimal point\ncharacter and passed the string to pg_atoi() thinking it's a number instead of\na name. This seems like a really bad omen hinting at trouble on a fundamental\nlevel.\n\nWhat are my options here?\n\n1) I'm screwed; go try porting MySQL instead. ;-)\n\n2) Somehow modify username parsing to be tolerant of the \".\" character? I was\nable to do this when I ported sendmail. Where should I be looking in the\nPostgreSQL source? Is this going to require language grammar changes?\n\n3) Always specify numeric uids instead of user names. Is this even possible?\n\nYour advice will be greatly appreciated. 
MPE users are currently whining on\ntheir mailing list about the lack of standard databases for the platform, and I\nwanted to surprise them by releasing a PostgreSQL port.\n\nThanks!\n-- \nmark@bixby.org\nRemainder of .sig suppressed to conserve scarce California electrons...\n", "msg_date": "Fri, 09 Mar 2001 11:11:07 -0800", "msg_from": "Mark Bixby <mark@bixby.org>", "msg_from_op": true, "msg_subject": "porting question: funky uid names?" }, { "msg_contents": "Mark Bixby <mark@bixby.org> writes:\n> MPE is a proprietary OS with a POSIX layer on top. The concept of\n> POSIX uids and gids has been mapped to the concept of MPE usernames\n> and MPE accountnames. An example MPE username would be \"MGR.BIXBY\",\n> and if you do a POSIX getpwuid(getuid()), the contents of pw_name will\n> be the same \"MGR.BIXBY\".\n\nHm. And what is returned in pw_uid?\n\nI think you are getting burnt by initdb's attempt to assign the postgres\nsuperuser's numeric ID to be the same as the Unix userid number of the\nuser running initdb. Look at the uses of pg_id in the initdb script,\nand experiment with running pg_id by hand to see what it produces.\n\nA quick and dirty experiment would be to run \"initdb -i 42\" (or\nwhatever) to override the result of pg_id. If that succeeds, the\nreal answer may be that pg_id needs a patch to behave reasonably on MPE.\n\nLet us know...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 14:31:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: porting question: funky uid names? " }, { "msg_contents": "\n\nTom Lane wrote:\n> \n> Mark Bixby <mark@bixby.org> writes:\n> > MPE is a proprietary OS with a POSIX layer on top. The concept of\n> > POSIX uids and gids has been mapped to the concept of MPE usernames\n> > and MPE accountnames. An example MPE username would be \"MGR.BIXBY\",\n> > and if you do a POSIX getpwuid(getuid()), the contents of pw_name will\n> > be the same \"MGR.BIXBY\".\n> \n> Hm. 
And what is returned in pw_uid?\n\nA valid numeric uid.\n\n> I think you are getting burnt by initdb's attempt to assign the postgres\n> superuser's numeric ID to be the same as the Unix userid number of the\n> user running initdb. Look at the uses of pg_id in the initdb script,\n> and experiment with running pg_id by hand to see what it produces.\n\npg_id without parameters returns uid=484(MGR.BIXBY), which matches what I get\nfrom MPE's native id command.\n\nThe pg_id -n and -u options behave as expected.\n\n> A quick and dirty experiment would be to run \"initdb -i 42\" (or\n> whatever) to override the result of pg_id. If that succeeds, the\n> real answer may be that pg_id needs a patch to behave reasonably on MPE.\n\nI just hacked src/test/regress/run_check.sh to invoke initdb with --show. The\nuser name/id is behaving \"correctly\" for an MPE machine:\n\nSUPERUSERNAME: MGR.BIXBY\nSUPERUSERID: 484\n\nThe initdb -i option will only override the SUPERUSERID, but it's already\ncorrect.\n-- \nmark@bixby.org\nRemainder of .sig suppressed to conserve scarce California electrons...\n", "msg_date": "Fri, 09 Mar 2001 11:58:55 -0800", "msg_from": "Mark Bixby <mark@bixby.org>", "msg_from_op": true, "msg_subject": "Re: porting question: funky uid names?" }, { "msg_contents": "Mark Bixby <mark@bixby.org> writes:\n> I just hacked src/test/regress/run_check.sh to invoke initdb with\n> --show. The user name/id is behaving \"correctly\" for an MPE machine:\n\n> SUPERUSERNAME: MGR.BIXBY\n> SUPERUSERID: 484\n\nOkay, so much for that theory.\n\nCan you set a breakpoint at elog() and provide a stack backtrace so we\ncan see where this is happening? I can't think where else in the code\nmight be affected, but obviously the problem is somewhere else...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 15:21:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: porting question: funky uid names? 
" }, { "msg_contents": "Mark Bixby writes:\n\n> Creating global relations in /blah/blah/blah\n> ERROR: pg_atoi: error in \"BIXBY\": can't parse \"BIXBY\"\n> ERROR: pg_atoi: error in \"BIXBY\": can't parse \"BIXBY\"\n> syntax error 25 : -> .\n\nI'm curious about that last line. Is that the shell complaining?\n\nThe offending command seems to be\n\ninsert OID = 0 ( POSTGRES PGUID t t t t _null_ _null_ )\n\nin the file global1.bki.source. (This is the file the creates the global\nrelations.) The POSTGRES and PGUID quantities are substituted when initdb\nruns:\n\ncat \"$GLOBAL\" \\\n | sed -e \"s/POSTGRES/$POSTGRES_SUPERUSERNAME/g\" \\\n -e \"s/PGUID/$POSTGRES_SUPERUSERID/g\" \\\n | \"$PGPATH\"/postgres $BACKENDARGS template1\n\nFor some reason the line probably ends up being\n\ninsert OID = 0 ( MGR BIXBY 484 t t t t _null_ _null_ )\n ^\nwhich causes the observed failure to parse BIXBY as user id. This brings\nus back to why the dot disappears, which seems to be related to the error\nmessage\n\nsyntax error 25 : -> .\n ^^^\n\nCan you try using a different a sed command (e.g, GNU sed)?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 21:57:45 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: porting question: funky uid names?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> cat \"$GLOBAL\" \\\n> | sed -e \"s/POSTGRES/$POSTGRES_SUPERUSERNAME/g\" \\\n> -e \"s/PGUID/$POSTGRES_SUPERUSERID/g\" \\\n> | \"$PGPATH\"/postgres $BACKENDARGS template1\n\n> For some reason the line probably ends up being\n\n> insert OID = 0 ( MGR BIXBY 484 t t t t _null_ _null_ )\n> ^\n> which causes the observed failure to parse BIXBY as user id.\n\nGood thought. 
Just looking at this, I wonder if we shouldn't flip the\norder of the sed patterns --- as is, won't it mess up if the superuser\nname contains PGUID?\n\nA further exercise would be to make it not foul up if the superuser name\ncontains '/'. I'd be kind of inclined to use ':' for the pattern\ndelimiter, since in normal Unix practice usernames can't contain colons\n(cf. passwd file format). Of course one doesn't generally put a slash\nin a username either, but I think it's physically possible to do it...\n\nBut none of these fully explain Mark's problem. If we knew where the\n\"syntax error 25 : -> .\" came from, we'd be closer to an answer.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 09 Mar 2001 16:14:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: porting question: funky uid names? " }, { "msg_contents": "\n\nTom Lane wrote:\n> But none of these fully explain Mark's problem. If we knew where the\n> \"syntax error 25 : -> .\" came from, we'd be closer to an answer.\n\nAfter scanning the source for \"syntax error\", line 126 of\nbackend/bootstrap/bootscanner.l seems to be the likely culprit.\n-- \nmark@bixby.org\nRemainder of .sig suppressed to conserve scarce California electrons...\n", "msg_date": "Fri, 09 Mar 2001 13:27:21 -0800", "msg_from": "Mark Bixby <mark@bixby.org>", "msg_from_op": true, "msg_subject": "Re: porting question: funky uid names?" }, { "msg_contents": "Mark Bixby <mark@bixby.org> writes:\n> Tom Lane wrote:\n>> But none of these fully explain Mark's problem. If we knew where the\n>> \"syntax error 25 : -> .\" came from, we'd be closer to an answer.\n\n> After scanning the source for \"syntax error\", line 126 of\n> backend/bootstrap/bootscanner.l seems to be the likely culprit.\n\nOh, of course: foo.bar is not a single token to the boot scanner.\nIt needs to be in quotes. 
Try this patch (line numbers are for 7.1\nbut probably OK for 7.0.*)\n\n*** src/include/catalog/pg_shadow.h~ Wed Jan 24 16:01:30 2001\n--- src/include/catalog/pg_shadow.h Fri Mar 9 16:57:53 2001\n***************\n*** 73,78 ****\n * user choices.\n * ----------------\n */\n! DATA(insert OID = 0 ( POSTGRES PGUID t t t t _null_ _null_ ));\n\n #endif /* PG_SHADOW_H */\n--- 73,78 ----\n * user choices.\n * ----------------\n */\n! DATA(insert OID = 0 ( \"POSTGRES\" PGUID t t t t _null_ _null_ ));\n\n #endif /* PG_SHADOW_H */\n\n\nYou'll need to rebuild global.bki (over in src/backend/catalog)\nafterwards, but the executables don't change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 17:08:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: porting question: funky uid names? " }, { "msg_contents": "\n\nTom Lane wrote:\n> \n> Mark Bixby <mark@bixby.org> writes:\n> > I just hacked src/test/regress/run_check.sh to invoke initdb with\n> > --show. The user name/id is behaving \"correctly\" for an MPE machine:\n> \n> > SUPERUSERNAME: MGR.BIXBY\n> > SUPERUSERID: 484\n> \n> Okay, so much for that theory.\n> \n> Can you set a breakpoint at elog() and provide a stack backtrace so we\n> can see where this is happening? I can't think where else in the code\n> might be affected, but obviously the problem is somewhere else...\n\nHere's a stack trace from the native MPE debugger (we don't have gdb support\nyet). I'm assuming that all results after the initdb failure should be\nsuspect, and that's possibly why pg_log wasn't created. 
I haven't tried\ntroubleshooting the pg_log problem yet until after I resolve the uid names\nissue.\n\n=============== Initializing check database instance ================\nDEBUG/iX C.25.06 \n\nDEBUG Intrinsic at: 129.0009d09c ?$START$\n$1 ($4b) nmdebug > b elog\nadded: NM [1] PROG 129.001ad7d8 elog\n$2 ($4b) nmdebug > c\nBreak at: NM [1] PROG 129.001ad7d8 elog\n$3 ($4b) nmdebug > tr\n PC=129.001ad7d8 elog\n* 0) SP=41843ef0 RP=129.0018f7a4 pg_atoi+$b4\n 1) SP=41843ef0 RP=129.00182994 int4in+$14\n 2) SP=41843e70 RP=129.0018296c ?int4in+$8\n export stub: 129.001aed28 $CODE$+$138\n 3) SP=41843e30 RP=129.001af428 fmgr+$98\n 4) SP=41843db0 RP=129.000c3354 InsertOneValue+$264\n 5) SP=41843cf0 RP=129.000c05d4 Int_yyparse+$924\n 6) SP=41843c70 RP=129.00000000 \n (end of NM stack)\n$4 ($4b) nmdebug > c\n=============== Starting regression postmaster ================\nRegression postmaster is running - PID=125239393 PGPORT=65432\n=============== Creating regression database... ================\nNOTICE: mdopen: couldn't open\n/BIXBY/PUB/src/postgresql-7.0.3-mpe/src/test/regress/tmp_check/data/pg_log: No such file or directory\nNOTICE: mdopen: couldn't open\n/BIXBY/PUB/src/postgresql-7.0.3-mpe/src/test/regress/tmp_check/data/pg_log: No such file or directory\npsql: FATAL 1: cannot open relation pg_log\ncreatedb: database creation failed\ncreatedb failed\nmake: *** [runcheck] Error 1\n-- \nmark@bixby.org\nRemainder of .sig suppressed to conserve scarce California electrons...\n", "msg_date": "Fri, 09 Mar 2001 14:10:24 -0800", "msg_from": "Mark Bixby <mark@bixby.org>", "msg_from_op": true, "msg_subject": "Re: porting question: funky uid names?" }, { "msg_contents": "\n\nTom Lane wrote:\n> Oh, of course: foo.bar is not a single token to the boot scanner.\n> It needs to be in quotes. 
DATA(insert OID = 0 ( \"POSTGRES\" PGUID t t t t _null_ _null_ ));\n> \n> #endif /* PG_SHADOW_H */\n> \n> You'll need to rebuild global.bki (over in src/backend/catalog)\n> afterwards, but the executables don't change.\n\nI modified pg_shadow.h as instructed and ran a make from src, and that rebuilt\nglobal1.bki.source in src/backend/catalog.\n\nHowever, when I did make runtest, it appears to install from\nsrc/backend/global1.bki.source which was still the old version. I modified\nthat old version by hand and reran make runtest. The uid name error has been\nsolved. Thanks!\n\nSo why is there a backend/global1.bki.source *and* a\nbackend/catalog/global1.bki.source?\n\nBut now runcheck dies during the install of PL/pgSQL, with createlang\ncomplaining about a missing lib/plpgsql.sl.\n\nI did do an MPE implementation of dynloader.c, but I was under the dim\nimpression this was only used for user-added functions, not core\nfunctionality. Am I mistaken? Are you dynaloading core functionality too?\n\nIt seems that plpgsql.sl didn't get built. Might be an autoconf issue, since\nquite frequently config scripts don't know about shared libraries on MPE. I\nwill investigate this further.\n--\nmark@bixby.org\nRemainder of .sig suppressed to conserve scarce California electrons...\n", "msg_date": "Fri, 09 Mar 2001 14:59:35 -0800", "msg_from": "Mark Bixby <mark@bixby.org>", "msg_from_op": true, "msg_subject": "Re: porting question: funky uid names?" }, { "msg_contents": "\n\nMark Bixby wrote:\n> It seems that plpgsql.sl didn't get built. Might be an autoconf issue, since\n> quite frequently config scripts don't know about shared libraries on MPE. I\n> will investigate this further.\n\nAh. I found src/Makefile.shlib and added the appropriate stuff.\n\nWoohoo! We have test output! 
The regression README was clear about how some\nplatform dependent errors can be expected, and how to code for these\ndifferences in the expected outputs.\n\nNow I'm off to examine the individual failures....\n\nMULTIBYTE=;export MULTIBYTE; \\\n/bin/sh ./run_check.sh hppa1.0-hp-mpeix\n=============== Removing old ./tmp_check directory ... ================\n=============== Create ./tmp_check directory ================\n=============== Installing new build into ./tmp_check ================\n=============== Initializing check database instance ================\n=============== Starting regression postmaster ================\nRegression postmaster is running - PID=125042790 PGPORT=65432\n=============== Creating regression database... ================\nCREATE DATABASE\n=============== Installing PL/pgSQL... ================\n=============== Running regression queries... ================\nparallel group1 (12 tests) ...\n boolean text name oid float4 varchar char int4 int2 float8 int8 numeric \n test boolean ... ok\n test char ... ok\n test name ... ok\n test varchar ... ok\n test text ... ok\n test int2 ... ok\n test int4 ... ok\n test int8 ... ok\n test oid ... ok\n test float4 ... ok\n test float8 ... FAILED\n test numeric ... ok\nsequential test strings ... ok\nsequential test numerology ... ok\nparallel group2 (15 tests) ...\n comments path polygon lseg point box reltime interval tinterval circle inet timestamp type_sanity opr_sanity oidjoins \n test point ... ok\n test lseg ... ok\n test box ... ok\n test path ... ok\n test polygon ... ok\n test circle ... ok\n test interval ... FAILED\n test timestamp ... FAILED\n test reltime ... ok\n test tinterval ... ok\n test inet ... ok\n test comments ... ok\n test oidjoins ... ok\n test type_sanity ... ok\n test opr_sanity ... ok\nsequential test abstime ... ok\nsequential test geometry ... FAILED\nsequential test horology ... FAILED\nsequential test create_function_1 ... ok\nsequential test create_type ... 
ok\nsequential test create_table ... ok\nsequential test create_function_2 ... ok\nsequential test copy ... ok\nparallel group3 (6 tests) ...\n create_aggregate create_operator triggers constraints create_misc create_index \n test constraints ... ok\n test triggers ... ok\n test create_misc ... ok\n test create_aggregate ... ok\n test create_operator ... ok\n test create_index ... ok\nsequential test create_view ... ok\nsequential test sanity_check ... ok\nsequential test errors ... ok\nsequential test select ... ok\nparallel group4 (16 tests) ...\n arrays union select_having transactions portals join select_implicit select_distinct_on subselect case random select_distinct select_into aggregates hash_index btree_index \n test select_into ... ok\n test select_distinct ... ok\n test select_distinct_on ... ok\n test select_implicit ... ok\n test select_having ... ok\n test subselect ... ok\n test union ... ok\n test case ... ok\n test join ... ok\n test aggregates ... ok\n test transactions ... ok\n test random ... ok\n test portals ... ok\n test arrays ... ok\n test btree_index ... ok\n test hash_index ... ok\nsequential test misc ... ok\nparallel group5 (5 tests) ...\n portals_p2 foreign_key rules alter_table select_views \n test select_views ... ok\n test alter_table ... ok\n test portals_p2 ... ok\n test rules ... ok\n test foreign_key ... ok\nparallel group6 (3 tests) ...\n temp limit plpgsql \n test limit ... ok\n test plpgsql ... FAILED\n test temp ... 
ok\n=============== Terminating regression postmaster ================\nACTUAL RESULTS OF REGRESSION TEST ARE NOW IN FILES run_check.out\nAND regress.out\n\nTo run the optional big test(s) too, type 'make bigcheck'\nThese big tests can take over an hour to complete\nThese actually are: numeric_big\n-- \nmark@bixby.org\nRemainder of .sig suppressed to conserve scarce California electrons...\n", "msg_date": "Fri, 09 Mar 2001 16:20:42 -0800", "msg_from": "Mark Bixby <mark@bixby.org>", "msg_from_op": true, "msg_subject": "Re: porting question: funky uid names?" }, { "msg_contents": "Mark Bixby <mark@bixby.org> writes:\n> So why is there a backend/global1.bki.source *and* a\n> backend/catalog/global1.bki.source?\n\nYou don't want to know ;-) ... it's all cleaned up for 7.1 anyway.\nI think in 7.0 you have to run make install in src/backend to get the\n.bki files installed.\n\n> But now runcheck dies during the install of PL/pgSQL, with createlang\n> complaining about a missing lib/plpgsql.sl.\n\n> I did do an MPE implementation of dynloader.c, but I was under the dim\n> impression this was only used for user-added functions, not core\n> functionality. Am I mistaken? Are you dynaloading core functionality too?\n\nNo, but the regress tests try to test plpgsql too ... you should be able\nto dike out the createlang call and have all tests except the plpgsql\nregress test work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 20:37:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: porting question: funky uid names? " }, { "msg_contents": "\n\nTom Lane wrote:\n> > But now runcheck dies during the install of PL/pgSQL, with createlang\n> > complaining about a missing lib/plpgsql.sl.\n> \n> > I did do an MPE implementation of dynloader.c, but I was under the dim\n> > impression this was only used for user-added functions, not core\n> > functionality. Am I mistaken? 
Are you dynaloading core functionality too?\n> \n> No, but the regress tests try to test plpgsql too ... you should be able\n> to dike out the createlang call and have all tests except the plpgsql\n> regress test work.\n\nIs it possible to re-run failing regression tests individually? It took\nsomewhere between 30-45 minutes for me to run the entire suite, and if I have\nto run the whole thing every time when I'm only trying to fix just a single\ntest, that will get old pretty fast, and so will I. ;-)\n\nThanks.\n-- \nmark@bixby.org\nRemainder of .sig suppressed to conserve scarce California electrons...\n", "msg_date": "Fri, 09 Mar 2001 17:41:14 -0800", "msg_from": "Mark Bixby <mark@bixby.org>", "msg_from_op": true, "msg_subject": "Re: porting question: funky uid names?" }, { "msg_contents": "Mark Bixby <mark@bixby.org> writes:\n> Is it possible to re-run failing regression tests individually?\n\nI believe so, but it's not very convenient in the \"runcheck\" mode, since\nthat normally wants to make a fresh install and start a temporary\npostmaster. 
Instead, do a real install, start a real postmaster, and\ndo \"make runtest\" to create the regression DB in the real installation.\nThen you can basically just do \"psql regression <foo.sql\" --- look at\nthe regression driver script to get the details of what switches to\npass and how to do the output comparison.\n\nThere are some order dependencies among the tests, but I think all the\nones you were having trouble with should be able to work this way in\nan end-state regression DB.\n\nAlso, rerunning the whole suite is much quicker this way, since you\ndon't have to go through install/initdb/start postmaster each time.\n\nBTW, the results you posted looked good --- with the exception of\nplpgsql, the failing tests all seemed to be ones that are notorious\nfor platform-dependent output.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 20:49:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: porting question: funky uid names? " } ]
[ { "msg_contents": "> > Even with true fdatasync it's not obviously good for \n> > performance - it takes too long time to write 16Mb files\n> > and fills OS buffer cache with trash-:(\n> \n> True. But at least the write is (hopefully) being done at a\n> non-performance-critical time.\n\nThere is no such hope: XLogWrite may be called from XLogFlush\n(at commit time and from bufmgr on replacements) *and* from XLogInsert\n- ie new log file may be required at any time.\n\n> > Probably, we need in separate process like LGWR (log \n> > writer) in Oracle.\n> \n> I think the create-ahead feature in the checkpoint maker should be\n> on by default.\n\nI'm not sure - it increases disk requirements.\n\n> > I considered this mostly as hint for OS about how log file should be\n> > allocated (to decrease fragmentation). Not sure how OSes \n> > use such hints but seek+write costs nothing.\n> \n> AFAIK, extant Unixes will not regard this as a hint at all; they'll\n> think it is a great opportunity to not store zeroes :-(.\n\nYes, but if I would write file system then I wouldn't allocate space\nfor file block by block - I would try to pre-allocate more than required\nby write(). So I hoped that seek+write is hint for OS: \"Hey, I need in\n16Mb file - try to make it as continuous as possible\". Don't know does\nit work, though -:)\n\n> One reason that I like logfile fill to be done separately is that it's\n> easier to convince ourselves that failure (due to out of disk space)\n> need not require elog(STOP) than if we have the same failure during\n> XLogWrite. 
You are right that we don't have time to consider \n> each STOP in the WAL code, but I think we should at least look at\n> that case...\n\nWhat problem with elog(STOP) in the absence of disk space?\nI think running out of disk is bad enough to stop DB operations.\n\nVadim\n", "msg_date": "Fri, 9 Mar 2001 12:00:49 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL does not recover gracefully from out-of-disk-sp\n\t ace" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> True. But at least the write is (hopefully) being done at a\n>> non-performance-critical time.\n\n> There is no such hope: XLogWrite may be called from XLogFlush\n> (at commit time and from bufmgr on replacements) *and* from XLogInsert\n> - ie new log file may be required at any time.\n\nSure, but if we have create-ahead enabled then there's a good chance of\nthe log files being made by the checkpoint process, rather than by\nworking backends. In that case the prefill is not time critical.\n\nIn any case, my tests so far show that prefilling and then writing with\nO_SYNC or better O_DSYNC is in fact faster than not prefilling; this\nmatches pretty well the handwaving argument I gave Andreas this morning.\n(With fsync() or fdatasync() it seems we're at the mercy of inefficient\nkernel algorithms, a factor I didn't consider before.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 15:30:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL does not recover gracefully from out-of-disk-sp ace " } ]
[ { "msg_contents": "> The reason I'm inclined to question this is that what we want \n> is not an fsync per write but an fsync per transaction, and we can't \n> easily buffer all of a transaction's XLOG writes...\n\nWAL keeps records in WAL buffers (wal-buffers parameter may be used to\nincrease # of buffers), so we can make write()-s buffered.\n\nSeems that my Solaris has fdatasync, so I'll test different approaches...\n\nVadim\n", "msg_date": "Fri, 9 Mar 2001 12:08:54 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: AW: WAL does not recover gracefully from out-of -dis k-sp ace " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Seems that my Solaris has fdatasync, so I'll test different approaches...\n\nA Sun guy told me that Solaris does this just the same way that HPUX\ndoes it: fsync() scans all kernel buffers for the file, but O_SYNC\ndoesn't, because it knows it only needs to sync the blocks covered\nby the write(). He didn't say about fdatasync/O_DSYNC but I bet the\nsame difference exists for those two.\n\nThe Linux 2.4 kernel allegedly is set up so that fsync() is smart enough\nto only look at dirty buffers, not all the buffers of the file. So\nthe performance tradeoffs would be different there. But on HPUX and\nprobably Solaris, O_DSYNC is likely to be a big win, unless we can find\na way to stop the kernel from buffering so much of the WAL files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 15:17:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: WAL does not recover gracefully from out-of -dis k-sp ace" } ]
[ { "msg_contents": "> > But needed if we want to get rid of vacuum and have savepoints.\n> \n> Hmm. How do you implement savepoints ? When there is rollback \n> to savepoint do you use xlog to undo all changes which the particular \n> transaction has done ? Hmmm it seems nice ... these resords are locked by \n> such transaction so that it can safely undo them :-)\n> Am I right ?\n\nYes, but there is no savepoints in 7.1 - hopefully in 7.2\n\n> But how can you use xlog to get rid of vacuum ? Do you treat \n> all delete log records as candidates for free space ?\n\nVaccum removes deleted records *and* records inserted by aborted\ntransactions - last ones will be removed by UNDO.\n\nVadim\n", "msg_date": "Fri, 9 Mar 2001 12:11:36 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL & SHM principles" } ]
[ { "msg_contents": "Jeff Lu writes:\n\n> $ make intrasend\n> gcc -o /c/inetpub/wwwroot/cgi-bin/intrasend.exe intrasend.c\n> intrautils.c -I/usr/\n> local/pgsql/include -L/usr/local/pgsql/lib\n\n-lpq\n\n> intrautils.c:7: warning: initialization makes integer from pointer without a\n> cas\n> t\n> /c/TEMP/ccXES02E.o(.text+0x32c):intrasend.c: undefined reference to\n> `PQconnectdb'\n[...]\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 21:58:43 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: undefined reference pq" }, { "msg_contents": "Hi,\n\nCan some one tell me what am I missing?\n\n$ make intrasend\ngcc -o /c/inetpub/wwwroot/cgi-bin/intrasend.exe intrasend.c\nintrautils.c -I/usr/\nlocal/pgsql/include -L/usr/local/pgsql/lib\nintrautils.c:7: warning: initialization makes integer from pointer without a\ncas\nt\n/c/TEMP/ccXES02E.o(.text+0x32c):intrasend.c: undefined reference to\n`PQconnectdb\n'\n/c/TEMP/ccXES02E.o(.text+0x346):intrasend.c: undefined reference to\n`PQstatus'\n/c/TEMP/ccXES02E.o(.text+0x37b):intrasend.c: undefined reference to\n`PQerrorMess\nage'\n/c/TEMP/ccXES02E.o(.text+0x3d1):intrasend.c: undefined reference to `PQexec'\n/c/TEMP/ccXES02E.o(.text+0x3eb):intrasend.c: undefined reference to\n`PQresultSta\ntus'\n/c/TEMP/ccXES02E.o(.text+0x41d):intrasend.c: undefined reference to\n`PQclear'\n/c/TEMP/ccXES02E.o(.text+0x42f):intrasend.c: undefined reference to\n`PQfinish'\n/c/TEMP/ccXES02E.o(.text+0x457):intrasend.c: undefined reference to\n`PQntuples'\n/c/TEMP/ccXES02E.o(.text+0x47c):intrasend.c: undefined reference to\n`PQgetvalue'\n\n/c/TEMP/ccXES02E.o(.text+0x4a3):intrasend.c: undefined reference to\n`PQclear'\n/c/TEMP/ccXES02E.o(.text+0x4b5):intrasend.c: undefined reference to\n`PQfinish'\ncollect2: ld returned 1 exit status\nmake: *** [intrasend] Error 1\n\n\n", "msg_date": "Tue, 20 Mar 2001 22:21:38 -0500", "msg_from": "\"Jeff Lu\" 
<jklcom@mindspring.com>", "msg_from_op": false, "msg_subject": "undefined reference pq" } ]
[ { "msg_contents": "Now you're talking about i18n, maybe someone could think about input and\noutput of dates in local language.\n\nAs fas as I can tell, PostgreSQL will only use English for dates, eg January,\nFebruary and weekdays, Monday, Tuesday etc. Not the local name.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 14.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Fri, 9 Mar 2001 22:58:02 +0100", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": true, "msg_subject": "Internationalized dates (was Internationalized error messages)" }, { "msg_contents": "On Fri, Mar 09, 2001 at 10:58:02PM +0100, Kaare Rasmussen wrote:\n> Now you're talking about i18n, maybe someone could think about input and\n> output of dates in local language.\n> \n> As fas as I can tell, PostgreSQL will only use English for dates, eg January,\n> February and weekdays, Monday, Tuesday etc. Not the local name.\n\n May be add special mask to to_char() and use locales for this, but I not\nsure. It isn't easy -- arbitrary size of strings, to_char's cache problems\n-- more and more difficult is parsing input with locales usage. \nThe other thing is speed...\n\n A solution is use number based dates without names :-(\n \n\t\t\tKarel\n\nPS. 
what other SQL engines support it?\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 12 Mar 2001 11:11:46 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Internationalized dates (was Internationalized error messages)" }, { "msg_contents": "On Mon, Mar 12, 2001 at 11:11:46AM +0100, Karel Zak wrote:\n> On Fri, Mar 09, 2001 at 10:58:02PM +0100, Kaare Rasmussen wrote:\n> > Now you're talking about i18n, maybe someone could think about input and\n> > output of dates in local language.\n> > \n> > As far as I can tell, PostgreSQL will only use English for dates, eg January,\n> > February and weekdays, Monday, Tuesday etc. Not the local name.\n> \n> Maybe add a special mask to to_char() and use locales for this, but I'm not\n> sure. It isn't easy -- arbitrary string sizes, to_char's cache problems\n> -- and parsing input using locales is even more difficult. \n> The other thing is speed...\n> \n> A solution is to use number-based dates without names :-(\n\nISO has published a standard on date/time formats, ISO 8601. \nDates look like \"2001-03-22\". Times look like \"12:47:63\". \nThe only unfortunate feature is their standard format for a \ndate/time: \"2001-03-22T12:47:63\". To me the ISO date format\nis far better than something involving month names. \n\nI'd like to see ISO 8601 as the default date format.\n\n--\nNathan Myers\nncm@zembu.com\n", "msg_date": "Mon, 12 Mar 2001 13:54:30 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Internationalized dates (was Internationalized error messages)" }, { "msg_contents": "> > A solution is to use number-based dates without names :-(\n> ISO has published a standard on date/time formats, ISO 8601.\n> Dates look like \"2001-03-22\". 
Times look like \"12:47:63\".\n> The only unfortunate feature is their standard format for a\n> date/time: \"2001-03-22T12:47:63\". To me the ISO date format\n> is far better than something involving month names.\n> I'd like to see ISO 8601 as the default data format.\n\nYou got your wish when 7.0 was released; the default date/time format is\n\"ISO\" which of course can be adjusted at build or run time.\n\nThe default date/time formats are compliant with ISO-8601 (or are at\nleast intended to be so). The detail regarding \"T\" as the time\ndesignator mentioned above is covered in 8601 and our usage, omitting\nthe \"T\", is allowed by the standard. At least as long as you agree that\nit is OK! The wording is actually:\n\n... By mutual agreement of the partners in information interchange, the\ncharacter [T] may be omitted...\n\nPresumably this can be covered under our documenting the behavior (and\nby compliance with common and expected usage), rather than requiring\n100% concurrence by all end users of the system ;)\n\n - Thomas\n", "msg_date": "Tue, 13 Mar 2001 02:46:11 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Internationalized dates (was Internationalized error messages)" } ]
[ { "msg_contents": "> I tried this on HPUX 10.20, which has not only O_SYNC but also O_DSYNC\n> (defined to do the equivalent of fdatasync()), and got truly \n> fascinating results. Apparently, on this platform these flags change\n> the kernel's buffering behavior! Observe:\n\nSolaris 2.6 fascinates even more!!!\n\n> $ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC tfsync.c\n> $ time a.out\n> \n> real 0m21.40s\n> user 0m0.02s\n> sys 0m0.60s\n\nbash-2.02# gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC tfsync.c \nbash-2.02# time a.out \n\nreal 0m4.242s\nuser 0m0.000s\nsys 0m0.450s\n\nIt's hard to believe... Writing with DSYNC takes the same time as\nfile initialization - ~2 sec.\nAlso, there is no difference if using 64k blocks.\nINIT_WRITE + OSYNC gives 52 sec for 8k blocks and 5.7 sec\nfor 256k ones, but INIT_WRITE + DSYNC doesn't depend on block\nsize.\nModern IDE drive? -:))\n\nProbably we should change code to use O_DSYNC if defined even without\nchanging XLogWrite to write more than 1 block at once (if requested)?\n\nAs for O_SYNC:\n\nbash-2.02# gcc -Wall -O -DINIT_WRITE tfsync.c \nbash-2.02# time a.out \n\nreal 0m54.786s\nuser 0m0.010s\nsys 0m10.820s\nbash-2.02# gcc -Wall -O -DINIT_WRITE -DUSE_OSYNC tfsync.c \nbash-2.02# time a.out \n\nreal 0m52.406s\nuser 0m0.020s\nsys 0m0.650s\n\nNot big win. Solaris has more optimized search for dirty blocks\nthan Tom' HP and Andreas' platform?\n\nVadim\n", "msg_date": "Fri, 9 Mar 2001 14:49:38 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: AW: AW: WAL does not recover gracefully from ou\n\tt-of -dis k-sp ace" } ]
[ { "msg_contents": "> $ gcc -Wall -O -DINIT_WRITE tfsync.c\n> $ time a.out\n> \n> real 1m15.11s\n> user 0m0.04s\n> sys 0m32.76s\n> \n> Note the large amount of system time here, and the fact that the extra\n> time in INIT_WRITE is all system time. I have previously \n> observed that fsync() on HPUX 10.20 appears to iterate through every\n> kernel disk buffer belonging to the file, presumably checking their \n> dirtybits one by one. The INIT_WRITE form loses because each fsync in\n> the second loop has to iterate through a full 16Mb worth of buffers,\n> whereas without INIT_WRITE there will only be as many buffers as the\n> amount of file we've filled so far. (On this platform, it'd probably\n> be a win to use log segments smaller than 16Mb...) It's interesting\n> that there's no visible I/O cost here for the extra write pass ---\n> the extra I/O must be completely overlapped with the extra system time.\n\nTom, could you run this test for different block sizes?\nUp to 32*8k?\nJust curious when you get something close to\n\n> $ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC tfsync.c\n> $ time a.out\n> \n> real 0m21.40s\n> user 0m0.02s\n> sys 0m0.60s\n\nVadim\n", "msg_date": "Fri, 9 Mar 2001 14:57:15 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: AW: AW: WAL does not recover gracefully from ou\n\tt-of -dis k-sp ace" } ]
[ { "msg_contents": "Assume a configuration problem that causes standalone backends to fail\nwithout doing anything. (I happened across this by tweaking global.bki\nin such a way that the superuser name entered into pg_shadow was\ndifferent from what getpwname returns. I don't have a real-world\nexample, but I'm sure there are some.) Unless the failure is so bad\nas to provoke a coredump, the backend will print a FATAL error message\nand then exit with exit status 0, because that's what it's supposed to\ndo under the postmaster.\n\nUnfortunately, given the exit status 0, initdb doesn't notice anything\nwrong. And since initdb carefully stuffs ALL stdout and stderr output\nfrom its standalone-backend calls into /dev/null, the user will never\nnotice anything wrong either, unless he's attuned enough to realize that\ninitdb should've taken longer.\n\nI think one part of the fix is to modify elog() so that a FATAL exit\nresults in exit status 1, not 0, if not IsUnderPostmaster. But this\nwill not help the user of initdb, who will still have no clue why\nthe initdb is failing, even if he turns on debug output from initdb.\n\nI tried modifying initdb along the lines of removing \"-o /dev/null\"\nfrom PGSQL_OPT, and then writing (eg)\n\necho \"CREATE TRIGGER pg_sync_pg_pwd AFTER INSERT OR UPDATE OR DELETE ON pg_shadow\" \\\n \"FOR EACH ROW EXECUTE PROCEDURE update_pg_pwd()\" \\\n | \"$PGPATH\"/postgres $PGSQL_OPT template1 2>&1 >/dev/null \\\n | grep -v ^DEBUG || exit_nicely\n\nso that all non-DEBUG messages from the standalone backend would appear\nin initdb's output. However, this does not work because then the ||\ntests the exit status of grep, not postgres. 
I don't think\n\n\t(postgres || exit_nicely) | grep\n\nwould work either --- the exit will occur in a subprocess.\n\nAt the very least we should hack initdb so that --debug removes\n\"-o /dev/null\" from PGSQL_OPT, but can you see any way to provide\nfiltered stderr output from the backend in the normal mode of operation?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 20:30:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Interesting failure mode for initdb" }, { "msg_contents": "Tom Lane writes:\n\n> I think one part of the fix is to modify elog() so that a FATAL exit\n> results in exit status 1, not 0, if not IsUnderPostmaster.\n\nRight.\n\n> At the very least we should hack initdb so that --debug removes\n> \"-o /dev/null\" from PGSQL_OPT, but can you see any way to provide\n> filtered stderr output from the backend in the normal mode of operation?\n\nI've removed some of the >/dev/null's and the only undesired output I get\nis of this form:\n\nEnabling unlimited row width for system tables.\n\nPOSTGRES backend interactive interface\n$Revision: 1.208 $ $Date: 2001/02/24 02:04:51 $\n\nbackend> backend>\nPOSTGRES backend interactive interface\n$Revision: 1.208 $ $Date: 2001/02/24 02:04:51 $\n\nbackend> backend> Creating system views.\n\nPOSTGRES backend interactive interface\n$Revision: 1.208 $ $Date: 2001/02/24 02:04:51 $\n\nISTM that the backend shouldn't print a prompt when it's non-interactive.\nThen maybe we don't need to filter the output at all.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Sat, 10 Mar 2001 10:38:36 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Interesting failure mode for initdb" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I've removed some of the >/dev/null's and the only undesired output I get\n> is of this form:\n\n> POSTGRES backend interactive interface\n> 
$Revision: 1.208 $ $Date: 2001/02/24 02:04:51 $\n\n> backend> backend>\n> POSTGRES backend interactive interface\n> $Revision: 1.208 $ $Date: 2001/02/24 02:04:51 $\n\nThat stuff comes out on stdout; all of the interesting stuff is on\nstderr. I don't have a problem with routing stdout to /dev/null.\n\n> ISTM that the backend shouldn't print a prompt when it's non-interactive.\n\nMore trouble than it's worth, I think ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Mar 2001 10:55:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Interesting failure mode for initdb " }, { "msg_contents": "I said:\n> That stuff comes out on stdout; all of the interesting stuff is on\n> stderr.\n\nActually, given the -o option all of the interesting stuff will go to\nwherever -o says.\n\nAt this stage of the release cycle I suppose we must resist the\ntemptation to define -o '|command' as doing a popen(), but maybe\nfor 7.2 something could be done with \"-o '|grep -v ^DEBUG'\" ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Mar 2001 11:03:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Interesting failure mode for initdb " } ]
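The initdb thread above leaves one question open: how to filter a standalone backend's output through grep while still acting on the backend's exit status, given that `postgres ... | grep -v ^DEBUG || exit_nicely` tests grep's status, not postgres's. One portable workaround is to carry the status across the pipe in a temporary file. This is only an illustrative sketch: `fake_backend` is a hypothetical stand-in for the real `postgres $PGSQL_OPT template1` call, and the temp-file trick is a general shell technique, not something the thread itself settled on.

```shell
# Sketch of a POSIX-sh workaround for the "|| tests grep's status" problem
# discussed above.  fake_backend is a hypothetical stand-in for the real
# standalone-backend invocation; it emits DEBUG noise plus a FATAL line
# and exits nonzero, like a misconfigured backend would.
fake_backend() {
    echo "DEBUG:  bootstrapping"
    echo "FATAL:  configuration problem"
    return 1
}

# Run the backend with its output filtered, but save ITS exit status in a
# temp file so the pipeline's overall status (grep's) cannot mask a failure.
rcfile=$(mktemp)
{ fake_backend; echo $? > "$rcfile"; } | grep -v '^DEBUG'
backend_status=$(cat "$rcfile")
rm -f "$rcfile"

if [ "$backend_status" -ne 0 ]; then
    echo "backend failed with status $backend_status"
fi
```

In bash one could instead read `${PIPESTATUS[0]}` immediately after the pipeline, but that was not an option for the portable Bourne-sh scripts initdb targeted at the time.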
[ { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Tom, could you run this test for different block sizes?\n> Up to 32*8k?\n>> \n>> You mean changing the amount written per write(), while holding the\n>> total file size constant, right?\n\n> Yes. Currently XLogWrite writes 8k blocks one by one. From what I've seen\n> on Solaris we can use O_DSYNC there without changing XLogWrite to\n> write() more than 1 block (if > 1 block is available for writing).\n> But on other platforms write(BLOCKS_TO_WRITE * 8k) + fsync() probably will\n> be\n> faster than BLOCKS_TO_WRITE * write(8k) (for file opened with O_DSYNC)\n> if BLOCKS_TO_WRITE > 1.\n> I just wonder with what BLOCKS_TO_WRITE we'll see same times for both\n> approaches.\n\nOkay, I changed the program to\n\tchar zbuffer[8192 * BLOCKS];\n(all else the same)\n\nand on HPUX 10.20 I get\n\n$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=1 tfsync.c\n$ time a.out\n\nreal 1m18.48s\nuser 0m0.04s\nsys 0m34.69s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=4 tfsync.c\n$ time a.out\n\nreal 0m35.10s\nuser 0m0.01s\nsys 0m9.08s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=8 tfsync.c\n$ time a.out\n\nreal 0m29.75s\nuser 0m0.01s\nsys 0m5.23s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=32 tfsync.c\n$ time a.out\n\nreal 0m22.77s\nuser 0m0.01s\nsys 0m1.80s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=64 tfsync.c\n$ time a.out\n\nreal 0m22.08s\nuser 0m0.01s\nsys 0m1.25s\n\n\n$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=1 tfsync.c\n$ time a.out\n\nreal 0m20.64s\nuser 0m0.02s\nsys 0m0.67s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=4 tfsync.c\n$ time a.out\n\nreal 0m20.72s\nuser 0m0.01s\nsys 0m0.57s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=32 tfsync.c\n$ time a.out\n\nreal 0m20.59s\nuser 0m0.01s\nsys 0m0.61s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=64 tfsync.c\n$ time a.out\n\nreal 0m20.86s\nuser 0m0.01s\nsys 0m0.69s\n\nSo I also see that there is no benefit to 
writing more than one block at\na time with ODSYNC. And even at half a meg per write, DSYNC is slower\nthan ODSYNC with 8K per write! Note the fairly high system-time\nconsumption for DSYNC, too. I think this is not so much a matter of a\nreally good ODSYNC implementation, as a really bad DSYNC one ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 21:20:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" }, { "msg_contents": "More numbers, these from a Powerbook G3 laptop running Linux 2.2:\n\n[tgl@g3 tmp]$ uname -a\nLinux g3 2.2.18-4hpmac #1 Thu Dec 21 15:16:15 MST 2000 ppc unknown\n\n[tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=1 tfsync.c\n[tgl@g3 tmp]$ time ./a.out\n\nreal\t0m32.418s\nuser\t0m0.020s\nsys\t0m14.020s\n\n[tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=4 tfsync.c\n[tgl@g3 tmp]$ time ./a.out\n\nreal\t0m10.894s\nuser\t0m0.000s\nsys\t0m4.030s\n\n[tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=8 tfsync.c\n[tgl@g3 tmp]$ time ./a.out\n\nreal\t0m7.211s\nuser\t0m0.000s\nsys\t0m2.200s\n\n[tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=32 tfsync.c\n[tgl@g3 tmp]$ time ./a.out\n\nreal\t0m4.441s\nuser\t0m0.020s\nsys\t0m0.870s\n\n[tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=64 tfsync.c\n[tgl@g3 tmp]$ time ./a.out\n\nreal\t0m4.488s\nuser\t0m0.000s\nsys\t0m0.640s\n\n[tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=1 tfsync.c\n[tgl@g3 tmp]$ time ./a.out\n\nreal\t0m3.725s\nuser\t0m0.000s\nsys\t0m0.310s\n\n[tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=4 tfsync.c\n[tgl@g3 tmp]$ time ./a.out\n\nreal\t0m3.785s\nuser\t0m0.000s\nsys\t0m0.290s\n\n[tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=64 tfsync.c\n[tgl@g3 tmp]$ time ./a.out\n\nreal\t0m3.753s\nuser\t0m0.010s\nsys\t0m0.300s\n\n\nStarting to look like we should just use 
ODSYNC where available, and\nforget about dumping more per write ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 21:41:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" }, { "msg_contents": "On Saturday 10 March 2001 08:41, Tom Lane wrote:\n> More numbers, these from a Powerbook G3 laptop running Linux 2.2:\n\nEeegghhh. Sorry... But where did you get O_DSYNC on Linux?????\nMaybe here?\n\nbits/fcntl.h: # define O_DSYNC O_SYNC\n\nThere is no any O_DSYNC in the kernel... Even in the latest 2.4.x.\n\n> [tgl@g3 tmp]$ uname -a\n> Linux g3 2.2.18-4hpmac #1 Thu Dec 21 15:16:15 MST 2000 ppc unknown\n>\n> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=1 tfsync.c\n> [tgl@g3 tmp]$ time ./a.out\n>\n> real\t0m32.418s\n> user\t0m0.020s\n> sys\t0m14.020s\n>\n> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=4 tfsync.c\n> [tgl@g3 tmp]$ time ./a.out\n>\n> real\t0m10.894s\n> user\t0m0.000s\n> sys\t0m4.030s\n>\n> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=8 tfsync.c\n> [tgl@g3 tmp]$ time ./a.out\n>\n> real\t0m7.211s\n> user\t0m0.000s\n> sys\t0m2.200s\n>\n> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=32 tfsync.c\n> [tgl@g3 tmp]$ time ./a.out\n>\n> real\t0m4.441s\n> user\t0m0.020s\n> sys\t0m0.870s\n>\n> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=64 tfsync.c\n> [tgl@g3 tmp]$ time ./a.out\n>\n> real\t0m4.488s\n> user\t0m0.000s\n> sys\t0m0.640s\n>\n> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=1 tfsync.c\n> [tgl@g3 tmp]$ time ./a.out\n>\n> real\t0m3.725s\n> user\t0m0.000s\n> sys\t0m0.310s\n>\n> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=4 tfsync.c\n> [tgl@g3 tmp]$ time ./a.out\n>\n> real\t0m3.785s\n> user\t0m0.000s\n> sys\t0m0.290s\n>\n> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=64 tfsync.c\n> [tgl@g3 tmp]$ time 
./a.out\n>\n> real\t0m3.753s\n> user\t0m0.010s\n> sys\t0m0.300s\n>\n>\n> Starting to look like we should just use ODSYNC where available, and\n> forget about dumping more per write ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Sat, 10 Mar 2001 13:17:11 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" }, { "msg_contents": "Denis Perchine <dyp@perchine.com> writes:\n> On Saturday 10 March 2001 08:41, Tom Lane wrote:\n>> More numbers, these from a Powerbook G3 laptop running Linux 2.2:\n\n> Eeegghhh. Sorry... But where did you get O_DSYNC on Linux?????\n\n> bits/fcntl.h: # define O_DSYNC O_SYNC\n\nHm, must be. Okay, so those two sets of numbers should be taken as\nfsync() and O_SYNC respectively. Still the conclusion seems pretty\nclear: the open() options are way more efficient than calling fsync()\nseparately.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Mar 2001 11:12:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" } ]
[ { "msg_contents": "> $ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=1 tfsync.c\n ^^^^^^^^^^^\nYou should use -DUSE_OSYNC to test O_SYNC.\nSo you've tested N * write() + fsync(), exactly what I've asked -:)\n\n> So I also see that there is no benefit to writing more than \n> one block at a time with ODSYNC. And even at half a meg per write,\n> DSYNC is slower than ODSYNC with 8K per write! Note the fairly high\n> system-time consumption for DSYNC, too. I think this is not so much\n> a matter of a really good ODSYNC implementation, as a really bad DSYNC\n> one ...\n\nSo seems we can use O_DSYNC without losing log write performance\ncomparing with write() + fsync. Though, we didn't tested write() +\nfdatasync()\nyet...\n\nVadim\n", "msg_date": "Fri, 9 Mar 2001 18:31:36 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: AW: AW: WAL does not recover gracefully from ou\n\tt-of -dis k-sp ace" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> $ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=1 tfsync.c\n> ^^^^^^^^^^^\n> You should use -DUSE_OSYNC to test O_SYNC.\n\nOoops ... let's hear it for cut-and-paste, and for sharp-eyed readers!\n\nJust for completeness, here are the results for O_SYNC:\n\n$ gcc -Wall -O -DINIT_WRITE -DUSE_OSYNC -DBLOCKS=1 tfsync.c\n$ time a.out\n\nreal 0m43.44s\nuser 0m0.02s\nsys 0m0.74s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_OSYNC -DBLOCKS=4 tfsync.c\n$ time a.out\n\nreal 0m26.38s\nuser 0m0.01s\nsys 0m0.59s\n$ gcc -Wall -O -DINIT_WRITE -DUSE_OSYNC -DBLOCKS=8 tfsync.c\n$ time a.out\n\nreal 0m23.86s\nuser 0m0.01s\nsys 0m0.59s\n\n$ gcc -Wall -O -DINIT_WRITE -DUSE_OSYNC -DBLOCKS=64 tfsync.c\n$ time a.out\n\nreal 0m22.93s\nuser 0m0.01s\nsys 0m0.66s\n\nBetter than fsync(), but still not up to O_DSYNC.\n\n> So seems we can use O_DSYNC without losing log write performance\n> comparing with write() + fsync. 
Though, we didn't tested write() +\n> fdatasync() yet...\n\nGood point, we should check fdatasync() too --- although I have no\nmachines where it's different from fsync().\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 21:53:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" } ]
[ { "msg_contents": "> Starting to look like we should just use ODSYNC where available, and\n> forget about dumping more per write ...\n\nI'll run these tests on RedHat 7.0 tomorrow.\n\nVadim\n", "msg_date": "Fri, 9 Mar 2001 18:45:36 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: AW: AW: WAL does not recover gracefully from out-of-disk-space" } ]
[ { "msg_contents": "> > So seems we can use O_DSYNC without losing log write performance\n> > comparing with write() + fsync. Though, we didn't tested write() +\n> > fdatasync() yet...\n> \n> Good point, we should check fdatasync() too --- although I have no\n> machines where it's different from fsync().\n\nI've tested it on Solaris - not better than O_DSYNC (expected, taking\nin account that O_DSYNC results don't depend on block counts).\n\nOk, I've made changes in xlog.c and run tests: 50 clients inserted\n(int4, text[1-256]) into 50 tables,\n-B 16384, -wal_buffers 256, -wal_files 0.\n\nFSYNC: 257tps\nO_DSYNC: 333tps \n\nJust(?) 30% faster, -:(\nBut I had no ability to place log on separate disk, yet...\n\nVadim\n", "msg_date": "Fri, 9 Mar 2001 20:15:14 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: AW: AW: WAL does not recover gracefully from ou\n\tt-of -dis k-sp ace" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Ok, I've made changes in xlog.c and run tests:\n\nCould you send me your diffs?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Mar 2001 23:33:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" }, { "msg_contents": "> > Ok, I've made changes in xlog.c and run tests:\n> \n> Could you send me your diffs?\n\nSorry, Monday only.\n\nVadim\n\n\n", "msg_date": "Sat, 10 Mar 2001 23:24:19 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" } ]
[ { "msg_contents": "Is this page \n\n http://members.fortunecity.com/nymia/postgres/dox/backend/html/\n\ncommon knowledge? It appears to be an automatically-generated\ncross-reference documentation web site. My impression is that\nappropriately-marked comments in the code get extracted to the \nweb pages, too, so it is also a way to automate internal \ndocumentation.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sat, 10 Mar 2001 14:11:44 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": true, "msg_subject": "doxygen & PG" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> Is this page \n> http://members.fortunecity.com/nymia/postgres/dox/backend/html/\n> common knowledge?\n\nInteresting, but bizarrely incomplete. (Yeah, we have only ~100\nstruct types ... sure ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Mar 2001 18:29:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: doxygen & PG " }, { "msg_contents": "On Sat, Mar 10, 2001 at 06:29:37PM -0500, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > Is this page \n> > http://members.fortunecity.com/nymia/postgres/dox/backend/html/\n> > common knowledge?\n> \n> Interesting, but bizarrely incomplete. (Yeah, we have only ~100\n> struct types ... sure ...)\n\nIt does say \"version 0.0.1\". \n\nWhat was interesting to me is that the interface seems a lot more \nhelpful than the current CVS web gateway. If it were to be completed, \nand could be kept up to date automatically, something like it could \nbe very useful.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sat, 10 Mar 2001 15:48:34 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": true, "msg_subject": "Re: doxygen & PG" }, { "msg_contents": "The site mentioned was created by me. I used doxygen to create those html\nfiles. And it's just the first stab. 
It doesn't have doxygen tags yet\nthat's why it looks like that.\n\nThe reason why I made it was to make it easier for me ( and others as well )\nto read the code though. So far, I've learned a lot using this technique.\n\nThere is another one I'm working on and it's at\nhttp://members.fortunecity.com/nymia/vsta/boot_layout.html\n\n\n\n----- Original Message -----\nFrom: Nathan Myers <ncm@zembu.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Saturday, March 10, 2001 3:48 PM\nSubject: Re: [HACKERS] doxygen & PG\n\n\n> On Sat, Mar 10, 2001 at 06:29:37PM -0500, Tom Lane wrote:\n> > ncm@zembu.com (Nathan Myers) writes:\n> > > Is this page\n> > > http://members.fortunecity.com/nymia/postgres/dox/backend/html/\n> > > common knowledge?\n> >\n> > Interesting, but bizarrely incomplete. (Yeah, we have only ~100\n> > struct types ... sure ...)\n>\n> It does say \"version 0.0.1\".\n>\n> What was interesting to me is that the interface seems a lot more\n> helpful than the current CVS web gateway. If it were to be completed,\n> and could be kept up to date automatically, something like it could\n> be very useful.\n>\n> Nathan Myers\n> ncm@zembu.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Sat, 10 Mar 2001 16:51:22 -0800", "msg_from": "\"Nymia\" <nymia@qwest.net>", "msg_from_op": false, "msg_subject": "Re: doxygen & PG" } ]
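For anyone wanting to reproduce such cross-reference pages, a doxygen configuration along the following lines would do it. This is a guess at roughly the setup used, since the thread shows no actual Doxyfile; the key option is EXTRACT_ALL, which makes doxygen index sources that, as noted above, carry no doxygen tags at all.

```
# Hypothetical Doxyfile fragment (not from the thread) for untagged C sources
PROJECT_NAME   = "PostgreSQL backend"
INPUT          = src/backend src/include
RECURSIVE      = YES
EXTRACT_ALL    = YES   # document everything, even without doxygen comments
SOURCE_BROWSER = YES   # generate the cross-referenced source pages
GENERATE_HTML  = YES
GENERATE_LATEX = NO
```

Running `doxygen Doxyfile` over a checkout would then emit HTML resembling the pages linked above, which also suggests a way to keep such documentation regenerated automatically from CVS.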
[ { "msg_contents": "At the end of backend/utils/adt/datetime.c, there is some fairly ugly\ncode that is conditionally compiled on\n\n#if defined(linux) && defined(__powerpc__)\n\nDo we still need this? The standard versions of TIMESTAMP_IS_CURRENT\nand TIMESTAMP_IS_EPOCH appear to work just fine on my Powerbook G3\nrunning Linux 2.2.18 (LinuxPPC 2000 Q4 distro).\n\nI see from the CVS logs that Tatsuo originally introduced this code\non 1997/07/29 (at the time it lived in dt.c and was called\ndatetime_is_current & datetime_is_epoch). I suppose that it must have\nbeen meant to work around some bug in old versions of gcc for PPC.\nBut it seems to me to be a net decrease in portability --- it's assuming\nthat the symbolic constants DBL_MIN and -DBL_MIN will produce particular\nbit patterns --- so I'd like to remove it unless someone knows of a\nrecent Linux/PPC release that still needs it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Mar 2001 21:29:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Do we still need PowerPC-specific timestamp_is_current/epoch?" }, { "msg_contents": "> At the end of backend/utils/adt/datetime.c, there is some fairly ugly\n> code that is conditionally compiled on\n> \n> #if defined(linux) && defined(__powerpc__)\n> \n> Do we still need this? The standard versions of TIMESTAMP_IS_CURRENT\n> and TIMESTAMP_IS_EPOCH appear to work just fine on my Powerbook G3\n> running Linux 2.2.18 (LinuxPPC 2000 Q4 distro).\n> \n> I see from the CVS logs that Tatsuo originally introduced this code\n> on 1997/07/29 (at the time it lived in dt.c and was called\n> datetime_is_current & datetime_is_epoch). 
I suppose that it must have\n> been meant to work around some bug in old versions of gcc for PPC.\n\nYes.\n\n> But it seems to me to be a net decrease in portability --- it's assuming\n> that the symbolic constants DBL_MIN and -DBL_MIN will produce particular\n> bit patterns --- so I'd like to remove it unless someone knows of a\n> recent Linux/PPC release that still needs it.\n\nLet me check if my Linux/PPC still needs the workaround.\nBTW, what about MkLinux? Anybody tried recent DR5 release?\n--\nTatsuo Ishii\n", "msg_date": "Sun, 11 Mar 2001 12:30:10 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Do we still need PowerPC-specific\n timestamp_is_current/epoch?" }, { "msg_contents": "> At the end of backend/utils/adt/datetime.c, there is some fairly ugly\n> code that is conditionally compiled on\n> \n> #if defined(linux) && defined(__powerpc__)\n> \n> Do we still need this? The standard versions of TIMESTAMP_IS_CURRENT\n> and TIMESTAMP_IS_EPOCH appear to work just fine on my Powerbook G3\n> running Linux 2.2.18 (LinuxPPC 2000 Q4 distro).\n> \n> I see from the CVS logs that Tatsuo originally introduced this code\n> on 1997/07/29 (at the time it lived in dt.c and was called\n> datetime_is_current & datetime_is_epoch). 
I suppose that it must have\n> been meant to work around some bug in old versions of gcc for PPC.\n> But it seems to me to be a net decrease in portability --- it's assuming\n> that the symbolic constants DBL_MIN and -DBL_MIN will produce particular\n> bit patterns --- so I'd like to remove it unless someone knows of a\n> recent Linux/PPC release that still needs it.\n\nAfter further research, I remembered that we used to have \"DBL_MIN\ncheck\" in configure back to 6.4.2:\n\nAC_MSG_CHECKING(for good DBL_MIN)\nAC_TRY_RUN([#include <stdlib.h>\n#include <math.h>\n#ifdef HAVE_FLOAT_H\n# include <float.h>\n#endif\nmain() { double d = DBL_MIN; if (d != DBL_MIN) exit(-1); else exit(0); }],\n\tAC_MSG_RESULT(yes),\n\t[AC_DEFINE(HAVE_DBL_MIN_PROBLEM) AC_MSG_RESULT(no)],\n\tAC_MSG_RESULT(assuming ok on target machine))\n\nI don't know why it was removed, but I think we'd better revive the\nchecking and replace\n\n#if defined(linux) && defined(__powerpc__)\n\nwith\n\n#ifdef HAVE_DBL_MIN_PROBLEM\n\nWhat do you think?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 13 Mar 2001 09:57:12 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Do we still need PowerPC-specific\n timestamp_is_current/epoch?" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> After further research, I remembered that we used to have \"DBL_MIN\n> check\" in configure back to 6.4.2:\n> I don't know why it was removed,\n\nHmm. 
Digging in the CVS logs shows that it was removed by Bruce in\nconfigure.in version 1.262, 1999/07/18, with the unedifying log message\n\"configure cleanup\".\n\nA guess is that he took it out because it wasn't being used anywhere.\n\n> but I think we'd better to revive the checking and replace\n> #if defined(linux) && defined(__powerpc__)\n> with\n> #ifdef HAVE_DBL_MIN_PROBLEM\n> What do you think?\n\nI think that is a bad idea, since that code is guaranteed to fail on any\nmachine where the representation of double is at all different from a\nPPC's. (Even if you are willing to assume that the entire world uses\nIEEE floats these days, what of endianness?)\n\nWe could revive the configure test and do\n\n#if defined(HAVE_DBL_MIN_PROBLEM) && defined(__powerpc__)\n\nHowever, I really wonder whether there is any point. It may be worth\nnoting that the original version of the patch read \"#if ... defined(PPC)\".\nIt's quite likely that the current test, \"... defined(__powerpc__)\",\ndoesn't even fire on the old compiler that the patch is intended for.\nIf so, this is dead code and has been since release 6.5.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 20:47:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Do we still need PowerPC-specific timestamp_is_current/epoch? " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > After further research, I remembered that we used to have \"DB_MIN\n> > check\" in configure back to 6.4.2:\n> > I don't know wht it was removed,\n> \n> Hmm. 
Digging in the CVS logs shows that it was removed by Bruce in\n> configure.in version 1.262, 1999/07/18, with the unedifying log message\n> \"configure cleanup\".\n> \n> A guess is that he took it out because it wasn't being used anywhere.\n> \n> > but I think we'd better to revive the checking and replace\n> > #if defined(linux) && defined(__powerpc__)\n> > with\n> > #ifdef HAVE_DBL_MIN_PROBLEM\n> > What do you think?\n> \n> I think that is a bad idea, since that code is guaranteed to fail on any\n> machine where the representation of double is at all different from a\n> PPC's. (Even if you are willing to assume that the entire world uses\n> IEEE floats these days, what of endianness?)\n> \n> We could revive the configure test and do\n> \n> #if defined(HAVE_DBL_MIN_PROBLEM) && defined(__powerpc__)\n> \n> However, I really wonder whether there is any point. It may be worth\n> noting that the original version of the patch read \"#if ... defined(PPC)\".\n> It's quite likely that the current test, \"... defined(__powerpc__)\",\n> doesn't even fire on the old compiler that the patch is intended for.\n> If so, this is dead code and has been since release 6.5.\n\nOk, let's remove the code in datetime.c and see anybody would come up\nand complain...\n--\nTatsuo Ishii\n", "msg_date": "Tue, 13 Mar 2001 17:50:46 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Do we still need PowerPC-specific\n timestamp_is_current/epoch?" }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > After further research, I remembered that we used to have \"DB_MIN\n> > check\" in configure back to 6.4.2:\n> > I don't know wht it was removed,\n> \n> Hmm. 
Digging in the CVS logs shows that it was removed by Bruce in\n> configure.in version 1.262, 1999/07/18, with the unedifying log message\n> \"configure cleanup\".\n> \n> A guess is that he took it out because it wasn't being used anywhere.\n\nYes, I checked all configure flags and removed the ones not being used.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 10:32:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we still need PowerPC-specific timestamp_is_current/epoch?" } ]
[ { "msg_contents": "I have applied the following patch to make SIGTERM backend exit clearer\nin the the server logs. \"The system\" is not really shutting down, but\n\"the backend\" is shutting down.\n\nShould we be showing the PID's in the server logs more. Do we need to\nenable that somewhere? Seems they are very hard to follow without\nPID's.\n\n---------------------------------------------------------------------------\n\n\nIndex: src/backend/tcop/postgres.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/tcop/postgres.c,v\nretrieving revision 1.208\ndiff -c -r1.208 postgres.c\n*** src/backend/tcop/postgres.c\t2001/02/24 02:04:51\t1.208\n--- src/backend/tcop/postgres.c\t2001/03/11 19:04:47\n***************\n*** 1022,1028 ****\n \t\tProcDiePending = false;\n \t\tQueryCancelPending = false;\t/* ProcDie trumps QueryCancel */\n \t\tImmediateInterruptOK = false; /* not idle anymore */\n! \t\telog(FATAL, \"The system is shutting down\");\n \t}\n \tif (QueryCancelPending)\n \t{\n--- 1022,1028 ----\n \t\tProcDiePending = false;\n \t\tQueryCancelPending = false;\t/* ProcDie trumps QueryCancel */\n \t\tImmediateInterruptOK = false; /* not idle anymore */\n! \t\telog(FATAL, \"Backend shutting down\");\n \t}\n \tif (QueryCancelPending)\n \t{\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Mar 2001 14:09:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "SIGTERM/FATAL error" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have applied the following patch to make SIGTERM backend exit clearer\n> in the the server logs. \"The system\" is not really shutting down, but\n> \"the backend\" is shutting down.\n\nThis is a non-improvement. 
Please reverse it. SIGTERM would only be\nsent to a backend if the database system were in fact shutting down.\n\n> Should we be showing the PID's in the server logs more. Do we need to\n> enable that somewhere?\n\nThere's already an option for that, but it should not be forced on since\nit will be redundant for those using syslog.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Mar 2001 19:49:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SIGTERM/FATAL error " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have applied the following patch to make SIGTERM backend exit clearer\n> > in the the server logs. \"The system\" is not really shutting down, but\n> > \"the backend\" is shutting down.\n> \n> This is a non-improvement. Please reverse it. SIGTERM would only be\n> sent to a backend if the database system were in fact shutting down.\n\nReversed.\n\nBut why say the system is shutting down if the backend is shutting down.\nSeems the postmaster should say system shutting down and each backend\nshould say it is shutting itself down. The way it is now, don't we get\na \"system shutting down\" message for every running backend?\n\n\n> \n> > Should we be showing the PID's in the server logs more. Do we need to\n> > enable that somewhere?\n> \n> There's already an option for that, but it should not be forced on since\n> it will be redundant for those using syslog.\n\nDoes syslog show the pid?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Mar 2001 19:53:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SIGTERM/FATAL error" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have applied the following patch to make SIGTERM backend exit clearer\n> > in the the server logs. \"The system\" is not really shutting down, but\n> > \"the backend\" is shutting down.\n> \n> This is a non-improvement. Please reverse it. SIGTERM would only be\n> sent to a backend if the database system were in fact shutting down.\n\nAlso, what signal should people send to a backend to kill just that\nbackend? In my reading of the code, I see:\n\n pqsignal(SIGTERM, die); /* cancel current query and exit */\n pqsignal(SIGQUIT, die); /* could reassign this sig for another use */\n\nAre either of them safe?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Mar 2001 19:57:10 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SIGTERM/FATAL error" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> This is a non-improvement. Please reverse it. SIGTERM would only be\n>> sent to a backend if the database system were in fact shutting down.\n\n> But why say the system is shutting down if the backend is shutting down.\n> Seems the postmaster should say system shutting down and each backend\n> should say it is shutting itself down. The way it is now, don't we get\n> a \"system shutting down\" message for every running backend?\n\nYou are failing to consider that the primary audience for this error\nmessage is not the system log, but the clients of the backends. 
They\nare going to see only one message, and they are going to want to know\n*why* their backend shut down.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Mar 2001 20:06:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SIGTERM/FATAL error " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> This is a non-improvement. Please reverse it. SIGTERM would only be\n> >> sent to a backend if the database system were in fact shutting down.\n> \n> > But why say the system is shutting down if the backend is shutting down.\n> > Seems the postmaster should say system shutting down and each backend\n> > should say it is shutting itself down. The way it is now, don't we get\n> > a \"system shutting down\" message for every running backend?\n> \n> You are failing to consider that the primary audience for this error\n> message is not the system log, but the clients of the backends. They\n> are going to see only one message, and they are going to want to know\n> *why* their backend shut down.\n\nOops, I get it now. Makes perfect sense. Thanks.\n\nI am using the SIGTERM in my administration application to allow\nadministrators to kill individual backends. That is why I noticed the\nmessage.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Mar 2001 20:08:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SIGTERM/FATAL error" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Also, what signal should people send to a backend to kill just that\n> backend?\n\nI don't know that we do or should recommend such a thing at all ...\nbut SIGTERM should work if anything does (and it is, not coincidentally,\nthe default kind of signal for kill(1)).\n\n> In my reading of the code, I see:\n\n> pqsignal(SIGTERM, die); /* cancel current query and exit */\n> pqsignal(SIGQUIT, die); /* could reassign this sig for another use */\n\nThis is already obsolete ;=) ... I'm just waiting on Vadim's approval of\nmy xlog mods before committing a change in SIGQUIT handling --- see\ndiscussion a couple days ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Mar 2001 20:10:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SIGTERM/FATAL error " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Also, what signal should people send to a backend to kill just that\n> > backend?\n> \n> I don't know that we do or should recommend such a thing at all ...\n> but SIGTERM should work if anything does (and it is, not coincidentally,\n> the default kind of signal for kill(1)).\n> \n> > In my reading of the code, I see:\n> \n> > pqsignal(SIGTERM, die); /* cancel current query and exit */\n> > pqsignal(SIGQUIT, die); /* could reassign this sig for another use */\n> \n> This is already obsolete ;=) ... 
I'm just waiting on Vadim's approval of\n> my xlog mods before committing a change in SIGQUIT handling --- see\n> discussion a couple days ago.\n\nYes, I knew that was coming, so I stayed with SIGTERM because it should\nwork on 7.0 and 7.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Mar 2001 20:11:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SIGTERM/FATAL error" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> This is a non-improvement. Please reverse it. SIGTERM would only be\n> >> sent to a backend if the database system were in fact shutting down.\n> \n> > But why say the system is shutting down if the backend is shutting down.\n> > Seems the postmaster should say system shutting down and each backend\n> > should say it is shutting itself down. The way it is now, don't we get\n> > a \"system shutting down\" message for every running backend?\n> \n> You are failing to consider that the primary audience for this error\n> message is not the system log, but the clients of the backends. They\n> are going to see only one message, and they are going to want to know\n> *why* their backend shut down.\n> \n\nHow could the backend know why it is shut down ?\nIs it prohibited to kill a backend individually ?\nWhat is a real system shut down message ? 
\nI agree with Bruce to change the backend shut down\nmessage.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Mon, 12 Mar 2001 10:33:06 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: SIGTERM/FATAL error" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am using the SIGTERM in my administration application to allow\n> administrators to kill individual backends. That is why I noticed the\n> message.\n\nHm. Of course the backend cannot tell the difference between this use\nof SIGTERM and its normal use for system shutdown. Maybe we could\ncome up with a compromise message --- although I suspect a compromise\nwould just be more confusing.\n\nA more significant issue is whether it's really a good idea to start\nencouraging dbadmins to go around killing individual backends. I think\nthis is likely to be a Bad Idea (tm). We have no experience (that I know\nof) with applying SIGTERM for any other purpose than system shutdown or\nforced restart. Are you really prepared to guarantee that shared memory\nwill always be left in a consistent state? That there will be no locks\nleft locked, etc?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Mar 2001 20:54:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SIGTERM/FATAL error " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am using the SIGTERM in my administration application to allow\n> > administrators to kill individual backends. That is why I noticed the\n> > message.\n> \n> Hm. Of course the backend cannot tell the difference between this use\n> of SIGTERM and its normal use for system shutdown. 
Maybe we could\n> come up with a compromise message --- although I suspect a compromise\n> would just be more confusing.\n\nHow about \"Connection terminated by administrator\", or something like\nthat.\n\n\n> \n> A more significant issue is whether it's really a good idea to start\n> encouraging dbadmins to go around killing individual backends. I think\n> this is likely to be a Bad Idea (tm). We have no experience (that I know\n> of) with applying SIGTERM for any other purpose than system shutdown or\n> forced restart. Are you really prepared to guarantee that shared memory\n> will always be left in a consistent state? That there will be no locks\n> left locked, etc?\n\nNot sure. My admin tool is more proof of concept at this point. I\nthink ultimately we will need to allow administrators to perform such individual\nbackend terminations.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Mar 2001 20:59:57 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SIGTERM/FATAL error" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Not sure. My admin tool is more proof of concept at this point. I\n> think ultimately we will need to allow administrators to perform such individual\n> backend terminations.\n\nI hope the tool is set up to encourage them to try something safer\n(ie, CANCEL QUERY) first...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Mar 2001 21:11:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SIGTERM/FATAL error " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Not sure. My admin tool is more proof of concept at this point. 
I\n> > think ultimately we will need to allow administrators to perform such individual\n> > backend terminations.\n> \n> I hope the tool is set up to encourage them to try something safer\n> (ie, CANCEL QUERY) first...\n\nYes, the CANCEL button appears before the TERMINATE button.\n\nOn SIGTERM, I think we are fooling ourselves if we think people aren't\nSIGTERM'ing individual backends. Terminating individual db connections\nis a very common job for an administrator. If SIGTERM doesn't cause\nproper shutdown for individual backends, I think it should.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Mar 2001 21:36:57 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SIGTERM/FATAL error" }, { "msg_contents": "At 08:59 PM 11-03-2001 -0500, Bruce Momjian wrote:\n>How about \"Connection terminated by administrator\", or something like\n>that.\n\nI prefer something closer to the truth.\n\ne.g.\n\"Received SIGTERM, cancelling query and exiting\"\n(assuming it actually cancels the query).\n\nBut maybe I'm weird.\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 12 Mar 2001 16:14:19 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: SIGTERM/FATAL error" }, { "msg_contents": "\nTom, is there new wording we can agree on?\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Not sure. My admin tool is more proof of concept at this point. 
I\n> > think ultimately we will need to allow administrators to such individual\n> > backend terminations.\n> \n> I hope the tool is set up to encourage them to try something safer\n> (ie, CANCEL QUERY) first...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 10:05:47 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: SIGTERM/FATAL error" } ]
[ { "msg_contents": "I have completed version 0.1 of pgtop, a PostgreSQL session monitor. \nScreenshot attached.\n\nI show the currently running query by using gdb to attach to a running\nbackend. The backend must have debug symbols and the 'postgres' binary\nmust be in the current path. I check for both of these in the program.\n\nI implement Cancel using SIGINT and Terminate using SIGTERM.\n\nI haven't done SORT yet.\n\nYou can get it from:\n\n\tftp://candle.pha.pa.us/pub/postgresql/pgtop.tcl\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026", "msg_date": "Sun, 11 Mar 2001 15:14:07 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pgtop version 0.1" } ]
[ { "msg_contents": "Hello,\n\nWe have built a database application built around PostgreSQL v7.0.3. \nAnd we're very happy with PostgreSQL functionality & performance.\n\nIn a live situation 50-100 clients are inserting and updating database\nrecords. It is very important that we have all the data available on\nthe back-up servers in case the main server fails.\n\nAfter looking briefly at the RServ 0.1 replication software we decided\nit was not an option (for now) (it had serious problems in our tests,\nthere is no documentation available and inquiries about it to PostgreSQL,\nInc. remain unanswered unfortunately).\n\nSo we decided to use a compromise and do frequent pg_dump runs from the\nbackup machines against the main server. It all seems to work fine.\n\nBut I still have some worries: is it OK to run pg_dump on *live* databases ?\nDo you get consistent dumps from it when there is inserting and updating\ngoing on ? \n\nIn the v7.0.3 documentation I can't find any info on this issue.\nBut in the v7.1 docu:\n\n http://www.de.postgresql.org/devel-corner/docs/admin/backup.html\n\nI read:\n\n \"Dumps created by pg_dump are internally consistent, that is, updates\n to the database while pg_dump is running will not be in the dump.\"\n \nThat looks great. But we are using v7.0.3. Does this mean that the v7.1\ndocumentation is just more detailed and that the same applies to v7.0.3 ?\nOr is this a feature that is available only with v7.1 but not with v7.0.3 ?\n\nAny help is much appreciated !\n\n Friendly greetings,\n Rob van Nieuwkerk\n", "msg_date": "Sun, 11 Mar 2001 21:21:58 +0100 (CET)", "msg_from": "Rob van Nieuwkerk <robn@verdi.et.tudelft.nl>", "msg_from_op": true, "msg_subject": "pg_dump consistent on live database with 7.0.3 ?" 
}, { "msg_contents": "Rob van Nieuwkerk writes:\n\n> But I still have some worries: is it OK to run pg_dump on *live* databases ?\n\nYes.\n\n> Do you get consistent dumps from it when there is inserting and updating\n> going on ?\n\nYes.\n\n> \"Dumps created by pg_dump are internally consistent, that is, updates\n> to the database while pg_dump is running will not be in the dump.\"\n>\n> That looks great. But we are using v7.03. Does this mean that the v7.1\n> documentation is just more detailed and that the same applies to v7.03 ?\n\nYes.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Sun, 11 Mar 2001 22:10:39 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump consistent on live database with 7.0.3 ?" } ]
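The compromise Rob describes, frequent pg_dump runs pulled from the backup machines, is typically driven from cron. A hedged sketch; the host name, database name, and target path below are placeholders, not values from this thread:

```crontab
# crontab fragment on a backup server: pull an hourly dump of the live DB.
# pg_dump reads everything in one transaction-consistent snapshot, so
# concurrent inserts/updates on the main server do not make the dump
# inconsistent (the answers above confirm this holds for 7.0.3 as well).
0 * * * *   pg_dump -h mainserver proddb > /backup/proddb.dump
```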
[ { "msg_contents": "I am trying to debug my socket-level interface to the backend, which\nimplements the 6.4 protocol. It works for general queries, but I have\na problem with large objects.\n\nlo_create and lo_unlink seem to work OK; I get an oid which looks ok\nand there is a corresponding xinv??? file in the base/ directory.\nlo_open returns 0 as a file descriptor. However, following up with one\nof the other lo functions which take descriptor arguments (such as\nlo_write or lo_tell) fails with\n\n ERROR: lo_tell: invalid large object descriptor (0)\n\nLooking at be-fsstubs.c it seems that this arises when cookies[fd] is\nNULL. I don't know what this might come from: the lo_tell is sent\nright after the lo_open, on the same connection.\n\nRunning the sample lo program in C works, so I suppose the problem\nmust come from the bytes I'm sending. Any ideas what could cause this? \n\n\nPostgreSQL 7.0.3 on sparc-sun-solaris2.5.1, compiled by gcc 2.95.2\n\n-- \nEric Marsden <URL:http://www.laas.fr/~emarsden/>\n", "msg_date": "11 Mar 2001 22:24:43 +0100", "msg_from": "Eric Marsden <emarsden@mail.dotcom.fr>", "msg_from_op": true, "msg_subject": "problem with fe/be protocol and large objects" }, { "msg_contents": "On Monday 12 March 2001 03:24, Eric Marsden wrote:\n> I am trying to debug my socket-level interface to the backend, which\n> implements the 6.4 protocol. It works for general queries, but I have\n> a problem with large objects.\n>\n> lo_create and lo_unlink seem to work OK; I get an oid which looks ok\n> and there is a corresponding xinv??? file in the base/ directory.\n> lo_open returns 0 as a file descriptor. However, following up with one\n> of the other lo functions which take descriptor arguments (such as\n> lo_write or lo_tell) fails with\n>\n> ERROR: lo_tell: invalid large object descriptor (0)\n\nYou should do ANY operations with LOs in transaction.\n\n> Looking at be-fsstubs.c it seems that this arises when cookies[fd] is\n> NULL. 
I don't know what this might come from: the lo_tell is sent\n> right after the lo_open, on the same connection.\n>\n> Running the sample lo program in C works, so I suppose the problem\n> must come from the bytes I'm sending. Any ideas what could cause this?\n>\n>\n> PostgreSQL 7.0.3 on sparc-sun-solaris2.5.1, compiled by gcc 2.95.2\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 13 Mar 2001 21:15:17 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: problem with fe/be protocol and large objects" }, { "msg_contents": "Eric Marsden <emarsden@mail.dotcom.fr> writes:\n> lo_open returns 0 as a file descriptor. However, following up with one\n> of the other lo functions which take descriptor arguments (such as\n> lo_write or lo_tell) fails with\n\n> ERROR: lo_tell: invalid large object descriptor (0)\n\nAre you remembering to wrap this sequence in a transaction block\n(begin/end)? LO descriptors are only valid till end of transaction.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 10:52:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: problem with fe/be protocol and large objects " }, { "msg_contents": ">>>>> \"tl\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n ecm> ERROR: lo_tell: invalid large object descriptor (0)\n\n tl> Are you remembering to wrap this sequence in a transaction block\n tl> (begin/end)? LO descriptors are only valid till end of\n tl> transaction.\n\nthat was it, thanks. 
The code used to work with PostgreSQL 6.3, and I\nhadn't seen the relevant warning in the programmer's guide.\n\n-- \nEric Marsden <URL:http://www.laas.fr/~emarsden/>\n", "msg_date": "13 Mar 2001 17:31:33 +0100", "msg_from": "Eric Marsden <emarsden@mail.dotcom.fr>", "msg_from_op": true, "msg_subject": "Re: problem with fe/be protocol and large objects" } ]
[ { "msg_contents": "doxygen is a great tool to make code documentation...\n\nIt is particularly easy for developers to add /** **/ comments instead of\n/* */ comments that will be extracted by doxygen to make documentation on\nfunctions, structures,...\n\nThen you run doxygen from cron and you get automatic API documentation. Maybe\nit is a job for the webmaster of the PG web site.\n\nI strongly advise all developers to use it...\n\nFranck Martin\nNetwork and Database Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: franck@sopac.org <mailto:franck@sopac.org> \nWeb site: http://www.sopac.org/\n<http://www.sopac.org/> Support FMaps: http://fmaps.sourceforge.net/\n<http://fmaps.sourceforge.net/> \n\nThis e-mail is intended for its addressees only. Do not forward this e-mail\nwithout approval. The views expressed in this e-mail may not necessarily be\nthe views of SOPAC.\n\n\n\n-----Original Message-----\nFrom: Nymia [mailto:nymia@qwest.net]\nSent: Sunday, 11 March 2001 12:51 \nTo: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] doxygen & PG\n\n\nThe site mentioned was created by me. I used doxygen to create those html\nfiles. And it's just the first stab. It doesn't have doxygen tags yet;\nthat's why it looks like that.\n\nThe reason why I made it was to make it easier for me ( and others as well )\nto read the code though. 
So far, I've learned a lot using this technique.\n\nThere is another one I'm working on and it's at\nhttp://members.fortunecity.com/nymia/vsta/boot_layout.html\n\n\n\n----- Original Message -----\nFrom: Nathan Myers <ncm@zembu.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Saturday, March 10, 2001 3:48 PM\nSubject: Re: [HACKERS] doxygen & PG\n\n\n> On Sat, Mar 10, 2001 at 06:29:37PM -0500, Tom Lane wrote:\n> > ncm@zembu.com (Nathan Myers) writes:\n> > > Is this page\n> > > http://members.fortunecity.com/nymia/postgres/dox/backend/html/\n> > > common knowledge?\n> >\n> > Interesting, but bizarrely incomplete. (Yeah, we have only ~100\n> > struct types ... sure ...)\n>\n> It does say \"version 0.0.1\".\n>\n> What was interesting to me is that the interface seems a lot more\n> helpful than the current CVS web gateway. If it were to be completed,\n> and could be kept up to date automatically, something like it could\n> be very useful.\n>\n> Nathan Myers\n> ncm@zembu.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n", "msg_date": "Mon, 12 Mar 2001 10:04:38 +1200", "msg_from": "Franck Martin <Franck@sopac.org>", "msg_from_op": true, "msg_subject": "RE: doxygen & PG" } ]
[ { "msg_contents": "I have renamed pgtop.tcl to pgmonitor. I think the new name is clearer.\n\n\tftp://candle.pha.pa.us/pub/postgresql/pgmonitor\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Mar 2001 21:39:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Performance monitor renamed" } ]
[ { "msg_contents": "\n\tWhile testing some existing database applications on 7.1beta4 on\nmy Sparc 20 running Debian GNU/Linux 2.2, I got the following error on\nattempting to do a vacuum of a table:\n\nNOTICE: FlushRelationBuffers(jobs, 1399): block 953 is referenced (private 0, global 1)\nERROR! Can't vacuum table Jobs! ERROR: VACUUM (repair_frag): FlushRelationBuffers returned -2\n\nThe first line is the error message from pgsql, while the second line is\nthe error message from my application (using perl Pg module) reporting the\nerror message returned. It appears that this should only be a warning\n(i.e. NOTICE, not FATAL or ERROR), but it caused the Pg module to throw an\nerror anyway. My application of course checks for errors, see the error\nthrown by Pg and dies assuming the error was fatal.\n\tThis error occurred after a load of about 50k records into the\nreferenced table, a load of 50k records total into a few other tables, and\nthen a few clean up queries. The part of the application I was testing is \na database load from another (old, closed source) database. The vacuum\nwas at the end of the of the database load, as part of final cleanup\nroutines.\n\tSo, is this a problem with pgsql in general, specific to\nLinux/Sparc, or a bug in Pg causing it to be too paranoid? 
Thanks.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n\n", "msg_date": "Mon, 12 Mar 2001 07:06:08 -0700 (MST)", "msg_from": "Ryan Kirkpatrick <pgsql@rkirkpat.net>", "msg_from_op": true, "msg_subject": "Vaccuum Failure w/7.1beta4 on Linux/Sparc" }, { "msg_contents": "Ryan Kirkpatrick <pgsql@rkirkpat.net> writes:\n> \tWhile testing some existing database applications on 7.1beta4 on\n> my Sparc 20 running Debian GNU/Linux 2.2, I got the following error on\n> attempting to do a vacuum of a table:\n\n> NOTICE: FlushRelationBuffers(jobs, 1399): block 953 is referenced (private 0, global 1)\n> ERROR! Can't vacuum table Jobs! ERROR: VACUUM (repair_frag): FlushRelationBuffers returned -2\n\nThis is undoubtedly a backend bug. Can you generate a reproducible test\ncase?\n\n> \tSo, is this a problem with pgsql in general, specific to\n> Linux/Sparc, or a bug in Pg causing it to be too paranoid? Thanks.\n\nPg did get an ERROR from the vacuum command (note second line). Yes,\nthere is paranoia right up the line here, but I think that's a good\nthing. Somewhere someone is failing to release a buffer refcount,\nand we don't know what other consequences that bug might have. 
Better\nto err on the side of caution.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 09:54:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vaccuum Failure w/7.1beta4 on Linux/Sparc " }, { "msg_contents": "On Mon, 12 Mar 2001, Tom Lane wrote:\n\n> Ryan Kirkpatrick <pgsql@rkirkpat.net> writes:\n> > \tWhile testing some existing database applications on 7.1beta4 on\n> > my Sparc 20 running Debian GNU/Linux 2.2, I got the following error on\n> > attempting to do a vacuum of a table:\n> \n> > NOTICE: FlushRelationBuffers(jobs, 1399): block 953 is referenced (private 0, global 1)\n> > ERROR! Can't vacuum table Jobs! ERROR: VACUUM (repair_frag): FlushRelationBuffers returned -2\n> \n> This is undoubtedly a backend bug. Can you generate a reproducible test\n> case?\n\n\tI will work on it... The code that eventually caused it does a lot\nof different things so it will take me a little while to pair it down to\na small, self-contained test case. I should have it by this weekend.\n\tAlso, two other details I forgot to put in my first email:\n\na) Running 'vaccumdb -t Jobs {dbname}' about 24 hours after the error (the\nbackend had been completely idle during this time), ran successfully\nwithout error.\n\nb) The disk space where the pgsql database is located is NFS mounted from\nmy Alpha (running Linux of course :). [0] Might this cause the error?\n\n[0] Yes, I know running pgsql on an NFS mount is probably not the greatest\nidea, but the system only has 1GB of local disk space (almost all used for\nthe system) and is running as development server only. No valuable data is\nentrusted to it. Hopefully I will have more local disk space in the near\nfuture.\n\n> Pg did get an ERROR from the vacuum command (note second line). Yes,\n> there is paranoia right up the line here, but I think that's a good\n> thing. 
Somewhere someone is failing to release a buffer refcount,\n> and we don't know what other consequences that bug might have. Better\n> to err on the side of caution.\n\n\tA resonable amount of paranoia is indeed always healthy. :) Just\nwanted to know if this might have been a known and harmless warning. I\nguess not. I will work on a test case and get back hopefully by the\nweekend. Thanks for your help.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Mon, 12 Mar 2001 20:39:54 -0700 (MST)", "msg_from": "Ryan Kirkpatrick <pgsql@rkirkpat.net>", "msg_from_op": true, "msg_subject": "Re: Vaccuum Failure w/7.1beta4 on Linux/Sparc " }, { "msg_contents": "On Mon, 12 Mar 2001, Ryan Kirkpatrick wrote:\n\n> \tWhile testing some existing database applications on 7.1beta4 on\n> my Sparc 20 running Debian GNU/Linux 2.2, I got the following error on\n> attempting to do a vacuum of a table:\n> \n> NOTICE: FlushRelationBuffers(jobs, 1399): block 953 is referenced (private 0, global 1)\n> ERROR! Can't vacuum table Jobs! ERROR: VACUUM (repair_frag): FlushRelationBuffers returned -2\n\n\tI moved the data directory to a local parition (from the NFS\nmounted one it was on) and reran my application. It worked fine this time,\nvaccuming tables with out errors and the above error was never seen. Looks\nlike pgsql is not NFS safe, or at least with Linux's implementation. \n\tThis is good news in that it is not a serious issue, but bad news\nin that now I really do have to hurry up and get more local space for this\nbox to do anything useful with it. :)\n\tThanks for everyone's help. 
TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Sun, 25 Mar 2001 16:45:09 -0700 (MST)", "msg_from": "Ryan Kirkpatrick <pgsql@rkirkpat.net>", "msg_from_op": true, "msg_subject": "Re: Vaccuum Failure w/7.1beta4 on Linux/Sparc -- FALSE ALARM" }, { "msg_contents": "Ryan Kirkpatrick <pgsql@rkirkpat.net> writes:\n> On Mon, 12 Mar 2001, Ryan Kirkpatrick wrote:\n>> While testing some existing database applications on 7.1beta4 on\n>> my Sparc 20 running Debian GNU/Linux 2.2, I got the following error on\n>> attempting to do a vacuum of a table:\n>> \n>> NOTICE: FlushRelationBuffers(jobs, 1399): block 953 is referenced (private 0, global 1)\n>> ERROR! Can't vacuum table Jobs! ERROR: VACUUM (repair_frag): FlushRelationBuffers returned -2\n\nThis is probably explained by the problem we found a few days ago with\nBufferSync acquiring locks it shouldn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 00:16:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Vaccuum Failure w/7.1beta4 on Linux/Sparc -- FALSE ALARM " }, { "msg_contents": "On Mon, 26 Mar 2001, Tom Lane wrote:\n\n> Ryan Kirkpatrick <pgsql@rkirkpat.net> writes:\n> > On Mon, 12 Mar 2001, Ryan Kirkpatrick wrote:\n> >> While testing some existing database applications on 7.1beta4 on\n> >> my Sparc 20 running Debian GNU/Linux 2.2, I got the following error on\n> >> attempting to do a vacuum of a table:\n> >> \n> >> NOTICE: FlushRelationBuffers(jobs, 1399): block 953 is referenced (private 0, global 1)\n> >> ERROR! Can't vacuum table Jobs! 
ERROR: VACUUM (repair_frag): FlushRelationBuffers returned -2\n> \n> This is probably explained by the problem we found a few days ago with\n> BufferSync acquiring locks it shouldn't.\n\n\tYea, it was. I just tried RC1 on the Sparc with my application,\nwith the data directory NFS mounted, and it ran without errors\nnow. Thanks. :)\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Tue, 27 Mar 2001 20:32:02 -0700 (MST)", "msg_from": "Ryan Kirkpatrick <pgsql@rkirkpat.net>", "msg_from_op": true, "msg_subject": "Re: Re: Vaccuum Failure w/7.1beta4 on Linux/Sparc -- FALSE\n ALARM" } ]
[ { "msg_contents": "At 06:11 06/03/01 -0500, Vince Vielhaber wrote:\n\n>This just came to the webmaster mailbox:\n\nSorry for the delay, busy week...\n\n\n>-------\n>Most of the top banner links on http://jdbc.postgresql.org (like\n>Documentation, Tutorials, Resources, Development) throw up 404s if\n>followed. Thought you ought to know.\n>\n>Still trying to find the correct driverClass/connectString for the\n>Postgres JDBC driver...\n\nThat should be on the site already (in fact it's been on there for about 3 \nyears now ;-)\n\n>-------\n>\n>Who maintains this site? It's certainly not me. From looking\n>at the page I'm guessing Peter Mount, can we get some kind of\n>prominent contact info on it? I've had a few emails on it so\n>far.\n\nBottom of every page (part of the template) is both my name and email \naddress ;-)\n\nPeter\n\n\n>Vince.\n>--\n>==========================================================================\n>Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n>==========================================================================\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Mon, 12 Mar 2001 14:35:41 +0000", "msg_from": "Peter Mount <peter@retep.org.uk>", "msg_from_op": true, "msg_subject": "Re: Banner links not working (fwd)" }, { "msg_contents": "On Mon, 12 Mar 2001, Peter Mount wrote:\n\n> Bottom of every page (part of the template) is both my name and email\n> address ;-)\n\nCan we slightly enlarge the font?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 
Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 12 Mar 2001 11:41:57 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Banner links not working (fwd)" }, { "msg_contents": "At 11:41 12/03/01 -0500, Vince Vielhaber wrote:\n>On Mon, 12 Mar 2001, Peter Mount wrote:\n>\n> > Bottom of every page (part of the template) is both my name and email\n> > address ;-)\n>\n>Can we slightly enlarge the font?\n\nCan do. What size do you think is best?\n\nI've always used size=1 for that line...\n\nPeter\n\n\n>Vince.\n>--\n>==========================================================================\n>Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n>==========================================================================\n\n", "msg_date": "Mon, 12 Mar 2001 20:05:26 +0000", "msg_from": "Peter Mount <peter@retep.org.uk>", "msg_from_op": true, "msg_subject": "Re: Banner links not working (fwd)" }, { "msg_contents": "On Mon, 12 Mar 2001, Peter Mount wrote:\n\n> At 11:41 12/03/01 -0500, Vince Vielhaber wrote:\n> >On Mon, 12 Mar 2001, Peter Mount wrote:\n> >\n> > > Bottom of every page (part of the template) is both my name and email\n> > > address ;-)\n> >\n> >Can we slightly enlarge the font?\n>\n> Can do. 
What size do you think is best?\n>\n> I've always used size=1 for that line...\n\nI think a 3 is standard size, so at least a 2 should be plenty.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 12 Mar 2001 15:55:01 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Banner links not working (fwd)" }, { "msg_contents": "On Mon, Mar 12, 2001 at 08:05:26PM +0000, Peter Mount wrote:\n> At 11:41 12/03/01 -0500, Vince Vielhaber wrote:\n> >On Mon, 12 Mar 2001, Peter Mount wrote:\n> >\n> > > Bottom of every page (part of the template) is both my name and email\n> > > address ;-)\n> >\n> >Can we slightly enlarge the font?\n> \n> Can do. What size do you think is best?\n> \n> I've always used size=1 for that line...\n\nAbsolute font sizes in HTML are always a mistake. size=\"-1\" would do.\n\n--\nNathan Myers\nncm@zembu.com\n", "msg_date": "Mon, 12 Mar 2001 13:38:09 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Banner links not working (fwd)" } ]
[ { "msg_contents": "\n> Ok, I've made changes in xlog.c and run tests: 50 clients inserted\n> (int4, text[1-256]) into 50 tables,\n> -B 16384, -wal_buffers 256, -wal_files 0.\n> \n> FSYNC: 257tps\n> O_DSYNC: 333tps \n> \n> Just(?) 30% faster, -:(\n\nFirst of all, if you ask me, that is one hell of an improvement :-)\nIt shows, that WAL write was actually the bottleneck in this particular case.\nThe bottleneck may now have shifted to some other resource.\n\nIt would probably also be good, to actually write more than one \npage with one call instead of the current \"for (;XLByteLT...)\" loop\nin XLogWrite. The reasoning is, 1. that for each call to write, the OS\ntakes your timeslice away, allowing other backends to work, \nand thus reposition the disk head (for selects). \nand second measurements with tfsync.c:\n\nzeu@a82101002:~> xlc -O2 tfsync.c -DINIT_WRITE -DUSE_ODSYNC -DBUFFERS=1 -o tfsync\nzeu@a82101002:~> time tfsync\nreal 0m26.174s\nuser 0m0.040s\nsys 0m2.920s\nzeu@a82101002:~> xlc -O2 tfsync.c -DINIT_WRITE -DUSE_ODSYNC -DBUFFERS=8 -o tfsync\nzeu@a82101002:~> time tfsync\nreal 0m8.950s\nuser 0m0.010s\nsys 0m2.020s\n\nAndreas\n\nPS: to Tom, on AIX O_SYNC and O_DSYNC does not make a difference with tfsync.c,\nboth are comparable to your O_DSYNC measurements, maybe this is because of the \njfs journal, where only one write to journal is necessary for all fs work (inode...).\n", "msg_date": "Mon, 12 Mar 2001 16:25:28 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: WAL does not recover gracefully from ou\n\tt-of -dis k-sp ace" } ]
[ { "msg_contents": "\n\n\nPeter Eisentraut writes:\n\n> Michal Maruška writes:\n\n> > What about (optionally) printing the type of the column data?\n\n> > io | tu | tipo | data\n> > int | int | int2 | date\n> > --------+-------+------+------------\n> > 102242 | 26404 | 1203 | 2000-11-22\n> > (1 row)\n\n> I've been meaning to implement this for a while. Now that someone is\n> seemingly interested I might prioritize it.\n\n\n\nI have realized that the querytree is too much of information (imagine UNION queries).\n\nSo I think this feature (types of columns) is very good if accompanied with\ntools to declare easily some/many clone types: eg int-> ID, int2 -> height .... as\nthe type is a nice invariant.\n\n", "msg_date": "Tue, 13 Mar 2001 00:15:16 +0100 (MET)", "msg_from": "\"Michal Maruška\" <mmc@maruska.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: psql missing feature" } ]
[ { "msg_contents": "Hi Philip,\n\nI have not updated from CVS in a few days, but I suspect you haven't\nnoticed this yet: given a mixed-case table name and a scenario that\nrequires emitting UPDATE pg_class commands, pg_dump puts out\nthings like\n\nUPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" ~* '\"Table\"';\n\nBEGIN TRANSACTION;\nCREATE TEMP TABLE \"tr\" (\"tmp_relname\" name, \"tmp_reltriggers\" smallint);\nINSERT INTO \"tr\" SELECT C.\"relname\", count(T.\"oid\") FROM \"pg_class\" C, \"pg_trigger\" T WHERE C.\"oid\" = T.\"tgrelid\" AND C.\"relname\" ~* '\"Table\"' GROUP BY 1;\nUPDATE \"pg_class\" SET \"reltriggers\" = TMP.\"tmp_reltriggers\" FROM \"tr\" TMP WHERE\n\"pg_class\".\"relname\" = TMP.\"tmp_relname\";\nDROP TABLE \"tr\";\nCOMMIT TRANSACTION;\n\nOf course those ~* '\"Table\"' clauses aren't going to work too well; the\nidentifier should NOT be double-quoted inside the pattern.\n\nActually, this should not be using ~* in the first place --- why isn't\nit just using WHERE relname = 'Table' ??? Seems like it's not cool to\ngratuitously reset the trigger counts on other tables that contain Table\nas a substring of their names.\n\nAnd while we're at it, the temp table hasn't been necessary for a\nrelease or three. That whole transaction should be replaced by\n\nUPDATE pg_class SET reltriggers =\n\t(SELECT count(*) FROM pg_trigger where pg_class.oid = tgrelid)\nWHERE relname = 'Table';\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 18:41:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Small bug in pg_dump" }, { "msg_contents": "At 18:41 12/03/01 -0500, Tom Lane wrote:\n>\n>UPDATE pg_class SET reltriggers =\n>\t(SELECT count(*) FROM pg_trigger where pg_class.oid = tgrelid)\n>WHERE relname = 'Table';\n>\n\nFixed & done...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 14 Mar 2001 00:19:43 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Small bug in pg_dump" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Fixed & done...\n\nOnly part of the way there: pg_dump is still pretty seriously broken for\nmixed-case table names. Observe:\n\nregression=# create table \"Foo\" (z int);\nCREATE\nregression=# \\q\n$ pg_dump -a -t '\"Foo\"' regression\n--\n-- Selected TOC Entries:\n--\n--\n-- Data for TOC Entry ID 1 (OID 1845087) TABLE DATA \"Foo\"\n--\n\n\\connect - postgres\n-- Disable triggers\nUPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" = '\"Foo\"';\n\nCOPY \"Foo\" FROM stdin;\n\\.\n-- Enable triggers\nUPDATE pg_class SET reltriggers = (SELECT count(*) FROM pg_trigger where pg_class.oid = tgrelid) WHERE relname = '\"Foo\"';\n\n$\n\nThese UPDATEs will certainly not work. On digging into the code, the\nproblem seems to be one of premature optimization: fmtId() is applied\nto the table name when the ArchiveEntry is created, rather than when\nprinting out from an ArchiveEntry. So there is no way to get the\nundecorated table name for use in these commands.\n\nIt seems to me that you should remove the fmtId() from calls to\nArchiveEntry. Then add it back where an archive entry's name is\nbeing printed (and quoting is appropriate). It might even make\nsense for an ArchiveEntry to store both forms of the name, and then\nusing code could just select the form wanted instead of calling\nfmtId repeatedly. 
Not sure.\n\nBTW, making the -t switch compare to the unquoted name would probably\nalso fix the bizarre need for '\"Foo\"' exhibited above.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Mar 2001 19:10:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Small bug in pg_dump " }, { "msg_contents": "At 19:10 14/03/01 -0500, Tom Lane wrote:\n>It might even make\n>sense for an ArchiveEntry to store both forms of the name, and then\n>using code could just select the form wanted instead of calling\n>fmtId repeatedly. Not sure.\n>\n>BTW, making the -t switch compare to the unquoted name would probably\n>also fix the bizarre need for '\"Foo\"' exhibited above.\n\nI think these are both fixed now; the SQL in the ArchiveEntry call still\nuses the formatted names, but the name in the TOC entry is unformatted in\nall cases except functions now. The TOC entry name is used in the -t switch\nand in disabling triggers etc.\n\nThis does make me wonder (again) about some kind of pg_dump regression\ntest. ISTM that a test should be doable by building a DB from data files,\ndumping it, restoring it, then using COPY to extract the data back to files\n(and probably doing a sort on the output). We could also store a BLOB or\ntwo. Then we compare the initial data files with the final ones. This will\ntest the integrity of the data & BLOB dump/restore. We then also need to\ntest the metadata integrity somehow, probably by dumping & restoring the\nregression DB, but we'd need to modify the pg_dump output somewhat, I think.\n\n\n\n\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 19 Mar 2001 13:50:41 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Re: Small bug in pg_dump " }, { "msg_contents": "> This does make me wonder (again) about some kind of pg_dump regression\n> test. ISTM that a test should be doable by building a DB from data files,\n> dumping it, restoring it, then using COPY to extract the data back to files\n> (and probably doing a sort on the output). We could also store a BLOB or\n> two. Then we compare the initial data files with the final ones. This will\n> test the integrity of the data & BLOB dump/restore. We then also need to\n> test the metadata integrity somehow, probably by dumping & restoring the\n> regression DB, but we'd need to modify the pg_dump output somewhat, I think.\n\nYes, I have often caught dump bugs by doing a COPY/restore/COPY and\ncomparing the two COPY files. Not sure if that is what you meant.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Mar 2001 09:59:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Small bug in pg_dump" } ]
[ { "msg_contents": "but it's hard to notice eg misprints in 44K file -:)\nI think we should apply patches and hard test\nrecovering for a few days (power off/pg_ctl -m i stop\nwith dozens update transactions).\n\nVadim\n", "msg_date": "Mon, 12 Mar 2001 15:57:55 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "xlog patches reviewed" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> but it's hard to notice eg misprints in 44K file -:)\n> I think we should apply patches and hard test\n> recovering for a few days (power off/pg_ctl -m i stop\n> with dozens update transactions).\n\nOK. I haven't finished putting together an xlog-reset utility quite\nyet, but I will go ahead and apply what I have.\n\nCAUTION TO ONLOOKERS: if you update from CVS after I make this patch,\nyou will need to initdb!! Wait around for the log-reset utility if you\nare running a database you don't want to initdb. I should have that in\nanother day or so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 19:20:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog patches reviewed " }, { "msg_contents": "On Mon, 12 Mar 2001, Mikheev, Vadim wrote:\n\n> but it's hard to notice eg misprints in 44K file -:)\n> I think we should apply patches and hard test\n> recovering for a few days (power off/pg_ctl -m i stop\n> with dozens update transactions).\n\nif this is the case, can we look at applying that patch tonight, give ppl\ntill Friday to test and put out a RC1 depending on the results?\n\n\n", "msg_date": "Mon, 12 Mar 2001 20:24:37 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: xlog patches reviewed" }, { "msg_contents": "The Hermit Hacker writes:\n\n> if this is the case, can we look at applying that patch tonight, give ppl\n> till Friday to test and put out a RC1 depending on the results?\n\nThis 
should probably be called beta6, given that there is a lot of new\ncode in it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 13 Mar 2001 20:31:54 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: xlog patches reviewed" } ]
[ { "msg_contents": "> > FSYNC: 257tps\n> > O_DSYNC: 333tps \n> > \n> > Just(?) 30% faster, -:(\n> \n> First of all, if you ask me, that is one hell of an improvement :-)\n\nOf course -:) But tfsync tests were more promising -:)\nProbably we should update XLogWrite to write() more than 1 block,\nbut Tom should apply his patches first (btw, did you implement\n\"log file size\" condition for checkpoints, Tom?).\n\nVadim\n", "msg_date": "Mon, 12 Mar 2001 16:02:49 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: AW: AW: WAL does not recover gracefully from ou\n\tt-of -dis k-sp ace" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Probably we should update XLogWrite to write() more than 1 block,\n> but Tom should apply his patches first (btw, did you implement\n> \"log file size\" condition for checkpoints, Tom?).\n\nYes I did. There's a variable now to specify a checkpoint every N\nlog segments --- I figured that was good enough resolution, and it\nallowed the test to be made only when we're rolling over to a new\nsegment, so it's not in a time-critical path.\n\nIf you're happy with what I did so far, I'll go ahead and commit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 19:15:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL does not recover gracefully from ou t-of -dis\n\tk-sp ace" } ]
[ { "msg_contents": ">> > It is possible to build a logging system so that you \n>> > mostly don't care when the data blocks get written;\n>> > a particular data block on disk is considered garbage\n>> > until the next checkpoint, so that you\n>> >\n> > How to know if a particular data page was modified if there is no\n> > log record for that modification?\n> > (Ie how to know where is garbage? -:))\n> \n> You could store a log sequence number in the data page header \n> that indicates the log address of the last log record that was\n> applied to the page.\n\nWe do. But how to know at the time of recovery that there is\na page in multi-Gb index file with tuple pointing to uninserted\ntable row?\nWell, actually we could make some improvements in this area:\na buffer without \"first after checkpoint\" modification could be\nwritten without flushing log records: entire block will be\nrewritten on recovery. Not sure how much we get, though -:)\n\nVadim\n", "msg_date": "Mon, 12 Mar 2001 16:25:17 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL & SHM principles" } ]
[ { "msg_contents": "> > but it's hard to notice eg misprints in 44K file -:)\n> > I think we should apply patches and hard test\n> > recovering for a few days (power off/pg_ctl -m i stop\n> > with dozens update transactions).\n> \n> if this is the case, can we look at applying that patch \n> tonight, give ppl till Friday to test and put out a RC1\n> depending on the results?\n\nI think so.\n\nVadim\n", "msg_date": "Mon, 12 Mar 2001 16:27:50 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: xlog patches reviewed" }, { "msg_contents": ">> if this is the case, can we look at applying that patch \n>> tonight, give ppl till Friday to test and put out a RC1\n>> depending on the results?\n\nPatch committed. There are still some loose ends to clean up:\n\n* I need to finish making an xlog-reset utility for contrib.\n\n* I stubbed out shmctl(IPC_STAT) in the BeOS and QNX4 emulations of\n SysV shared memory (src/backend/port/beos/shm.c,\n src/backend/port/qnx4/shm.c). This means that the new code to detect\n postmaster-dead-but-old-backends-still-running will never detect any\n problem on those platforms. Perhaps people who use those platforms\n can test and contribute real implementations?\n\nHowever, these shouldn't affect testing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 21:03:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog patches reviewed " }, { "msg_contents": "On Mon, 12 Mar 2001, Tom Lane wrote:\n\n> >> if this is the case, can we look at applying that patch\n> >> tonight, give ppl till Friday to test and put out a RC1\n> >> depending on the results?\n>\n> Patch committed. 
There are still some loose ends to clean up:\n>\n> * I need to finish making an xlog-reset utility for contrib.\n>\n> * I stubbed out shmctl(IPC_STAT) in the BeOS and QNX4 emulations of\n> SysV shared memory (src/backend/port/beos/shm.c,\n> src/backend/port/qnx4/shm.c). This means that the new code to detect\n> postmaster-dead-but-old-backends-still-running will never detect any\n> problem on those platforms. Perhaps people who use those platforms\n> can test and contribute real implementations?\n>\n> However, these shouldn't affect testing.\n\nGreat, then let's go with a RC1 on Friday and see if we can get 7.1 out in\n'02 :)\n\n\n", "msg_date": "Mon, 12 Mar 2001 22:29:26 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: xlog patches reviewed " } ]
[ { "msg_contents": "There is another loose end that I forgot I needed to discuss with you.\n\nxlog.c's ReadRecord formerly contained code that would zero out the rest\nof the log segment (and delete the next log segment, if any) upon\ndetecting a missing or corrupted xlog record. I removed that code\nbecause I considered it horribly dangerous where it was. If there is\nanything wrong with either the xlog or pg_control's pointers to it,\nthat code was quite capable of wiping out all hope of recovery *and*\nall evidence of what went wrong.\n\nI think it's really bad to automatically destroy log data, especially\nwhen we do not yet know if we are capable of recovering. If we need\nthis functionality, it should be invoked only at the completion of\nStartupXLOG, after we have finished the recovery phase. However,\nI'd be a lot happier if we could avoid wholesale zeroing at all.\n\nI presume the point of this code was that if we recover and then suffer\na later crash at a point where we've just written an xlog record that\nexactly fills an xlog page, a subsequent scan of the log might continue\non from that point and pick up xlog records from the prior (failed)\nsystem run. Is there a way to guard against that scenario without\nhaving to zero out data during recovery?\n\nOne thought that comes to mind is to store StartUpID in XLOG page\nheaders, and abort log scanning if we come to a page with StartUpID\nless than what came before. Is that secure/sufficient? Is there\na better way?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 22:50:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "xlog loose ends, continued" } ]
[ { "msg_contents": "I wrote a couple days ago:\n\n: BTW, can we really trust checkpoint to mean that all data file changes\n: are down on disk? I see that the actual implementation of checkpoint is\n: \n: \twrite out all dirty shmem buffers;\n: \tsync();\n: \tif (IsUnderPostmaster)\n: \t\tsleep(2);\n: \tsync();\n: \twrite checkpoint record to XLOG;\n: \tfsync XLOG;\n: \n: Now HP's man page for sync() says\n: \n: The writing, although scheduled, is not necessarily complete upon\n: return from sync.\n\nThe more I think about this, the more disturbed I get. It seems clear\nthat this sequence is capable of writing out the checkpoint record\nbefore all dirty data pages have reached disk. If we suffer a crash\nbefore the data pages do reach disk, then on restart we will not realize\nwe need to redo the changes to those pages. This seems an awfully large\nhole for what is claimed to be a bulletproof xlog technology.\n\nI feel that checkpoint should not use sync(2) at all, but should instead\ndepend on fsync'ing the data files --- since fsync doesn't return until\nthe write is done, this is considerably more secure. (Of course disk\ndrive write reordering could still mess you up, but at least\nkernel-level failures won't put your data at risk.)\n\nOne way to do this would be to maintain a hashtable in shared memory\nof data files that have been written to since the last checkpoint.\nWe'd need to set a limit on the size of the hashtable (say a few hundred\nentries) --- if it overflows, remove the oldest entry and fsync that\nfile before forgetting it. However that seems moderately complex,\nand probably too risky to do just before release. 
Spinlock contention\non the hashtable could be a problem too.\n\nI thought about having checkpoint physically scan the $PGDATA/base/*\ndirectories and fsync every file found in them, but that seems mighty\nslow and ugly.\n\nIs there another way?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 23:11:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "xlog checkpoint depends on sync() ... seems unsafe" } ]
[ { "msg_contents": "> I presume the point of this code was that if we recover and \n> then suffer\n> a later crash at a point where we've just written an xlog record that\n> exactly fills an xlog page, a subsequent scan of the log \n> might continue\n> on from that point and pick up xlog records from the prior (failed)\n> system run. Is there a way to guard against that scenario without\n> having to zero out data during recovery?\n> \n> One thought that comes to mind is to store StartUpID in XLOG page\n> headers, and abort log scanning if we come to a page with StartUpID\n> less than what came before. Is that secure/sufficient? Is there\n> a better way?\n\nThis code was from the old days when there was no CRC in log records.\nShould we try to read log up to the *physical end* - ie end of last\nlog file - regardless invalid CRC-s/zero pages with attempt to\nre-apply interim valid records? (Or do we already do this?)\nThis way we'll know where is actual end of log (last valid record)\nto begin production from there. (Unfortunately, we'll have to read\nempty files pre-created by checkpointer -:().\nAnyway I like idea of StartUpID in page headers - this will help\nif some log files disappeared. Should we add CRC to page header?\nHm, maybe XLogFileInit should initialize files with StartUpID & CRC\nin pages? We would avoid reading empty files.\n\nVadim\n", "msg_date": "Mon, 12 Mar 2001 20:24:26 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: xlog loose ends, continued" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> This code was from the old days when there was no CRC in log records.\n\nAh, right. The CRC makes things safer ... but there's still a risk\nthat old log pages could look like a valid continuation.\n\n> Should we try to read log up to the *physical end* - ie end of last\n> log file - regardless invalid CRC-s/zero pages with attempt to\n> re-apply interim valid records? 
(Or do we already do this?)\n\nThat doesn't seem like a good idea --- once we fail to read an XLOG\nrecord, it's probably best to stop there rather than continue on.\nI think we want to try for a consistent recovery to a past point in\ntime (ie, wherever the xlog gap is) not a partial recovery to a later\ntime.\n\n> Anyway I like idea of StartUpID in page headers - this will help\n> if some log files disappeared. Should we add CRC to page header?\n\nThat seems like overkill. I was hoping to keep the page header overhead\nat eight bytes. We could do that either by storing just the two LSBs\nof StartUpID (and doing the sequence checking mod 64k) or by reducing\nthe magic number to two bytes so there's room for four bytes of\nStartUpID. I think I like the first alternative better --- comments?\n\n> Hm, maybe XLogFileInit should initialize files with StartUpID & CRC\n> in pages? We would avoid reading empty files.\n\nWe already stop when we hit a zeroed page (because it's not got the\nright magic number). That seems sufficient.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 23:41:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog loose ends, continued " } ]
[ { "msg_contents": "> The more I think about this, the more disturbed I get. It seems clear\n> that this sequence is capable of writing out the checkpoint record\n> before all dirty data pages have reached disk. If we suffer a crash\n> before the data pages do reach disk, then on restart we will \n> not realize we need to redo the changes to those pages.\n> This seems an awfully large hole for what is claimed to be\n> a bulletproof xlog technology.\n> \n> I feel that checkpoint should not use sync(2) at all, but \n> should instead depend on fsync'ing the data files --- since\n> fsync doesn't return until the write is done, this is considerably\n> more secure.\n\nI never was happy about sync() of course. This is just another reason\nto re-write smgr. I don't know how useful is second sync() call, but\non Solaris (and I believe on many other *NIXes) rc0 calls it\nthree times, -:) Why?\nMaybe now, with two checkpoints in log, we should start redo from\noldest one? This will increase recovery time of course -:(\n\nVadim\n", "msg_date": "Mon, 12 Mar 2001 20:45:27 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: xlog checkpoint depends on sync() ... seems unsafe" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Maybe now, with two checkpoints in log, we should start redo from\n> oldest one? This will increase recovery time of course -:(\n\nYeah, and it doesn't even solve the problem: consider a crash just\nafter we've written a shutdown checkpoint record. On restart,\nwe won't think we need to redo anything at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 23:48:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog checkpoint depends on sync() ... seems unsafe " }, { "msg_contents": "On Mon, 12 Mar 2001, Mikheev, Vadim wrote:\n\n> to re-write smgr. 
I don't know how useful is second sync() call, but\n> on Solaris (and I believe on many other *NIXes) rc0 calls it\n> three times, -:) Why?\n\nThe idea is, that by the time the last sync has run, the first sync will\nbe done flushing the buffers to disk. - this is what we were told by the\nIBM engineers when I worked tier-2/3 AIX support at IBM.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Mon, 12 Mar 2001 22:57:21 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": false, "msg_subject": "Re: RE: xlog checkpoint depends on sync() ... seems unsafe" } ]
[ { "msg_contents": "> > Should we try to read log up to the *physical end* - ie end of last\n> > log file - regardless invalid CRC-s/zero pages with attempt to\n> > re-apply interim valid records? (Or do we already do this?)\n> \n> That doesn't seem like a good idea --- once we fail to read an XLOG\n> record, it's probably best to stop there rather than continue on.\n> I think we want to try for a consistent recovery to a past point in\n> time (ie, wherever the xlog gap is) not a partial recovery to a later\n> time.\n\nNo way for consistent recovery if there is gap in log due to\ndisk write re-ordering anyway (and we can't know what was\nthe reason of the gap). I thought that you wanted apply as much of log\nas we can. If you don't then I missed your point in first message:\n\n> xlog.c's ReadRecord formerly contained code that would zero \n> out the rest of the log segment (and delete the next log segment,\n> if any) upon detecting a missing or corrupted xlog record.\n> I removed that code because I considered it horribly dangerous\n> where it was. If there is anything wrong with either the xlog or\n> pg_control's pointers to it, that code was quite capable of wiping\n> out all hope of recovery *and* all evidence of what went wrong.\n ^^^^^^^^^^^^^^^^^^^^^^^^\n\nSo, if we are not going to re-apply as much valid records as\nwe can read from log then zeroing is no more dangerous than\nSUI in headers. But I totaly agreed that SUI is much better.\n\n> > Anyway I like idea of StartUpID in page headers - this will help\n> > if some log files disappeared. Should we add CRC to page header?\n> \n> That seems like overkill. I was hoping to keep the page \n> header overhead at eight bytes. We could do that either by storing just\n> the two LSBs of StartUpID (and doing the sequence checking mod 64k) or\n> by reducing the magic number to two bytes so there's room for four bytes\nof\n> StartUpID. 
I think I like the first alternative better --- comments?\n\nI don't think a few additional bytes in header is a problem.\nBTW, why not use CRC32 in header instead of magic?\nOr just StartUpID instead of magic if you don't want to calculate\nCRC for header - xlp_magic doesn't seem to be more useful than SUI.\n\n> > Hm, maybe XLogFileInit should initialize files with StartUpID & CRC\n> > in pages? We would avoid reading empty files.\n> \n> We already stop when we hit a zeroed page (because it's not got the\n> right magic number). That seems sufficient.\n\nWhat if the next page after zeroed one is correct (due to write\nre-ordering)?\n(But I take back SUI+CRC in XLogFileInit - useless -:))\n\nVadim\n", "msg_date": "Mon, 12 Mar 2001 21:38:07 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: xlog loose ends, continued " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> That doesn't seem like a good idea --- once we fail to read an XLOG\n>> record, it's probably best to stop there rather than continue on.\n>> I think we want to try for a consistent recovery to a past point in\n>> time (ie, wherever the xlog gap is) not a partial recovery to a later\n>> time.\n\n> No way for consistent recovery if there is gap in log due to\n> disk write re-ordering anyway (and we can't know what was\n> the reason of the gap). I thought that you wanted apply as much of log\n> as we can. If you don't then I missed your point in first message:\n\n>> xlog.c's ReadRecord formerly contained code that would zero \n>> out the rest of the log segment (and delete the next log segment,\n>> if any) upon detecting a missing or corrupted xlog record.\n>> I removed that code because I considered it horribly dangerous\n>> where it was. 
If there is anything wrong with either the xlog or\n>> pg_control's pointers to it, that code was quite capable of wiping\n>> out all hope of recovery *and* all evidence of what went wrong.\n> ^^^^^^^^^^^^^^^^^^^^^^^^\n\nWhat I was thinking about in that last paragraph was manual analysis and\nrecovery. I don't think it's a good idea for automatic system startup\nto skip over gaps in the log.\n\n> So, if we are not going to re-apply as much valid records as\n> we can read from log then zeroing is no more dangerous than\n> SUI in headers. But I totaly agreed that SUI is much better.\n\nOkay, I will change the page headers to include SUI (or the low-order\nbits of it anyway), and make ReadRecord stop if it notices a backwards\njump in SUI.\n\n> I don't think a few additional bytes in header is a problem.\n> BTW, why not use CRC32 in header instead of magic?\n\nThere is so little changeable information in a page header that a CRC\nwouldn't be much more than an eight-byte magic number. And we don't\nneed eight bytes worth of magic number (even four is more than enough,\nreally). So I'd just as soon keep the headers simple and small.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 12:05:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog loose ends, continued " } ]
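The "sequence checking mod 64k" Tom proposes in the thread above is ordinary serial-number arithmetic on the two low-order bytes of StartUpID. A minimal sketch follows (illustrative Python only — the real check would live in xlog.c's page-read path, and the function name here is hypothetical):

```python
SUI_MOD = 1 << 16  # only the two LSBs of StartUpID are stored per page

def sui_jumped_backwards(prev_sui16, cur_sui16):
    # Interpret the 16-bit circular difference as signed: a difference
    # in the upper half of the ring means cur lags prev, i.e. the page
    # carries an older StartUpID than the one just read -- the signal
    # ReadRecord would treat as the end of valid log.
    diff = (cur_sui16 - prev_sui16) % SUI_MOD
    return diff > SUI_MOD // 2
```

Note that genuine wraparound (0xFFFF followed by 0x0000) still reads as a forward step, so the check only misfires if more than 32k startups happen between two pages.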
[ { "msg_contents": "\n> Anyway I like idea of StartUpID in page headers - this will help\n\nCan you please describe StartUpID for me ? \nIdeal would be a stamp that has the last (smallest open) XID, or something else \nthat has more or less timestamp characteristics (without the actual need of wallclock)\nin regard to the WAL.\nThis could then also be used to scan all pages for modification since\nlast backup, to make incremental backups possible. (note, that incremental\nbackup is not WAL backup)\n\nAndreas\n", "msg_date": "Tue, 13 Mar 2001 11:52:30 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: RE: xlog loose ends, continued" }, { "msg_contents": "> > Anyway I like idea of StartUpID in page headers - this will help\n> \n> Can you please describe StartUpID for me ? \n> Ideal would be a stamp that has the last (smallest open) XID, or something else \n> that has more or less timestamp characteristics (without the actual need of wallclock)\n> in regard to the WAL.\n\nStartUpID counts database startups and so has timestamp characteristics.\nActually, idea is to use SUI in future to allow reusing XIDs after startup: seeing\nold SUI in data pages we'll know that all transaction on this page was committed\n\"long ago\" (ie visible from MVCC POV). This requires UNDO, of course.\n\n> This could then also be used to scan all pages for modification since\n> last backup, to make incremental backups possible. (note, that incremental\n> backup is not WAL backup)\n\nWe can scan log itself to get all pages modified since last backup or whatever\npoint we want - thanks to your idea about data pages backup.\n\nVadim\n\n\n", "msg_date": "Tue, 13 Mar 2001 08:02:00 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: RE: xlog loose ends, continued" } ]
[ { "msg_contents": "I don't want to look a gift horse in the mouth, but it seems to me that the \nperformance monitor should wait until the now-famous query tree redesign \nwhich will allow for sets from functions. I realize that the shared memory \nrequirements might be a bit large, but somehow Oracle accomplishes this \nnicely, with some > 50 views (V$ACCESS through V$WAITSTAT) which can be \nqueried, usually via SQL*DBA, for performance statistics. More then 50 \nperformance views may be over-kill, but having the ability to fetch the \nperformance statistics with normal queries sure is nice. Perhaps a \npostmaster option which would enable/disable the use of accumulating \nperformance statistics in shared memory might ease the hesitation against \nit?\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tDenis Perchine [SMTP:dyp@perchine.com]\n\nThat's bad. Cause it will be unuseful for people having databases far \naway...\nLike me... :-((( Another point is that it is a little bit strange to have\nX-Window on machine with database server... At least if it is not for play, \nbut production one...\n\nAlso there should be a possibility of remote monitoring of the database. \nBut\nthat's just dream... :-)))\n\n--\nSincerely Yours,\nDenis Perchine\n\n", "msg_date": "Tue, 13 Mar 2001 05:58:12 -0500", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: Performance monitor" }, { "msg_contents": "> I don't want to look a gift horse in the mouth, but it seems to me that the \n> performance monitor should wait until the now-famous query tree redesign \n> which will allow for sets from functions. I realize that the shared memory \n> requirements might be a bit large, but somehow Oracle accomplishes this \n> nicely, with some > 50 views (V$ACCESS through V$WAITSTAT) which can be \n> queried, usually via SQL*DBA, for performance statistics. 
More then 50 \n> performance views may be over-kill, but having the ability to fetch the \n> performance statistics with normal queries sure is nice. Perhaps a \n> postmaster option which would enable/disable the use of accumulating \n> performance statistics in shared memory might ease the hesitation against \n> it?\n\nI don't think query design is an issue here. We can already create\nviews to do such things. Right now, pgmonitor simply uses 'ps'. and\nuses gdb to attach to the running process and show the query being\nexecuted. For 7.2, I hope to improve it. I like the shared memory\nideas, and the ability to use a query rather than accessing shared\nmemory directly.\n\nSeems we should have each backend store query/stat information in shared\nmemory, and create special views to access that information. We can\nrestrict such views to the postgres super-user.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 10:00:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" } ]
[ { "msg_contents": "The logical operators '&', '|', '<<' and '>>' as documented on the page\nhttp://www.postgresql.org/devel-corner/docs/postgres/functions.html don't\nappear to work as advertised.\n\ndarcy=# SELECT 91 & 15;\nERROR: Unable to identify an operator '&' for types 'int4' and 'int4'\n You will have to retype this query using an explicit cast\n\nShould this be fixed or should the documentation be changed?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 13 Mar 2001 07:18:36 -0500 (EST)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Logical operators don't work" }, { "msg_contents": "On Tue, Mar 13, 2001 at 07:18:36AM -0500, D'Arcy J.M. Cain wrote:\n> The logical operators '&', '|', '<<' and '>>' as documented on the page\n> http://www.postgresql.org/devel-corner/docs/postgres/functions.html don't\n> appear to work as advertised.\n> \n> darcy=# SELECT 91 & 15;\n> ERROR: Unable to identify an operator '&' for types 'int4' and 'int4'\n> You will have to retype this query using an explicit cast\n> \n> Should this be fixed or should the documentation be changed?\n\nWhen did you do initdb? If it was more than couple of months\nago you may not have them in your system catalogs?\n\nmarko=# SELECT 4 & 4;\n ?column? \n ----------\n 4\n(1 row)\n\nmarko=# SELECT 4 << 4;\n ?column? \n----------\n 64\n(1 row)\n\n\n-- \nmarko\n\n", "msg_date": "Tue, 13 Mar 2001 14:43:42 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: Logical operators don't work" } ]
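For anyone verifying their catalogs after an initdb, the documented int4 operators are plain bitwise AND/OR and shifts; Python's integer operators behave identically on non-negative values, so the expected results of the queries in this thread are easy to check:

```python
def bitwise_demo():
    # 91 = 0b1011011, 15 = 0b0001111
    return {
        "and": 91 & 15,   # keeps the low four bits of 91 -> 0b0001011
        "or":  91 | 15,   # -> 0b1011111
        "shl": 4 << 4,    # matches Marko's psql output of 64
        "shr": 64 >> 2,   # shifting right divides by 4
    }
```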
[ { "msg_contents": "\nAre there plans to make 'createdb' support template0 before 7.1? If so,\nI'll amend backup.sgml...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 14 Mar 2001 00:32:34 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "createdb and template0?" }, { "msg_contents": "At 00:32 14/03/01 +1100, Philip Warner wrote:\n>\n>Are there plans to make 'createdb' support template0 before 7.1? If so,\n>I'll amend backup.sgml...\n>\n\nJust looked into CVS, and support seems to be there...\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 14 Mar 2001 00:44:07 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Re: createdb and template0?" } ]
[ { "msg_contents": "> > It will be tck/tk, so I guess X only.\n> \n> Good point. A typical DB server -- where is performance important -- \n> has install Xwin?\n> \n> BTW, I hate Oracle 8.x.x because has X+java based installer, but some\n> my servers hasn't monitor and keyboard let alone to Xwin.\n> \n> What implement performance monitor as client/server application where\n> client is some shared lib? This solution allows to create more clients\n> for more differents GUI. I know... it's easy planning, but the other \n> thing is programming it :-)\n\nMy idea is that they can telnet into the server machine and do remote-X\nwith the application. Just set the DISPLAY variable and it should work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 09:56:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> \n> My idea is that they can telnet into the server machine and do remote-X\n> with the application. Just set the DISPLAY variable and it should work.\n> \n\nWell, actually you would want to tunnel your X session through ssh if\nsecurity of the database server is of any importance... But this works\nfine on high bandwidth connections, but X is a real pain if you are sitting\nwith low bandwidth(e.g. 
cellphone connection when you're in the middle of\nnowhere on your favorite vacation ;-)\n\nregards, \n\n\tGunnar\n", "msg_date": "13 Mar 2001 16:36:25 +0100", "msg_from": "Gunnar R|nning <gunnar@candleweb.no>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> My idea is that they can telnet into the server machine and do remote-X\n> with the application. Just set the DISPLAY variable and it should work.\n\nRemote X pretty well sucks in the real world. Aside from speed issues\nthere is the little problem of firewalls filtering out X connections.\n\nIf you've got ssh running then you can tunnel the X connection through\nthe ssh connection, which fixes the firewall problem, but it makes the\nspeed problem worse. And getting ssh plus X forwarding working is not\nsomething I want to have to hassle with when my remote database is down.\n\nIf you are thinking of telnet-based remote admin then I suggest you get\nout your curses man page and do up a curses GUI. (No smiley... I'd\nseriously prefer that to something that depends on remote X.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 11:01:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > My idea is that they can telnet into the server machine and do remote-X\n> > with the application. Just set the DISPLAY variable and it should work.\n> \n> Remote X pretty well sucks in the real world. Aside from speed issues\n> there is the little problem of firewalls filtering out X connections.\n> \n> If you've got ssh running then you can tunnel the X connection through\n> the ssh connection, which fixes the firewall problem, but it makes the\n> speed problem worse. 
And getting ssh plus X forwarding working is not\n> something I want to have to hassle with when my remote database is down.\n> \n> If you are thinking of telnet-based remote admin then I suggest you get\n> out your curses man page and do up a curses GUI. (No smiley... I'd\n> seriously prefer that to something that depends on remote X.)\n\nAren't there tools to allow tcl/tk on non-X displays. I thought SCO had\nsomething.\n\nFYI, about the getrusage() idea, seems that only works for the current\nprocess or it its children, so each backend would have to update its own\nstatistics. Seems expensive compared to having 'ps do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 11:04:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" } ]
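Bruce's objection about getrusage() is that RUSAGE_SELF statistics are scoped to the calling process, so a monitor cannot pull another backend's numbers; each backend would have to sample itself and publish the result (e.g. into shared memory). The per-process sampling half is sketched below, using Python's resource wrapper purely for illustration of the same syscall:

```python
import resource

def sample_own_stats():
    # RUSAGE_SELF: statistics for this process only -- a separate
    # monitor process calling this would see its own numbers,
    # never the backend's.
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "user_cpu_sec":   ru.ru_utime,
        "system_cpu_sec": ru.ru_stime,
        "max_rss":        ru.ru_maxrss,
    }
```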
[ { "msg_contents": "\n> > > Anyway I like idea of StartUpID in page headers - this will help\n> > \n> > Can you please describe StartUpID for me ? \n> > Ideal would be a stamp that has the last (smallest open) XID, or something else \n> > that has more or less timestamp characteristics (without the actual need of wallclock)\n> > in regard to the WAL.\n> \n> StartUpID counts database startups and so has timestamp characteristics.\n> Actually, idea is to use SUI in future to allow reusing XIDs after startup: seeing\n> old SUI in data pages we'll know that all transaction on this page was committed\n> \"long ago\" (ie visible from MVCC POV). This requires UNDO, of course.\n\nFirst thanx for the description, but db startup would only count to 5-7 per year :-),\nis that sufficient ? It hardly sounds like anything useful to include in page header.\nWhat about the xlog id, that is also used for xlog file name, but I still think a xid would \nbe the best candidate.\n\nAndreas\n", "msg_date": "Tue, 13 Mar 2001 17:14:57 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: RE: xlog loose ends, continued" }, { "msg_contents": "> > StartUpID counts database startups and so has timestamp characteristics.\n> > Actually, idea is to use SUI in future to allow reusing XIDs after startup: seeing\n> > old SUI in data pages we'll know that all transaction on this page was committed\n> > \"long ago\" (ie visible from MVCC POV). This requires UNDO, of course.\n> \n> First thanx for the description, but db startup would only count to 5-7 per year :-),\n> is that sufficient ? 
It hardly sounds like anything useful to include in page header.\n\nIt will be sufficient if DB will not use all 2^32 XIDs without shutdown.\nRemoving pg_log *segments* for old XIDs is another story.\n\n> What about the xlog id, that is also used for xlog file name, but I still think a xid would \n> be the best candidate.\n\nlogid would be ok too, xid is not - we have to shorten xids lifetime in near future.\n\nVadim\n\n\n", "msg_date": "Tue, 13 Mar 2001 08:39:52 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: RE: xlog loose ends, continued" } ]
[ { "msg_contents": "\n> > > StartUpID counts database startups and so has timestamp characteristics.\n> > > Actually, idea is to use SUI in future to allow reusing XIDs after startup: seeing\n> > > old SUI in data pages we'll know that all transaction on this page was committed\n> > > \"long ago\" (ie visible from MVCC POV). This requires UNDO, of course.\n> > \n> > First thanx for the description, but db startup would only count to 5-7 per year :-),\n> > is that sufficient ? It hardly sounds like anything useful to include in page header.\n> \n> It will be sufficient if DB will not use all 2^32 XIDs without shutdown.\n\nI liked the xid wraparound idea, won't that be sufficient here too ?\nI don't like the idea to reuse a xid sooner than absolutely necessary.\nThis would complicate the search for potentially inconsistent pages\nafter crash.\n\nAndreas\n", "msg_date": "Tue, 13 Mar 2001 17:57:17 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: RE: xlog loose ends, continued" } ]
[ { "msg_contents": "As long as I'm about to change the xlog page headers, I have another\nlittle idea. Wouldn't it be a good idea to allow three backup pages\nper xlog record, not only two? It seems like three pages would be\na natural requirement for logging operations like index page splits.\n\nWe could support as many as four pages per record, but that would mean\nhaving no free global bits in the info byte, which might be a bad idea.\nI think three page bits and one free bit is a good compromise.\nThoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 13:09:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Another little xlog change idea" } ]
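The arithmetic behind "three page bits and one free bit" in the proposal above: with four bits of the record's info byte reserved for per-record flags, backup-page markers and a remaining global bit share the low nibble. The bit values below are hypothetical stand-ins, not the actual 7.1 constants; they only illustrate the proposed layout:

```python
# Hypothetical bit assignments for the low nibble of the info byte
# (the real xlog constants differ; this sketches the layout only).
XLR_BKP_BLOCK_1 = 0x08
XLR_BKP_BLOCK_2 = 0x04
XLR_BKP_BLOCK_3 = 0x02   # the proposed third backup-page bit
XLR_INFO_FREE   = 0x01   # the one remaining global bit

def has_backup_block(info, n):
    # n = 1..3 selects successive bits downward from 0x08
    return bool(info & (0x10 >> n))
```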
[ { "msg_contents": "> >> xlog.c's ReadRecord formerly contained code that would zero \n> >> out the rest of the log segment (and delete the next log segment,\n> >> if any) upon detecting a missing or corrupted xlog record.\n> >> I removed that code because I considered it horribly dangerous\n> >> where it was. If there is anything wrong with either the xlog or\n> >> pg_control's pointers to it, that code was quite capable of wiping\n> >> out all hope of recovery *and* all evidence of what went wrong.\n> > ^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> What I was thinking about in that last paragraph was manual \n> analysis and recovery. I don't think it's a good idea for automatic\n> system startup to skip over gaps in the log.\n\nBut if we'll not try to read after gap then after restart system will\nnot notice gap and valid records after it and just rewrite log space\nwith new records. Not much chance for manual analysis - ppl will\nnot report any problems.\n\nVadim\n", "msg_date": "Tue, 13 Mar 2001 10:15:02 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: xlog loose ends, continued " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> What I was thinking about in that last paragraph was manual \n>> analysis and recovery. I don't think it's a good idea for automatic\n>> system startup to skip over gaps in the log.\n\n> But if we'll not try to read after gap then after restart system will\n> not notice gap and valid records after it and just rewrite log space\n> with new records. 
Not much chance for manual analysis - ppl will\n> not report any problems.\n\nThat'll be true in any case, unless we refuse to start up at all upon\ndetecting xlog corruption (which doesn't seem like the way to fly).\nNot sure what we can do about that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 13:20:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog loose ends, continued " }, { "msg_contents": "Maybe there should be an error message like :\n\n\"PostgreSQL has detected severe xlog corruption. Please fix this with\npg_recover (or similar) manually before restarting the database\"?\n\nGuess I'm suggesting a separate xlog recovery tool for \"bad cases\" of\nxlog corruption, so decisions can by manually made by a DBA where\nnecessary. Not everything has to be automatic I'm thinking. There are\nprobably times where the dBA would prefer behaviour that doesn't seem\nintuitive anyway.\n\nRegards and best wishes,\n\nJustin Clift\n\nTom Lane wrote:\n> \n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> >> What I was thinking about in that last paragraph was manual\n> >> analysis and recovery. I don't think it's a good idea for automatic\n> >> system startup to skip over gaps in the log.\n> \n> > But if we'll not try to read after gap then after restart system will\n> > not notice gap and valid records after it and just rewrite log space\n> > with new records. 
Not much chance for manual analysis - ppl will\n> > not report any problems.\n> \n> That'll be true in any case, unless we refuse to start up at all upon\n> detecting xlog corruption (which doesn't seem like the way to fly).\n> Not sure what we can do about that.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n", "msg_date": "Wed, 14 Mar 2001 12:10:52 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: Re: xlog loose ends, continued" } ]
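The policy debated in this thread reduces to a small decision procedure: replay records up to the first invalid one, then optionally keep scanning past the gap — not to replay, but to detect that valid records exist beyond it, which is evidence of corruption (e.g. from disk write reordering) worth surfacing rather than silently overwriting. A toy model, with records abstracted to CRC-valid booleans:

```python
def split_at_gap(records):
    # records: list of booleans, True meaning the xlog record's
    # CRC checked out
    replayable = 0
    while replayable < len(records) and records[replayable]:
        replayable += 1
    # Anything valid beyond the first bad record indicates a gap --
    # grounds to refuse normal operation instead of restarting silently.
    valid_past_gap = any(records[replayable + 1:])
    return replayable, valid_past_gap
```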
[ { "msg_contents": "> As long as I'm about to change the xlog page headers, I have another\n> little idea. Wouldn't it be a good idea to allow three backup pages\n> per xlog record, not only two? It seems like three pages would be\n> a natural requirement for logging operations like index page splits.\n\nOn index splits we don't backup left and right whole pages - just part\nof pages filled by index tuples. This saves us ~ 8K for indices with\nsmall keys.\n\n> We could support as many as four pages per record, but that would mean\n> having no free global bits in the info byte, which might be a \n> bad idea. I think three page bits and one free bit is a good compromise.\n> Thoughts?\n\nYou almost forget that we are in beta and should fix bugs, not implement\nfeatures which useless in current version -:) Add comments and that will\nbe enough, no?\n\nVadim\n", "msg_date": "Tue, 13 Mar 2001 10:25:41 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Another little xlog change idea" } ]
[ { "msg_contents": "> >> What I was thinking about in that last paragraph was manual \n> >> analysis and recovery. I don't think it's a good idea for automatic\n> >> system startup to skip over gaps in the log.\n> \n> > But if we'll not try to read after gap then after restart \n> > system will not notice gap and valid records after it and\n> > just rewrite log space with new records. Not much chance for\n> > manual analysis - ppl will not report any problems.\n> \n> That'll be true in any case, unless we refuse to start up at all upon\n> detecting xlog corruption (which doesn't seem like the way to fly).\n> Not sure what we can do about that.\n\nWhat I would refuse in the event of log corruption is continuing\nnormal database operations. It's ok to dump such database for manual\nrecovery, but continuing to use it is VERY BAD THING. The fact that\nusers will use inconsistent DB may become big headache for us - just\nimagine compains about index scans returning incorrect results\n(index tuples pointing to free heap space was left and then that space\nwas used for tuple with different keys).\n\nFailing to restart was bad but silent restart in the event of log\ncorruption is bad too. In first case we had at least chance to\ndiscover original problem.\n\nVadim\n", "msg_date": "Tue, 13 Mar 2001 10:39:47 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: xlog loose ends, continued " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> That'll be true in any case, unless we refuse to start up at all upon\n>> detecting xlog corruption (which doesn't seem like the way to fly).\n>> Not sure what we can do about that.\n\n> What I would refuse in the event of log corruption is continuing\n> normal database operations.\n\nHmm. We could do that if we had some notion of a read-only operating\nmode, perhaps. But we don't have one now and I don't want to add it\nfor 7.1. 
Can we agree to look at this more for 7.2?\n\nIf we did have that, it would make sense to scan the rest of the log\n(after the last valid XLOG record) to see if we find any more records.\nIf we do then --- whether they're valid or not --- we have a corrupted\nDB and we should go into the read-only state. But for the moment I\nthink it's best not to make such a scan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 13:53:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog loose ends, continued " }, { "msg_contents": "> >> That'll be true in any case, unless we refuse to start up at all upon\n> >> detecting xlog corruption (which doesn't seem like the way to fly).\n> >> Not sure what we can do about that.\n> > What I would refuse in the event of log corruption is continuing\n> > normal database operations.\n> Hmm. We could do that if we had some notion of a read-only operating\n> mode, perhaps. But we don't have one now and I don't want to add it\n> for 7.1. Can we agree to look at this more for 7.2?\n\nI'd like to have a readonly mode driven by integrity requirements for\ncorruption recovery for database tables, for replication, and (in the\nfuture) for distributed databases, so perhaps we can do a trial\nimplementation fairly soon. Not sure how it would impact the backend(s),\nbut istm that we might be able to do a first implementation for 7.1.x.\nI'll bring it up again when appropriate...\n\n - Thomas\n", "msg_date": "Wed, 14 Mar 2001 00:39:08 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: xlog loose ends, continued" }, { "msg_contents": "\"Mikheev, Vadim\" wrote:\n> \n> > >> What I was thinking about in that last paragraph was manual\n> > >> analysis and recovery. 
I don't think it's a good idea for automatic\n> > >> system startup to skip over gaps in the log.\n> >\n> > > But if we'll not try to read after gap then after restart\n> > > system will not notice gap and valid records after it and\n> > > just rewrite log space with new records. Not much chance for\n> > > manual analysis - ppl will not report any problems.\n> >\n> > That'll be true in any case, unless we refuse to start up at all upon\n> > detecting xlog corruption (which doesn't seem like the way to fly).\n> > Not sure what we can do about that.\n> \n> What I would refuse in the event of log corruption is continuing\n> normal database operations. \n\nLog corruption is never an unique cause of a recovery failure.\nIf there's a bug in redo stuff the result would also be a recovery\nfailure. Currently the redo stuff has to accomplish redo operations\ncompletely. No matter how trivial the bug may be, it's always serious\nunfortunately.\n\n> It's ok to dump such database for manual\n> recovery, but continuing to use it is VERY BAD THING. 
The fact that\n> users will use inconsistent DB may become big headache for us - just\n\n> imagine compains about index scans returning incorrect results\n> (index tuples pointing to free heap space was left and then that space\n> was used for tuple with different keys).\n> \n\nHmm this seems nothing worse than 7.0.\nI would complain if postmaster couldn't restart due to this reason.\nIIRC few ppl mind the (even system) index corruption.\n \n> Failing to restart was bad but silent restart in the event of log\n> corruption is bad too.\n\nAgreed.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 14 Mar 2001 11:42:11 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: RE: xlog loose ends, continued" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> >> That'll be true in any case, unless we refuse to start up at all upon\n> >> detecting xlog corruption (which doesn't seem like the way to fly).\n> >> Not sure what we can do about that.\n> \n> > What I would refuse in the event of log corruption is continuing\n> > normal database operations.\n> \n> Hmm. We could do that if we had some notion of a read-only operating\n> mode, perhaps. But we don't have one now and I don't want to add it\n> for 7.1. 
Can we agree to look at this more for 7.2?\n\nI'd love to see PostgreSQL have a read-only mode of some kind that let\nenquiry function against a possibly otherwise corrupted database,\nwithout the stress of worrying that you might be making things worse.\n\nI know other DB servers that have this sort of thing, and it has been a\nlife-saver for me on occasion to allow critical information to be\nextracted before you nuke it all and start over.\n\nCheers,\n\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Wed, 14 Mar 2001 18:13:35 +1300", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: Re: xlog loose ends, continued" } ]
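Vadim's "set some flag in shmem and disallow write ops" could be as small as a guard consulted by every write path. A stand-in sketch (a Python list in place of a shared-memory word; all names hypothetical):

```python
# Stand-in for a word in shared memory that startup recovery would
# set on detecting xlog corruption (hypothetical; no such mode in 7.1).
db_read_only = [False]

def check_write_allowed(operation):
    # Called at the top of every write path; reads remain unaffected,
    # so critical data can still be dumped out of a damaged database.
    if db_read_only[0]:
        raise PermissionError(
            "database is in read-only recovery mode; %s rejected" % operation)
```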
[ { "msg_contents": "> > It will be sufficient if DB will not use all 2^32 XIDs \n> > without shutdown.\n> \n> I liked the xid wraparound idea, won't that be sufficient here too ?\n> I don't like the idea to reuse a xid sooner than absolutely necessary.\n\nWe need it to reduce pg_log size requirements.\n\n> This would complicate the search for potentially inconsistent pages\n> after crash.\n\nThere is no such search currently and I can't imagine how/when/for what\nto do such search.\n?\n\nVadim\n", "msg_date": "Tue, 13 Mar 2001 10:45:16 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: RE: xlog loose ends, continued" } ]
[ { "msg_contents": "> > What I would refuse in the event of log corruption is continuing\n> > normal database operations.\n> \n> Hmm. We could do that if we had some notion of a read-only operating\n> mode, perhaps. But we don't have one now and I don't want to add it\n> for 7.1. Can we agree to look at this more for 7.2?\n\nWe need not in full support of read-only mode - just set some flag in\nshmem and disallow write ops. I think 7.1.1 or so is good for that.\n\nVadim\n", "msg_date": "Tue, 13 Mar 2001 11:05:17 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: xlog loose ends, continued " } ]
[ { "msg_contents": "Can somone improve the wording?\n\n\tThe system is shutting down.\n\nwhen the backend receives a SIGTERM. Seems we need some wording that\ncan apply to db shutdown and individual backend termination by\nadministrators.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 14:15:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Shutdown term" }, { "msg_contents": "Bruce Momjian writes:\n\n> Can somone improve the wording?\n>\n> \tThe system is shutting down.\n>\n> when the backend receives a SIGTERM. Seems we need some wording that\n> can apply to db shutdown and individual backend termination by\n> administrators.\n\n\tThe connection was terminated.\n\nAnd make the postmaster print out\n\n\tThe system is shutting down.\n\nbefore it sends out the SIGTERM's.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 13 Mar 2001 20:33:50 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Shutdown term" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> \tThe connection was terminated.\n> And make the postmaster print out\n> \tThe system is shutting down.\n> before it sends out the SIGTERM's.\n\nUnfortunately the postmaster is in no position to send any message to\nthe individual clients.\n\nMaybe we should forget the idea of having a single message to cover\nboth cases, and instead provide some flag in shared memory that the\npostmaster can set before it sends out SIGTERMs. 
Then the backends\nwould actually know why they got a SIGTERM and could emit\nmore-appropriate messages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 15:06:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shutdown term " }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > \tThe connection was terminated.\n> > And make the postmaster print out\n> > \tThe system is shutting down.\n> > before it sends out the SIGTERM's.\n> \n> Unfortunately the postmaster is in no position to send any message to\n> the individual clients.\n> \n> Maybe we should forget the idea of having a single message to cover\n> both cases, and instead provide some flag in shared memory that the\n> postmaster can set before it sends out SIGTERMs. Then the backends\n> would actually know why they got a SIGTERM and could emit\n> more-appropriate messages.\n\nSeems like overkill to me. We could have the postmaster use SIGQUIT for\ndb shutdown and leave SIGKILL for admin shutdown of individual backends.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 15:31:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Shutdown term" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems like overkill to me. We could have the postmaster use SIGQUIT for\n> db shutdown and leave SIGKILL for admin shutdown of individual backends.\n\nWrong... 
at least not with the current definitions of those signals!\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 15:34:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Shutdown term " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems like overkill to me. We could have the postmaster use SIGQUIT for\n> > db shutdown and leave SIGKILL for admin shutdown of individual backends.\n> \n> Wrong... at least not with the current definitions of those signals!\n\nI see you just changed them today:\n\n pqsignal(SIGTERM, die); /* cancel current query and exit */\n! pqsignal(SIGQUIT, quickdie); /* hard crash time */\n! pqsignal(SIGALRM, HandleDeadLock); /* check for deadlock after timeout *\n\nHow about:\n\n\tThe database is shutting down.\n\nand not mention server. The current 'system' has to be changed to\n'database server' anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 20:54:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Shutdown term" } ]
[ { "msg_contents": "\nHi,\n\nwhat's the postgresql equivalent of \n\nmysql_real_escape_string()\n\nto escape strings that are going to be passed to queries?\n\n(http://www.mysql.com/doc/n/o/node_641.html)\n\nThanks\n\nDaniel\n", "msg_date": "Tue, 13 Mar 2001 12:44:35 -0800", "msg_from": "Daniel Lopez <daniel@rawbyte.com>", "msg_from_op": true, "msg_subject": "Escaping strings" }, { "msg_contents": "\n> what's the postgresql equivalent of \n> \n> mysql_real_escape_string()\n> \n> to escape strings that are going to be passed to queries?\n\nThere doesn't seem to be a function to do this in libpq, which I find\nslightly odd.\n\nDBD::Pg has quote() function as per usual for perl's DBI, but that's\nnot a lot of help for C. For reference it only doubles single quote\ncharacters ' to '' and backslash characters \\ to \\\\.\n\nWhat I do -- and this may not be correct, so I encourage the more\nknowledgeable to speak up! -- is this:\n\n1. single quotes ' become '' (typical SQL)\n\n2. PostgreSQL supports backslash escape sequences, so unless your\n input uses these protect \\ as \\\\.\n\n3. I translate nul, formfeed, newline, and carriage return characters\n to \\0, \\f, \\n, and \\r respectively.\n\n In comparison mysql_real_escape_string() omits \\f but also escapes\n ^Z and \".\n\nFor binary data probably other control characters need to be escaped\nas well. I'm not clear on this yet, but with TOAST in 7.1 I'm sure\nthere'll be more interest in storing arbitary binary data.\n\nRegards,\n\nGiles\n\n\n\n", "msg_date": "Mon, 26 Mar 2001 07:01:33 +1000", "msg_from": "Giles Lean <giles@nemeton.com.au>", "msg_from_op": false, "msg_subject": "Re: Escaping strings " } ]
[ { "msg_contents": "Hi Guys,\n\nI'd just like to point out that for most secure installations, X is\nremoved from servers as part of the \"remove all software which isn't\nabsolutely needed.\"\n\nI know of Solaris machines which perform as servers with a total of 19\nOS packages installed, and then precompiled binaries of the server\nprograms are loaded onto these machines.\n\nRemoval of all not-absolutely-necessary software iss also the\nrecommended procedure by Sun for setting up server platforms.\n\nHaving something based on X will be useable by lots of people, just not\nby those who make the effort to take correct security precautions.\n\nRegards and best wishes,\n\nJustin Clift\n\nBruce Momjian wrote:\n> \n> > > It will be tck/tk, so I guess X only.\n> >\n> > Good point. A typical DB server -- where is performance important --\n> > has install Xwin?\n> >\n> > BTW, I hate Oracle 8.x.x because has X+java based installer, but some\n> > my servers hasn't monitor and keyboard let alone to Xwin.\n> >\n> > What implement performance monitor as client/server application where\n> > client is some shared lib? This solution allows to create more clients\n> > for more differents GUI. I know... it's easy planning, but the other\n> > thing is programming it :-)\n> \n> My idea is that they can telnet into the server machine and do remote-X\n> with the application. Just set the DISPLAY variable and it should work.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n", "msg_date": "Wed, 14 Mar 2001 12:00:12 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" } ]
[ { "msg_contents": "contrib/pg_resetxlog is checked in. It's not really complete for the\ninteresting cases (ie, recovery from damaged files) but it does handle\nthe simple case of upgrading from 7.1beta5 to current sources.\n\nTo use: see contrib/pg_resetxlog/README.pg_resetxlog.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 20:02:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "XLOG reset utility available" } ]
[ { "msg_contents": "\n> > > It will be sufficient if DB will not use all 2^32 XIDs \n> > > without shutdown.\n> > \n> > I liked the xid wraparound idea, won't that be sufficient here too ?\n> > I don't like the idea to reuse a xid sooner than absolutely necessary.\n> \n> We need it to reduce pg_log size requirements.\n\nYes, I know, that this would simplify pg_log size reduction, but imho we should \ntry hard to find other ways to reduce pg_log size.\n\nAndreas \n", "msg_date": "Wed, 14 Mar 2001 08:57:10 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: RE: xlog loose ends, continued" } ]
[ { "msg_contents": "> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > \tThe connection was terminated.\n\n\tThe connection has been terminated. ??\n\n> > > And make the postmaster print out\n> > > \tThe system is shutting down.\n> > > before it sends out the SIGTERM's.\n\nI like above. Imho it is sufficient if postmaster writes the \"The system is shutting down.\"\nto the log. Clients get the other message. That is how I interpreted Peter's message also.\n\nAndreas\n", "msg_date": "Wed, 14 Mar 2001 09:08:50 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Shutdown term" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > > \tThe connection was terminated.\n> \n> \tThe connection has been terminated. ??\n> \n> > > > And make the postmaster print out\n> > > > \tThe system is shutting down.\n> > > > before it sends out the SIGTERM's.\n> \n> I like above. Imho it is sufficient if postmaster writes the \"The system is shutting down.\"\n> to the log. Clients get the other message. That is how I interpreted Peter's message also.\n\nOK, I phoned Tom and we agreed on this wording:\n\n\tThis connection has been terminated by the administrator\n\nComments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Mar 2001 10:13:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Shutdown term" }, { "msg_contents": "\n>OK, I phoned Tom and we agreed on this wording:\n>\n> This connection has been terminated by the administrator\n>\n>Comments?\n\nThis connection has been terminated by an administrator\n(there may be more than one...) 
:)\n\nOther than that it's informative enough.\n\nOTOH, I had a small thought on this.\n\nIf you had a messaging scheme to print to clients when a signal was \nreceived, is there the possibility of more informative messages perhaps \nthat could be sent by the pg_ctl program through the postmaster (or \nbackends) on shutdowns? This would allow for some decent scripting. For \nexample, the database is shutdown without the system going down or the \nwhole system is going down for maintenance or scheduled reboot.\n\nIt may seem stupid but I was thinking the reason could be an argument to \nthe pg_ctl program with a default of (Database Shutdown).\n\npg_ctl stop --message=\"System going down for a reboot\"\nor\npg_ctl stop -msg \"System upgrade. System will be available again at 5:00am\"\n\nThe client would receive\nThe connection has been terminated\n[System Shutdown|Database Shutdown|Unknown Reason|\"some string as an argument\"]\n\nAlso, it allows for more informative messages.\n Scheduled downtime (System will be online again at {whenever})\n Idle Timeout\n You are using too much CPU...\n You are using too little CPU...\n\nThese message can be set by the scripts for \"run level\" changes and the like.\n\n\n\n", "msg_date": "Wed, 14 Mar 2001 10:44:00 -0600", "msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: AW: Shutdown term" }, { "msg_contents": "Thomas Swan writes:\n\n> It may seem stupid but I was thinking the reason could be an argument to\n> the pg_ctl program with a default of (Database Shutdown).\n>\n> pg_ctl stop --message=\"System going down for a reboot\"\n> or\n> pg_ctl stop -msg \"System upgrade. System will be available again at 5:00am\"\n\nI foresee a PQmotd(PGconn *) function ... 
;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 14 Mar 2001 18:13:32 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AW: Shutdown term" }, { "msg_contents": "At 3/14/2001 11:13 AM, Peter Eisentraut wrote:\n>Thomas Swan writes:\n>\n> > It may seem stupid but I was thinking the reason could be an argument to\n> > the pg_ctl program with a default of (Database Shutdown).\n> >\n> > pg_ctl stop --message=\"System going down for a reboot\"\n> > or\n> > pg_ctl stop -msg \"System upgrade. System will be available again at 5:00am\"\n>\n>I foresee a PQmotd(PGconn *) function ... ;-)\n\nWell, I also thought you could use the same method to do a warning.\n\npg_ctl --message=\"Database going offline in 5 minutes\"\n\nor something along those lines...\n\n", "msg_date": "Wed, 14 Mar 2001 12:30:59 -0600", "msg_from": "Thomas Swan <tswan@ics.olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: AW: Shutdown term" } ]
[ { "msg_contents": "I have a server, with Postgres, and a squid running. The squid is a real \nCPU-mem eater, and in one of the inserts to the DB, I got this on the log \nfile and then the postmaster died.\n\n/dbs/postgres/bin/postmaster: reaping dead processes...\n/dbs/postgres/bin/postmaster: CleanupProc: pid 27317 exited with status 0\n2001-03-13 18:34:03 DEBUG: proc_exit(0)\n2001-03-13 18:34:03 DEBUG: shmem_exit(0)\n2001-03-13 18:34:03 DEBUG: exit(0)\n/dbs/postgres/bin/postmaster: reaping dead processes...\n/dbs/postgres/bin/postmaster: CleanupProc: pid 27557 exited with status 0\nCheckPoint Data Base: fork failed: Not enough space\ninvoking IpcMemoryCreate(size=1245184)\nFindExec: found \"/dbs/postgres/bin/postmaster\" using argv[0] \n\nI'm on Postgresql-7.1beta5 on Solaris 7, compiled with gcc.\n\nI think today we are going to reconfigure the squid so it doesn't eat so much \nmem (other things on that server don't work good either).\n\nAny idea on this? I think the the postmaster shouldn't die, at least it's \nwhat I first thought.\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Wed, 14 Mar 2001 10:27:47 -0300", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": true, "msg_subject": "database died" }, { "msg_contents": "\"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> CheckPoint Data Base: fork failed: Not enough space\n> [ whereupon postmaster quits ]\n\n> Any idea on this? I think the the postmaster shouldn't die, at least it's \n> what I first thought.\n\nI agree. Dying if the startup subjob fails is one thing, but dying\nbecause a routine checkpoint fails is another. 
The code is treating\nthose two cases alike however ... will change it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Mar 2001 10:02:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: database died " }, { "msg_contents": "On Wed 14 Mar 2001 12:02, Tom Lane wrote:\n> \"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n> > CheckPoint Data Base: fork failed: Not enough space\n> > [ whereupon postmaster quits ]\n> >\n> > Any idea on this? I think the the postmaster shouldn't die, at least it's\n> > what I first thought.\n>\n> I agree. Dying if the startup subjob fails is one thing, but dying\n> because a routine checkpoint fails is another. The code is treating\n> those two cases alike however ... will change it.\n\nJust happend again. At this moment the postgres on that machine is not in \nproduction, but should be in a short future. Is there a chance of getting \nsome kind of patch, or maybe changing some configuratin parameters of the OS \nor the postmaster?\nI'm getting to feel the pain on the back that one can get with Solaris.\n\nThanks for the feed back.\n\nRegards... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Wed, 14 Mar 2001 18:11:01 -0300", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": true, "msg_subject": "Re: database died" }, { "msg_contents": "\"Martin A. Marques\" <martin@math.unl.edu.ar> writes:\n>> I agree. Dying if the startup subjob fails is one thing, but dying\n>> because a routine checkpoint fails is another. The code is treating\n>> those two cases alike however ... will change it.\n\n> Just happend again. 
At this moment the postgres on that machine is not in \n> production, but should be in a short future. Is there a chance of getting \n> some kind of patch, or maybe changing some configuratin parameters of the OS \n> or the postmaster?\n\nThe fix is in CVS, pull it out if you need it:\nhttp://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/postmaster/postmaster.c\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Mar 2001 17:10:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: database died " } ]
[ { "msg_contents": "> First day in week is Monday in ISO week.\n> Thomas, we have ISO week-of-year (IW in to_char or 'week' in date_part),\n> but we haven't ISO day-of-week (may be as 'ID' for to_char).\n> TODO for 7.2?\n> ..but in ISO is 0-6; 0=Mon\n\nI've been ignoring this until now, hoping no one would notice ;)\n\nUnix day-of-week starts on Sunday, not Monday, which is what\ndate_trunc('dow',...) returns. Presumably this is modeled on the\ntraditional notion (at least in the US; I suspect this is true in most\nEuropean countries at least) of Sunday being \"the first day of week\".\n\nThe implementation predates our support of ISO dates so it was not an\nissue then.\n\ndate_part() is modeled on Ingres' implementation, but my old Ingres\nmanual indicates that 'dow' is not one of the options.\n\nShould we change the definition of \"dow\", or implement another choice,\nsay \"idow\"?\n\nComments?\n\n - Thomas\n", "msg_date": "Wed, 14 Mar 2001 14:50:35 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Re: Week number" }, { "msg_contents": "On Wed, Mar 14, 2001 at 02:50:35PM +0000, Thomas Lockhart wrote:\n> > First day in week is Monday in ISO week.\n> > Thomas, we have ISO week-of-year (IW in to_char or 'week' in date_part),\n> > but we haven't ISO day-of-week (may be as 'ID' for to_char).\n> > TODO for 7.2?\n> > ..but in ISO is 0-6; 0=Mon\n> \n> I've been ignoring this until now, hoping no one would notice ;)\n> \n> Unix day-of-week starts on Sunday, not Monday, which is what\n> date_trunc('dow',...) returns. 
Presumably this is modeled on the\n> traditional notion (at least in the US; I suspect this is true in most\n> European countries at least) of Sunday being \"the first day of week\".\n> \n> The implementation predates our support of ISO dates so it was not an\n> issue then.\n> \n> date_part() is modeled on Ingres' implementation, but my old Ingres\n> manual indicates that 'dow' is not one of the options.\n> \n> Should we change the definition of \"dow\", or implement another choice,\n> say \"idow\"?\n\n Yes, I agree with new \"idow\" for date_part() and 'ID' for to_char() stuff.\n\n\n My note grow up when I do SQL query that say something like: \n\n\"2001-03-12 is begin of week and it's second day of week\" .. this sound \nvery curious :-)\n\n\n test=# select to_char('2001-03-12'::date, 'IW Dth Day');\n to_char\n------------------\n 11 2nd Monday\n(1 row)\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 14 Mar 2001 16:10:01 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Week number" }, { "msg_contents": "\n> traditional notion (at least in the US; I suspect this is true in most\n> European countries at least) of Sunday being \"the first day of week\".\n\nI believe that in most European countries, Monday is the first day of the \nweek.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Wed, 14 Mar 2001 23:53:53 +0100", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": false, "msg_subject": "Re: Re: Week number" } ]
[ { "msg_contents": "\n> Unix day-of-week starts on Sunday, not Monday, which is what\n> date_trunc('dow',...) returns. Presumably this is modeled on the\n> traditional notion (at least in the US; I suspect this is true in most\n> European countries at least) of Sunday being \"the first day of week\".\n\nGermany and Austria have Monday as first day of week, I think most of \nEurope also.\n\nAndreas\n", "msg_date": "Wed, 14 Mar 2001 16:54:54 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: Week number" }, { "msg_contents": "On Wed, Mar 14, 2001 at 04:54:54PM +0100, Zeugswetter Andreas SB wrote:\n> > Unix day-of-week starts on Sunday, not Monday, which is what\n> > date_trunc('dow',...) returns. Presumably this is modeled on the\n> > traditional notion (at least in the US; I suspect this is true in most\n> > European countries at least) of Sunday being \"the first day of week\".\n> \n> Germany and Austria have Monday as first day of week, I think most of \n> Europe also.\n\nit is all relative.\n\nmost western calendars that i have seen show \"Sun Mon Tue Wed Thu Fri Sat\".\n\nthe concept of \"first\" day of week is a bit muddied.\n\nmany christian-influenced places would consider Sunday to be the \"first\"\nday of the week, but monday being the \"first\" business day of the week.\n\ni have seen calendars which use \"Mon Tue Wed Thu Fri Sat Sun\", and i have\nworked with people where saturday was the first day of business. and also\nplaces where sunday is the first day of business.\n\nso, suffice to say, there is no \"proper\" first day of the week.\n\nas such, the unix day of week pegs sunday as day 0, your code should just\nuse that index. 
since almost all cultures have now adapted to a 7 day week\nand a 365 day year, there shouldn't bee too much confusion.\n\n-- \n[ Jim Mercer jim@pneumonoultramicroscopicsilicovolcanoconiosis.ca ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ aka jim@reptiles.org +1 416 410-5633 ]\n", "msg_date": "Wed, 14 Mar 2001 12:00:38 -0500", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: Week number" }, { "msg_contents": "On Wed, Mar 14, 2001 at 07:02:41PM +0100, Peter Eisentraut wrote:\n> Jim Mercer writes:\n> > most western calendars that i have seen show \"Sun Mon Tue Wed Thu Fri Sat\".\n> \n> Most *English* calendars you have seen, I suppose. In Germany there is no\n> such possible calendar. If you printed a calendar that way, it would be\n> considered a printo. The same is true in most parts of the continent.\n\ni stand corrected. i haven't had much dealings with european business.\n\n-- \n[ Jim Mercer jim@pneumonoultramicroscopicsilicovolcanoconiosis.ca ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ aka jim@reptiles.org +1 416 410-5633 ]\n", "msg_date": "Wed, 14 Mar 2001 13:01:00 -0500", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: Re: Week number" }, { "msg_contents": "Jim Mercer writes:\n\n> most western calendars that i have seen show \"Sun Mon Tue Wed Thu Fri Sat\".\n\nMost *English* calendars you have seen, I suppose. In Germany there is no\nsuch possible calendar. If you printed a calendar that way, it would be\nconsidered a printo. The same is true in most parts of the continent.\n\nThe POSIX numbering (0-6) is actually pretty slick because it allows both\nversions to work: In the U.S. (e.g.) you get a natural order starting at\n0, in Germany (e.g.) you get Monday as #1.\n\n> so, suffice to say, there is no \"proper\" first day of the week.\n\nThere is a proper ISO first day of the week. 
In many parts of Europe, the\nday of the week + week of the year are real, official concepts. E.g., you\nwould mark business transactions as \"week x, day y\" instead of with a date\n(notice how this simplifies arithmetic). Without trying to push through\nmy cultural bias, I think these applications should have some priority\nover making up a solution that satisfies everybody but doesn't actually\nsuit any real application.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 14 Mar 2001 19:02:41 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Week number" }, { "msg_contents": ">>>>> \"AZ\" == Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n\n >> Unix day-of-week starts on Sunday, not Monday, which is what\n >> date_trunc('dow',...) returns. Presumably this is modeled on\n >> the traditional notion (at least in the US; I suspect this is\n >> true in most European countries at least) of Sunday being \"the\n >> first day of week\".\n\n AZ> Germany and Austria have Monday as first day of week, I think\n AZ> most of Europe also.\n\nI believe the goal was to have a to_char() that was complete and\nOracle-compatible. Perhaps we need to also have a trunc() which is\nOracle compatible. I haven't been playing with 7.1beta, but 7.0\ntrunc() doesn't like timestamps. In Oracle, I can say\n\n select trunc(sysdate) - trunc(sysdate,'ww') + 1 from dual;\n\nto get Monday=1.\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. 
Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nrbroberts@acm.org Forest Hills, NY 11375\n", "msg_date": "14 Mar 2001 13:23:30 -0500", "msg_from": "Roland Roberts <roland@astrofoto.org>", "msg_from_op": false, "msg_subject": "Re: AW: Re: Week number" }, { "msg_contents": ">>>>> \"Peter\" == Peter Eisentraut <peter_e@gmx.net> writes:\n\n Peter> The POSIX numbering (0-6) is actually pretty slick because\n Peter> it allows both versions to work: In the U.S. (e.g.) you get\n Peter> a natural order starting at 0, in Germany (e.g.) you get\n Peter> Monday as #1.\n\nOracle's to_char() supports format IW for the ISO week of the year,\nbut there is no equivalent ID for the ISO day of the week. Perhaps\nthis should be a PostgreSQL extension?\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nrbroberts@acm.org Forest Hills, NY 11375\n", "msg_date": "14 Mar 2001 13:34:08 -0500", "msg_from": "Roland Roberts <roland@astrofoto.org>", "msg_from_op": false, "msg_subject": "Re: Re: Week number" }, { "msg_contents": "On Wed, Mar 14, 2001 at 01:23:30PM -0500, Roland Roberts wrote:\n> >>>>> \"AZ\" == Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> \n> >> Unix day-of-week starts on Sunday, not Monday, which is what\n> >> date_trunc('dow',...) returns. Presumably this is modeled on\n> >> the traditional notion (at least in the US; I suspect this is\n> >> true in most European countries at least) of Sunday being \"the\n> >> first day of week\".\n> \n> AZ> Germany and Austria have Monday as first day of week, I think\n> AZ> most of Europe also.\n> \n> I believe the goal was to have a to_char() that was complete and\n> Oracle-compatible. Perhaps we need to also have a trunc() which is\n\n Yes, an Oracle-compatiblity is important for masks (format pictures)\nused in both (Ora and PG). 
But our PG's implementation has some extensions,\nfor example 'ID' ISO-day-of-week in 7.2 where Monday = first day of week.\nI hope all countries will glad :-)\n\n\n for 'WW' and 'D' are results same:\n\nOra:\n\nSVRMGR> select to_char( to_date('2001/03/12', 'YYYY/MM/DD'), 'WW Day D\nYYYY/MM/DD') from dual;\nTO_CHAR(TO_DATE('2001/03/\n-------------------------\n11 Monday 2 2001/03/12\n1 row selected.\n\nPG:\n\nselect to_char( to_date('2001/03/12', 'YYYY/MM/DD'), 'WW Day D YYYY/MM/DD');\n to_char\n---------------------------\n 11 Monday 2 2001/03/12\n(1 row)\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz", "msg_date": "Thu, 15 Mar 2001 10:58:11 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: AW: Re: Week number" } ]
[ { "msg_contents": "I would like to apply the following patch to the CVS tree. It allows\npgmonitor to show query strings even if the backend is not compiled with\ndebug symbols.\n\nIt does this by creating a global variable 'debug_query_string' and\nassigning it when the query begins and clearing it when the query ends. \nIt needs to be a global symbol so gdb can find it without debug symbols.\n\nSeems like a very safe patch, and it allows pgmonitor to be much more\nuseful until we get a shared memory solution in 7.2.\n\nIs this OK with everyone?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/backend/tcop/postgres.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/tcop/postgres.c,v\nretrieving revision 1.211\ndiff -c -r1.211 postgres.c\n*** src/backend/tcop/postgres.c\t2001/03/14 15:14:35\t1.211\n--- src/backend/tcop/postgres.c\t2001/03/14 16:29:26\n***************\n*** 74,79 ****\n--- 74,81 ----\n extern int optind;\n extern char *optarg;\n \n+ char *debug_query_string;\t\t/* used by pgmonitor */\n+ \n /*\n * for ps display\n */\n***************\n*** 621,626 ****\n--- 623,630 ----\n \tList\t *parsetree_list,\n \t\t\t *parsetree_item;\n \n+ \tdebug_query_string = query_string;\t/* used by pgmonitor */\n+ \n \t/*\n \t * Start up a transaction command. 
All queries generated by the\n \t * query_string will be in this same command block, *unless* we find\n***************\n*** 855,860 ****\n--- 859,866 ----\n \t */\n \tif (xact_started)\n \t\tfinish_xact_command();\n+ \n+ \tdebug_query_string = NULL;\t\t/* used by pgmonitor */\n }\n \n /*\n***************\n*** 1718,1723 ****\n--- 1724,1731 ----\n \n \tif (sigsetjmp(Warn_restart, 1) != 0)\n \t{\n+ \t\tdebug_query_string = NULL;\t\t/* used by pgmonitor */\n+ \n \t\t/*\n \t\t * NOTE: if you are tempted to add more code in this if-block,\n \t\t * consider the probability that it should be in AbortTransaction()", "msg_date": "Wed, 14 Mar 2001 11:34:54 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pgmonitor patch for query string" }, { "msg_contents": "\nnot with me it isn't ... it doesn't fix a bug, it doesn't go in ... save\nit for after v7.1 is released ...\n\n\nOn Wed, 14 Mar 2001, Bruce Momjian wrote:\n\n> I would like to apply the following patch to the CVS tree. It allows\n> pgmonitor to show query strings even if the backend is not compiled with\n> debug symbols.\n>\n> It does this by creating a global variable 'debug_query_string' and\n> assigning it when the query begins and clearing it when the query ends.\n> It needs to be a global symbol so gdb can find it without debug symbols.\n>\n> Seems like a very safe patch, and it allows pgmonitor to be much more\n> useful until we get a shared memory solution in 7.2.\n>\n> Is this OK with everyone?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Wed, 14 Mar 2001 13:56:03 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: pgmonitor patch for query string" }, { "msg_contents": "> \n> not with me it isn't ... it doesn't fix a bug, it doesn't go in ... save\n> it for after v7.1 is released ...\n\nYou are saying save it for 7.2, right? That will certainly be months\naway. Without this patch, pgmonitor's 'query' button will only work if\nthe postgres binary was compiled with debug symbols.\n\nYou are basically saying no, even though it has no risks, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Mar 2001 13:00:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgmonitor patch for query string" }, { "msg_contents": "On Wed, 14 Mar 2001, Bruce Momjian wrote:\n\n> >\n> > not with me it isn't ... it doesn't fix a bug, it doesn't go in ... save\n> > it for after v7.1 is released ...\n>\n> You are saying save it for 7.2, right? That will certainly be months\n> away. Without this patch, pgmonitor's 'query' button will only work if\n> the postgres binary was compiled with debug symbols.\n\nPut it up as a patch to v7.1, and include it as part of 7.1.1 ...\n\n> You are basically saying no, even though it has no risks, right?\n\nI'm saying no because it doesn't fix any known bugs, it *adds* another\nfeature ... 
we are *months* too late in the cycle for that ...\n\n", "msg_date": "Wed, 14 Mar 2001 14:11:34 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: pgmonitor patch for query string" }, { "msg_contents": "> On Wed, 14 Mar 2001, Bruce Momjian wrote:\n> \n> > >\n> > > not with me it isn't ... it doesn't fix a bug, it doesn't go in ... save\n> > > it for after v7.1 is released ...\n> >\n> > You are saying save it for 7.2, right? That will certainly be months\n> > away. Without this patch, pgmonitor's 'query' button will only work if\n> > the postgres binary was compiled with debug symbols.\n> \n> Put it up as a patch to v7.1, and include it as part of 7.1.1 ...\n\nOh. It would just be easier to say that I support 7.1 rather than\n7.1.1, though that is certainly much better than 7.2. :-)\n\n\n> > You are basically saying no, even though it has no risks, right?\n> \n> I'm saying no because it doesn't fix any known bugs, it *adds* another\n> feature ... we are *months* too late in the cycle for that ...\n\nI agree we are many months back, and I am a little upset about it\nmyself. \n\nI just don't see any delay or risk in applying the patch. \n\nHowever, 7.1.1 is fine, so I will just wait for that one. I am working\non a lock-status display option now too, so I can put them both in 7.1.1\nmaybe with the pgmonitor script in CVS too.\n\nSounds like a plan.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Mar 2001 13:16:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgmonitor patch for query string" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > It does this by creating a global variable 'debug_query_string' and\n> > assigning it when the query begins and clearing it when the query ends.\n> \n> You can find out the current query for a given backend by configuring the\n> server with \"debug_print_query on\" and \"log_pids on\" and running\n> \n> sed -n \"/[$pid]/\"'s/^.*query: \\(.*\\)$/\\1/p' $logfile\n> \n> This doesn't tell you whether the query is still running, but ps tells you\n> that. In fact, it might be an idea to add a logging option that prints\n> something like \"query finished in xxx ms\". We actually have something\n> similar hidden under show_query_stats, but the formatting needs to be made\n> more convenient and possibly less verbose. But at least this way you have\n> it for the record, and not only on the screen.\n\nYes, I thought of that idea. I wasn't sure I would be able to find the\nlog file in any installation-independent way, or even if a log file was\neven being kept.\n\nAnd I was afraid some people wouldn't want to log all queries.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Mar 2001 13:27:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgmonitor patch for query string" }, { "msg_contents": "Bruce Momjian writes:\n\n> It does this by creating a global variable 'debug_query_string' and\n> assigning it when the query begins and clearing it when the query ends.\n\nYou can find out the current query for a given backend by configuring the\nserver with \"debug_print_query on\" and \"log_pids on\" and running\n\nsed -n \"/[$pid]/\"'s/^.*query: \\(.*\\)$/\\1/p' $logfile\n\nThis doesn't tell you whether the query is still running, but ps tells you\nthat. In fact, it might be an idea to add a logging option that prints\nsomething like \"query finished in xxx ms\". We actually have something\nsimilar hidden under show_query_stats, but the formatting needs to be made\nmore convenient and possibly less verbose. But at least this way you have\nit for the record, and not only on the screen.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 14 Mar 2001 19:33:42 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pgmonitor patch for query string" }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> I'm saying no because it doesn't fix any known bugs, it *adds* another\n> feature ... we are *months* too late in the cycle for that ...\n\nI thought it was a pretty good idea even without any consideration for\nBruce's monitor program. The advantage is that one would be able to\nextract the current query from a crashed backend's core dump, even if\nthe backend had been compiled without debug symbols. Right now you can\nonly find out the query if you had compiled with debug, because you have\nto be able to look at local variables of functions. 
And there are an\nawful lot of people who don't use debug-enabled builds...\n\nGiven that and the low-risk nature of the patch, I vote for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Mar 2001 13:39:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgmonitor patch for query string " }, { "msg_contents": "On Wed, 14 Mar 2001, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n>\n> > It does this by creating a global variable 'debug_query_string' and\n> > assigning it when the query begins and clearing it when the query ends.\n>\n> You can find out the current query for a given backend by configuring the\n> server with \"debug_print_query on\" and \"log_pids on\" and running\n>\n> sed -n \"/[$pid]/\"'s/^.*query: \\(.*\\)$/\\1/p' $logfile\n>\n> This doesn't tell you whether the query is still running, but ps tells you\n> that. In fact, it might be an idea to add a logging option that prints\n> something like \"query finished in xxx ms\". We actually have something\n> similar hidden under show_query_stats, but the formatting needs to be made\n> more convenient and possibly less verbose. But at least this way you have\n> it for the record, and not only on the screen.\n\nI *definitely* like this one ... I've been doing wrappers around my\npg_exec() calls in PHP to do some stats generation to work on \"slow\nqueries\", but having it in the backend would be more exact ... and easier\nto use than having to modify your apps ...\n\n", "msg_date": "Wed, 14 Mar 2001 15:16:00 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: pgmonitor patch for query string" }, { "msg_contents": "> > This doesn't tell you whether the query is still running, but ps tells you\n> > that. In fact, it might be an idea to add a logging option that prints\n> > something like \"query finished in xxx ms\". 
We actually have something\n> > similar hidden under show_query_stats, but the formatting needs to be made\n> > more convenient and possibly less verbose. But at least this way you have\n> > it for the record, and not only on the screen.\n> \n> I *definitely* like this one ... I've been doing wrappers around my\n> pg_exec() calls in PHP to do some stats generation to work on \"slow\n> queries\", but having it in the backend would be more exact ... and easier\n> to use than having to modify your apps ...\n> \n\nAdded to TODO:\n\n\t* Allow logging of query durations\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Mar 2001 14:28:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgmonitor patch for query string" }, { "msg_contents": "> > I don't understand the attraction of the UDP stuff. If we have the\n> > stuff in shared memory, we can add a collector program that gathers info\n> > from shared memory and allows others to access it, right?\n> \n> There are a couple of problems with shared memory. First you\n> have to decide a size. That'll limit what you can put into\n> and if you want to put things per table (#scans, #block-\n> fetches, #cache-hits, ...), you might run out of mem either\n> way with complicated, multi-thousand table schemas.\n> \n> And the above illustrates too that the data structs in the\n> shmem wouldn't be just some simple arrays of counters. So we\n> have to deal with locking for both, readers and writers of\n> the statistics.\n\n[ Jan, previous email was not sent to list, my mistake.]\n\nOK, I understand the problem with pre-defined size. 
That is why I was\nlooking for a way to dump the information out to a flat file somehow.\n\nI think no matter how we deal with this, we will need some way to turn\non/off such reporting. We can write into shared memory with little\npenalty, but network or filesystem output is not going to be near-zero\ncost.\n\nOK, how about a shared buffer area that gets written in a loop so a\nseparate collection program can grab the info if it wants it, and if\nnot, it just gets overwritten later. It can even be per-backend:\n\n loops start end (loop to start)\n ----- [-----------------------------]\n 5 stat stat stat stat stat stat\n |^^^\n current pointer\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Mar 2001 12:53:57 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgmonitor patch for query string" } ]
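The overwrite-in-a-loop buffer proposed in the last message above can be sketched in a few lines of Python (purely illustrative — this is not PostgreSQL code, and the `StatRing` class, its slot count, and the sample stat strings are invented for the example). The point it demonstrates is the lossy-overwrite behaviour: the writer keeps appending into a fixed number of slots at near-zero cost and never blocks, while a collector may sample whenever it likes; anything it does not grab in time is simply overwritten.

```python
from collections import deque

# Illustrative sketch (not PostgreSQL code) of the per-backend
# "write stats in a loop" idea: a fixed-size ring that the writer
# overwrites freely and a collector may sample at any time.
class StatRing:
    def __init__(self, slots):
        # deque with maxlen silently drops the oldest entry on overflow,
        # which models "it just gets overwritten later" from the proposal
        self.buf = deque(maxlen=slots)

    def write(self, stat):
        self.buf.append(stat)      # writer never blocks, never grows

    def collect(self):
        entries = list(self.buf)   # collector snapshots what survived
        self.buf.clear()
        return entries

ring = StatRing(slots=3)
for stat in ["stat1", "stat2", "stat3", "stat4", "stat5"]:
    ring.write(stat)

# Only the last three writes survive; the earlier ones were overwritten.
print(ring.collect())  # -> ['stat3', 'stat4', 'stat5']
```

A real implementation would of course live in shared memory with some locking or a read barrier around the snapshot; the deque only models the fixed-size, lossy-overwrite behaviour being discussed.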
[ { "msg_contents": "I'm running Postgres 7.0.3 on a RedHat Linux 6.1. For some reason, rtrim\nis giving me an incorrect result:\n\ndb01=# SELECT tablename FROM pg_tables WHERE tablename LIKE '%_opto' AND\n\ntablename NOT LIKE 'pg%' ORDER BY tablename ASC ;\n tablename\n-----------------\n center_out_opto\n circles_opto\n ellipse_opto\n ex_ellipse_opto\n figure8_opto\n ro_ellipse_opto\n(6 rows)\n\nNow I want to return the same thing only with the trailing '_opto'\nremoved:\n\n\ndb01=# SELECT rtrim(tablename, '_opto') FROM pg_tables WHERE tablename\nLIKE '%_opto' AND tablename NOT LIKE 'pg%' ORDER BY tablename ASC ;\n rtrim\n------------\n center_ou <=======================\nNOTE: the trailing 't' is missing\n circles\n ellipse\n ex_ellipse\n figure8\n ro_ellipse\n(6 rows)\n\nHowever, as you can see, the 'center_out' table is missing the last 't'.\nIf I exclude the '_':\n\ndb01=# SELECT rtrim(tablename, 'opto') FROM pg_tables WHERE tablename\nLIKE '%_opto' AND tablename NOT LIKE 'pg%' ORDER BY tablename ASC ;\n rtrim\n-------------\n center_out_\n<======================= 't' shows up again\n circles_\n ellipse_\n ex_ellipse_\n figure8_\n ro_ellipse_\n(6 rows)\n\nThe 't' is back.\n\nIs there something that I'm doing wrong with my query here?\n\nThanks.\n-Tony\n\n\n\n\n\n", "msg_date": "Wed, 14 Mar 2001 18:14:12 -0800", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "rtrim giving weird result" }, { "msg_contents": "The second parameter to \"rtrim\" is interpreted as a set of characters and\nrtrim:\n\"Returns string with final characters removed after the last character not\nin set\"\n\nSo rtrim(\"center_out_opto\", \"_opto\") returns\n \"center_ou\"\nbecause \"u\" is not in the set {o, p, t, _} but all the characters after it\nare.\nrtrim(\"center_out_opto\", \"pot_\") will produce the same thing.\n\n\n----- Original Message -----\nFrom: \"G. 
Anthony Reina\" <reina@nsi.edu>\nTo: \"pgsql-hackers@postgreSQL.org\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, March 14, 2001 9:14 PM\nSubject: [HACKERS] rtrim giving weird result\n\n\n> I'm running Postgres 7.0.3 on a RedHat Linux 6.1. For some reason, rtrim\n> is giving me an incorrect result:\n>\n> db01=# SELECT tablename FROM pg_tables WHERE tablename LIKE '%_opto' AND\n>\n> tablename NOT LIKE 'pg%' ORDER BY tablename ASC ;\n> tablename\n> -----------------\n> center_out_opto\n> circles_opto\n> ellipse_opto\n> ex_ellipse_opto\n> figure8_opto\n> ro_ellipse_opto\n> (6 rows)\n>\n> Now I want to return the same thing only with the trailing '_opto'\n> removed:\n>\n>\n> db01=# SELECT rtrim(tablename, '_opto') FROM pg_tables WHERE tablename\n> LIKE '%_opto' AND tablename NOT LIKE 'pg%' ORDER BY tablename ASC ;\n> rtrim\n> ------------\n> center_ou <=======================\n> NOTE: the trailing 't' is missing\n> circles\n> ellipse\n> ex_ellipse\n> figure8\n> ro_ellipse\n> (6 rows)\n>\n> However, as you can see, the 'center_out' table is missing the last 't'.\n> If I exclude the '_':\n>\n> db01=# SELECT rtrim(tablename, 'opto') FROM pg_tables WHERE tablename\n> LIKE '%_opto' AND tablename NOT LIKE 'pg%' ORDER BY tablename ASC ;\n> rtrim\n> -------------\n> center_out_\n> <======================= 't' shows up again\n> circles_\n> ellipse_\n> ex_ellipse_\n> figure8_\n> ro_ellipse_\n> (6 rows)\n>\n> The 't' is back.\n>\n> Is there something that I'm doing wrong with my query here?\n>\n> Thanks.\n> -Tony\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Wed, 14 Mar 2001 22:36:29 -0500", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: rtrim giving weird result" }, 
{ "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n> I'm running Postgres 7.0.3 on a RedHat Linux 6.1. For some reason, rtrim\n> is giving me an incorrect result:\n\nNo, you have an incorrect understanding of rtrim. The second argument\nis a set of removable characters, not a string to be matched.\n\nAFAIK we are following Oracle in defining it that way ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Mar 2001 23:10:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: rtrim giving weird result " }, { "msg_contents": "Hi, all,\n\nCould somebody tell me if there is a workaround to \ncreate \"union on view\" (which seems not to be implemented\nin Postgres yet)?\n\nAlso, is there any alternative query that can do:\n\nselect * from (select * from table);\n\nI could not find an answer from the old archive,\nand sorry if this has been answered previously.\n(I am new here :)\n\nRegards,\nJae\n\n\n\n\n\n\n", "msg_date": "Wed, 14 Mar 2001 20:53:02 -0800", "msg_from": "\"Jae-Woong Hwnag\" <jaewh@email.com>", "msg_from_op": false, "msg_subject": "Union on view and.." }, { "msg_contents": "\nIf you're willing to wait or use the betas, 7.1 \nshould probably do both of these. (Won't \nquite make toast though).\n\n[Although I believe the second'll be something\nlike: select * from (select * from table) alias;]\n\nOn Wed, 14 Mar 2001, Jae-Woong Hwnag wrote:\n\n> Hi, all,\n> \n> Could somebody tell me if there is a workaround to \n> create \"union on view\" (which seems not to be implemented\n> in Postgres yet)?\n> \n> Also, is there any alternative query that can do:\n> \n> select * from (select * from table);\n> \n> I could not find an answer from the old archive,\n> and sorry if this has been answered previously.\n> (I am new here :)\n\n
}, { "msg_contents": "Ken Hirsch wrote:\n\n> So rtrim(\"center_out_opto\", \"_opto\") returns\n> \"center_ou\"\n> because \"u\" is not in the set {o, p, t, _} but all the characters after it\n> are.\n> rtrim(\"center_out_opto\", \"pot_\") will produce the same thing.\n>\n\nThat seems like an odd definition (although as Tom points out, it is\nconsistent with Oracle).\n\nIs there a way to just remove the \"_opto\" from the end of the string?\n\n-Tony\n\n\n", "msg_date": "Thu, 15 Mar 2001 09:34:04 -0800", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: rtrim giving weird result" }, { "msg_contents": "On Thu, Mar 15, 2001 at 09:34:04AM -0800, G. Anthony Reina wrote:\n> Ken Hirsch wrote:\n> \n> > So rtrim(\"center_out_opto\", \"_opto\") returns\n> > \"center_ou\"\n> > because \"u\" is not in the set {o, p, t, _} but all the characters after it\n> > are.\n> > rtrim(\"center_out_opto\", \"pot_\") will produce the same thing.\n> >\n\nModulo the correct quoting conventions for strings, of course.\n\n> \n> That seems like an odd definition (although as Tom points out, it is\n> consistent with Oracle).\n\nYup, I got bit by it, trying to remove 'The ' from the front of a set of\nwords, in order to get an approximation of 'library sort'.\n\n> \n> Is there a way to just remove the \"_opto\" from the end of the string?\n\nIf you have exactly one known string to (optionally) remove, this works\n(and even works if the string is missing. Watch out for the early\noccurance of substring problem, though!):\n\ntest=# select substr('center_out_opto',1,(strpos('center_out_opto','_opto')-1)); \n substr \n------------\n center_out\n(1 row)\n\ntest=# select substr('center_out_opto',1,(strpos('center_out_opto','foo')-1));\n substr \n-----------------\n center_out_opto\n(1 row)\n\ntest=# \n\nRoss\n", "msg_date": "Thu, 15 Mar 2001 11:53:37 -0600", "msg_from": "\"Ross J. 
Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: rtrim giving weird result" }, { "msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n>> Is there a way to just remove the \"_opto\" from the end of the string?\n\n> If you have exactly one known string to (optionally) remove, this works\n> (and even works if the string is missing. Watch out for the early\n> occurance of substring problem, though!):\n\n> test=# select substr('center_out_opto',1,(strpos('center_out_opto','_opto')-1)); \n\nMy first thought for any moderately complicated string-bashing problem\nis to write a function in pltcl or plperl ... they are much stronger in\nstring manipulation than SQL itself is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 13:18:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: rtrim giving weird result " }, { "msg_contents": "On Thu, Mar 15, 2001 at 01:18:57PM -0500, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> >> Is there a way to just remove the \"_opto\" from the end of the string?\n> \n> > If you have exactly one known string to (optionally) remove, this works\n> > (and even works if the string is missing. Watch out for the early\n> > occurance of substring problem, though!):\n> \n> > test=# select substr('center_out_opto',1,(strpos('center_out_opto','_opto')-1)); \n> \n> My first thought for any moderately complicated string-bashing problem\n> is to write a function in pltcl or plperl ... they are much stronger in\n> string manipulation than SQL itself is.\n\nAgreed, hence the caveats about 'exactly one string, that you know ahead of\ntime, and never appears as a substring ...'\n\nBut it _can_ be done, it's just not pretty. 
And it _is_ standard SQL:\nhere's the SQL92 spelling of the above:\n\nSELECT SUBSTRING ('center_out_opto' FROM 1 FOR (POSITION ('_opto' IN 'center_out_opto') - 1));\n\nRoss\n", "msg_date": "Thu, 15 Mar 2001 12:37:02 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: rtrim giving weird result" } ]
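For readers outside SQL, the set-of-characters rule discussed in this thread is easy to reproduce elsewhere: Python's `str.rstrip()` happens to use the same semantics as `rtrim(string, set)` — it strips trailing characters that are *members* of the set, not an exact suffix. A short sketch using the strings from the original report (the variable names are just for illustration):

```python
name = "center_out_opto"

# Character-set semantics: every trailing character that is a member of
# {'_', 'o', 'p', 't'} is stripped, stopping at 'u' (the first character
# not in the set) -- so the final 't' of "out" disappears, just as
# rtrim() did in the report above.
print(name.rstrip("_opto"))        # -> center_ou

# To remove exactly one known suffix instead, cut at its length,
# mirroring the substr()/strpos() workaround shown in the thread.
suffix = "_opto"
trimmed = name[:-len(suffix)] if name.endswith(suffix) else name
print(trimmed)                     # -> center_out
```

Note that `rstrip("pot_")` gives the same result as `rstrip("_opto")` — order and repetition within the set are irrelevant, which is exactly the point made earlier in the thread.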
[ { "msg_contents": "We have an application that we were running quite happily using pg6.5.3\nin various customer sites. Now we are about to roll out a new version of\nour application, and we are going to use pg7.0.3. However, in testing\nwe've come across a couple of isolated incidents of database\ncorruption. They are sufficiently rare that I can't reproduce the problem,\nnor can I put my finger on just what application behaviour causes the\nproblems.\n\nThe symptoms most often involve some sort of index corruption, which is\nreported by vacuum and it seems that vacuum can fix it. On occasion vacuum\nreports \"invalid OID\" or similar (sorry, don't have exact wording of\nmessage). On one occasion the database has been corrupted to the point of\nunusability (ie vacuum admitted that it couldn't fix the problem), and a\ndump/restore was required (thankfully that at least worked). The index\ncorruption also occasionally manifests itself in the form of spurious\nuniqueness constraint violation errors.\n\nThe previous version of our app using 6.5.3 has never shown the slightest\nsymptom of database misbehaviour, to the best of my knowledge, despite\nfairly extensive use. So our expectations are fairly high :-).\n\nOne thing that is different about the new version of our app is that we\nnow use multiple connections to the database (previously we only had\none). We can in practice have transactions in progress on several\nconnections at once, and it is possible for some transactions to be rolled\nback under application control (ie explicit ROLLBACK; statement).\n\nI realise I haven't really provided an awful lot of information that would\nhelp identify the problem, so I shall attempt to be understanding if\nno-one can offer any useful suggestions. But I hope someone can :-). Has\nanyone seen this sort of problem before? Are there any known\ndatabase-corrupting bugs in 7.0.3? I don't recall anyone mentioning any in\nthe mailing lists. 
Is using multiple connections likely to stimulate any\nknown areas of risk?\n\nBTW we are using plain vanilla SQL, no triggers, no new types defined, no\nfunctions, no referential integrity checks, nothing more ambitious than a\nmulti-column primary key.\n\nThe platform is x86 Red Hat Linux 6.2. Curiously enough, on one of our\ntesting boxes and on my development box we have never seen this, but we\nhave seen it several times on our other test box and at least one customer\nsite, so there is some possibility it's related to dodgy hardware. The\ncustomer box with the problem is a multi-processor box, all the other\nboxes we've tested on are single-processor.\n\nTIA for any help,\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen tim@proximity.com.au\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n", "msg_date": "Thu, 15 Mar 2001 19:52:13 +1100 (EST)", "msg_from": "Tim Allen <tim@proximity.com.au>", "msg_from_op": true, "msg_subject": "Database corruption in 7.0.3" }, { "msg_contents": "I can confirm this. I got this just yesterday...\n\nMessages:\n\nNOTICE: Rel acm: TID 1697/217: OID IS INVALID. TUPGONE 1.\n\nAnd lots of such lines...\nAnd\n\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\n\n\nIn the end :-((( I lost a library of our institute... :-((( But I have a \nbackup!!! 
:-)))) This table even have NO indices!!!\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x813837f in PageRepairFragmentation (page=0x82840b0 \"\") at bufpage.c:311\n311 alignedSize = MAXALIGN((*lp).lp_len);\n(gdb) bt\n#0 0x813837f in PageRepairFragmentation (page=0x82840b0 \"\") at bufpage.c:311\n#1 0x80a9b07 in vc_scanheap (vacrelstats=0x82675b0, onerel=0x8273428, \nvacuum_pages=0xbfffe928, fraged_pages=0xbfffe918) at vacuum.c:1022\n#2 0x80a8e8b in vc_vacone (relid=27296, analyze=0 '\\000', va_cols=0x0) at \nvacuum.c:599\n#3 0x80a8217 in vc_vacuum (VacRelP=0xbfffe9b4, analyze=0 '\\000', \nva_cols=0x0) at vacuum.c:299\n#4 0x80a818b in vacuum (vacrel=0x8267400 \"\", verbose=1 '\\001', analyze=0 \n'\\000', va_spec=0x0) at vacuum.c:223\n#5 0x813fba5 in ProcessUtility (parsetree=0x8267418, dest=Remote) at \nutility.c:694\n#6 0x813c16e in pg_exec_query_dest (query_string=0x820aaa0 \"vacuum verbose \nacm;\", dest=Remote, aclOverride=0 '\\000') at postgres.c:617\n#7 0x813c08e in pg_exec_query (query_string=0x820aaa0 \"vacuum verbose acm;\") \nat postgres.c:562\n#8 0x813d4c3 in PostgresMain (argc=9, argv=0xbffff068, real_argc=9, \nreal_argv=0xbffffa3c) at postgres.c:1588\n#9 0x811ace5 in DoBackend (port=0x8223068) at postmaster.c:2009\n#10 0x811a639 in BackendStartup (port=0x8223068) at postmaster.c:1776\n#11 0x811932f in ServerLoop () at postmaster.c:1037\n#12 0x8118b0e in PostmasterMain (argc=9, argv=0xbffffa3c) at postmaster.c:725\n#13 0x80d5e5e in main (argc=9, argv=0xbffffa3c) at main.c:93\n#14 0x40111fee in __libc_start_main () from /lib/libc.so.6\n\nThis is plain 7.0.3.\n\nOn Thursday 15 March 2001 14:52, Tim Allen wrote:\n> We have an application that we were running quite happily using pg6.5.3\n> in various customer sites. Now we are about to roll out a new version of\n> our application, and we are going to use pg7.0.3. However, in testing\n> we've come across a couple of isolated incidents of database\n> corruption. 
They are sufficiently rare that I can't reproduce the problem,\n> nor can I put my finger on just what application behaviour causes the\n> problems.\n>\n> The symptoms most often involve some sort of index corruption, which is\n> reported by vacuum and it seems that vacuum can fix it. On occasion vacuum\n> reports \"invalid OID\" or similar (sorry, don't have exact wording of\n> message). On one occasion the database has been corrupted to the point of\n> unusability (ie vacuum admitted that it couldn't fix the problem), and a\n> dump/restore was required (thankfully that at least worked). The index\n> corruption also occasionally manifests itself in the form of spurious\n> uniqueness constraint violation errors.\n>\n> The previous version of our app using 6.5.3 has never shown the slightest\n> symptom of database misbehaviour, to the best of my knowledge, despite\n> fairly extensive use. So our expectations are fairly high :-).\n>\n> One thing that is different about the new version of our app is that we\n> now use multiple connections to the database (previously we only had\n> one). We can in practice have transactions in progress on several\n> connections at once, and it is possible for some transactions to be rolled\n> back under application control (ie explicit ROLLBACK; statement).\n>\n> I realise I haven't really provided an awful lot of information that would\n> help identify the problem, so I shall attempt to be understanding if\n> no-one can offer any useful suggestions. But I hope someone can :-). Has\n> anyone seen this sort of problem before? Are there any known\n> database-corrupting bugs in 7.0.3? I don't recall anyone mentioning any in\n> the mailing lists. 
Is using multiple connections likely to stimulate any\n> known areas of risk?\n>\n> BTW we are using plain vanilla SQL, no triggers, no new types defined, no\n> functions, no referential integrity checks, nothing more ambitious than a\n> multi-column primary key.\n>\n> The platform is x86 Red Hat Linux 6.2. Curiously enough, on one of our\n> testing boxes and on my development box we have never seen this, but we\n> have seen it several times on our other test box and at least one customer\n> site, so there is some possibility it's related to dodgy hardware. The\n> customer box with the problem is a multi-processor box, all the other\n> boxes we've tested on are single-processor.\n>\n> TIA for any help,\n>\n> Tim\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Thu, 15 Mar 2001 15:50:53 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: Database corruption in 7.0.3" }, { "msg_contents": "Tim Allen <tim@proximity.com.au> writes:\n> Are there any known database-corrupting bugs in 7.0.3?\n\nNone that aren't also in earlier releases, AFAIR, so your report is\nfairly troubling. However there's not enough here to venture a guess\nabout the source of the problem.\n\nDo you see any backend crashes or other misbehavior before the VACUUM\nerror pops up, or is that the only symptom?\n\nIt would be a good idea to rebuild the system with assert checks on\n(configure --enable-cassert), in hopes that some Assert a little closer\nto the source of the problem will fire. Also, if you can spare some\ndisk space for logging, running the postmaster with -d2 to log all\nqueries might provide useful historical context when the problem\nreappears.\n\nI would like to be able to study the corrupted table, as well. 
Can you\nsee your way to either giving me access to your machine, or (if the\ndatabase isn't too large) sending me a tar dump of the whole $PGDATA\ndirectory next time it happens?\n\nPlease contact me off-list so we can figure out how best to pursue this\nproblem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 10:25:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Database corruption in 7.0.3 " } ]
[ { "msg_contents": "Sorry if I used the wrong mailing list, but I really don't\nknow where to send such email.\n\nIs it possible to add one more SQL command to Postgres? The problem\nis that IMHO no RDBMS allows an SQL command for scheduling. To support\nscheduling, SQL programmers have to use external tools to periodically\ncheck the database for due events. But IMHO it's much better to add one\nmore SQL command to allow scheduling in SQL itself.\n\nMy thoughts about such a command follow:\n=====================================================================\n\nThe SQL command for creating a scheduler:\n\nCREATE SHEDULER name\nON table.field\n[FOR [EACH]|[LAST]]\nEXECUTE PROCEDURE func(arguments)\n\nWhen the current time becomes equal or more than minimal time in\nthe _table.field_, the event happens and the _func_ will be executed,\nand after that all records in this _table_ that in the _field_ have\ntime equal or less than current time will be deleted.\n\nThe other fields of this _table_ could be used as _arguments_ (or\naggregates of the other fields when _FOR EACH_ is absent).\n\n_FOR LAST_ - the event(s) will be processed only for the record(s) of\nthe _table_ that have the maximum time (equal to or less than the\ncurrent time).\n\n_FOR EACH_ - if this parameter is present, the event can be processed\nfor each corresponding record, not for all at once.\n\nFor each _CREATE SHEDULER_, the following will be created:\n1. B-tree index on _table.field_.\n2. 
An internal trigger on insert/delete/update of _table.field_ to keep\nmin(_table.field_) up to date for nearest-event processing.\n\n\nThe SQL command for deleting a scheduler:\n\nDELETE SHEDULER name\n\n====================================================================\n\n-- \nBest regards,\n Paul Mamin mailto:magamos@mail.ru\n\n\n", "msg_date": "Thu, 15 Mar 2001 14:40:47 +0500", "msg_from": "Paul <magamos@mail.ru>", "msg_from_op": true, "msg_subject": "Sheduling in SQL" }, { "msg_contents": "On Thu, 15 Mar 2001, Paul wrote:\n\n> Sorry if I used the wrong mailing list, but I really don't\n> know where to send such email.\n> \n> Is it possible to add one more SQL command to Postgres? The problem\n> is that IMHO no RDBMS allows an SQL command for scheduling. To support\n> scheduling, SQL programmers have to use external tools to periodically\n> check the database for due events. But IMHO it's much better to add one\n> more SQL command to allow scheduling in SQL itself.\n> \n> My thoughts about such a command follow:\n> =====================================================================\n\n\none option for doing this, ( in a fairly non-portable way ), is to create\na 'C' function contained in a shared library. on most unixen you can put\nin _init and _fini functions such that when the library is dlopened/closed\nthe functions execute. simply create a thread in the _init, that sits\naround on a timer, then does some stuff. 
not ideal, but an option\n\n\n\nPGP key: http://codex.net/pgp/pgp.asc\n\n", "msg_date": "Thu, 15 Mar 2001 14:25:33 +0000 (GMT)", "msg_from": "Vincent AE Scott <vince@codex.net>", "msg_from_op": false, "msg_subject": "Re: Sheduling in SQL" }, { "msg_contents": "Paul <magamos@mail.ru> writes:\n> CREATE SHEDULER name\n> ON table.field\n> [FOR [EACH]|[LAST]]\n> EXECUTE PROCEDURE func(arguments)\n\n> When the current time becomes equal to or greater than the minimal time in\n> _table.field_, the event happens and _func_ will be executed;\n> after that, all records in this _table_ whose _field_ holds a time\n> equal to or less than the current time will be deleted.\n\nThis strikes me as way too problem-specific to be reasonable as a\ngeneral-purpose system extension.\n\nYou can actually build this sort of facility in Postgres as it stands,\nusing a background process that executes the items from the \"todo\"\ntable. You'd put rules or triggers on the todo table to send out a\nNOTIFY event, which the background guy would listen for; that would cue\nhim to re-select the minimum timestamp in the table. Then he'd just\nsleep until the next NOTIFY or time to do something.\n\nThe primary advantage of doing things this way is that you have an\nactual client process executing the todo actions, so it could perform\noutside-the-database actions as well as any database updates that might\nbe needed. In the scheme you describe, the \"func\" would have to be\nexecuted in some disembodied backend context --- it wouldn't even have\na client to talk to, let alone any chance of doing outside-the-database\nactions.\n\nI've built applications that do roughly this sort of thing in Postgres\n(some of the protocol changes in 6.4 were done to make it easier ;-)).\nUnfortunately that was proprietary code and I can't show it to you,\nbut it's not really difficult. 
Perhaps you'd like to do up a simple\nexample and contribute it as a \"contrib\" module?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 10:43:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sheduling in SQL " }, { "msg_contents": "Tom Lane wrote:\n\n> \n> I've built applications that do roughly this sort of thing in Postgres\n> (some of the protocol changes in 6.4 were done to make it easier ;-)).\n\nI may misremember, but IIRC some older protocol (or at least libpq) \nreturned 0 as backend pid to listening client if it was notified by itself.\n\nCurrently it returns the actual pid for any backend. Is this what you \nchanged?\n\nAnyhow we need some _documented_ way to get backend pid (there is one \nactually received and stored with \"cookie\" for Ctrl-C processing, but \nAFAIK it is neither documented as being the backend id nor is there a \nfunction to get at it).\n\nFor my own use I created a C function pid() but perhaps there should be \nsomething mainstream for this.\n\n---------------\nHannu\n\n", "msg_date": "Thu, 15 Mar 2001 19:05:17 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Sheduling in SQL" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Tom Lane wrote:\n>> (some of the protocol changes in 6.4 were done to make it easier ;-)).\n\n> I may misremember, but IIRC some older protocol (or at least libpq) \n> returned 0 as backend pid to listening client if it was notified by itself.\n\n> Currently it returns the actual pid for any backend. Is this what you \n> changed?\n\nThat was one of the smaller items. 
The bigger problem was that the\nbackend wouldn't forward you NOTIFY events unless you issued a constant\nstream of dummy queries.\n\n> Anyhow we need some _documented_ way to get backend pid\n\nPQbackendPID() seems adequately documented to me ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 12:11:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sheduling in SQL " } ]
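Tom's recipe above — a background client that re-selects the minimum timestamp from the todo table, sleeps until then, runs the action, and deletes the expired rows — can be sketched without a database at all. The sketch below only illustrates that control flow: an in-memory heap stands in for the table, a direct method call stands in for the NOTIFY wake-up, and all names (`TodoQueue`, `run_due`) are invented for this example rather than taken from any PostgreSQL API.

```python
import heapq
import itertools

class TodoQueue:
    """In-memory stand-in for the 'todo' table described above.

    A real implementation would SELECT min(fire_at) FROM todo to find
    the next wakeup time, and would rely on LISTEN/NOTIFY (driven by a
    rule or trigger on the table) to learn about newly inserted rows.
    """

    def __init__(self):
        self._heap = []                 # entries: (fire_at, seq, func, args)
        self._seq = itertools.count()   # tiebreaker so funcs never get compared

    def insert(self, fire_at, func, *args):
        # Plays the role of an INSERT plus the NOTIFY the trigger would send.
        heapq.heappush(self._heap, (fire_at, next(self._seq), func, args))

    def next_wakeup(self):
        # SELECT min(fire_at) FROM todo; None means "sleep until notified".
        return self._heap[0][0] if self._heap else None

    def run_due(self, now):
        # Execute every entry whose time has come, then drop it --
        # mirroring "execute func, then delete rows with field <= now".
        results = []
        while self._heap and self._heap[0][0] <= now:
            _fire_at, _seq, func, args = heapq.heappop(self._heap)
            results.append(func(*args))
        return results
```

The daemon's main loop would then sleep until `next_wakeup()` (or until a NOTIFY arrives), call `run_due()` with the current time, and repeat — keeping the actions in a real client process, as Tom points out.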
[ { "msg_contents": "Hello all,\n\nI try to build postgresql 7.1 beta 5 on UnixWare 7.1.1. I have problem to\nbuild the Python extension,\nthere is a problem to build the shared library, I have see some information\non that but don't help me.\nSorry\n\nCan you help me by providing the correct command\n\nThanks\nJoel\n\n\n\n\n\n", "msg_date": "Thu, 15 Mar 2001 12:18:39 +0100", "msg_from": "\"Joel Quinet\" <joel.quinet@swing.be>", "msg_from_op": true, "msg_subject": "Problem build python extension on Unixware" } ]
[ { "msg_contents": "Hi all pgsql-hackers,\n\nIam a new hacker of this list.\n\nI and a few others have started an Linux localization project for Indian\nLanguages called - Project Tuxila (http://inapp.com/tuxila), and are\ncurrently doing the localization for an Indian language called\n\"Malayalam\". The utilities that will be developed as part of the\nproject will be under GNU GPL.\n\nWe are trying to develop all the required utilities for linux\nlocalization. And as part of that we are trying to implement\n\"Malayalam\" into postgreSQL with the Unicode support available in it.\n\nCould anyone tell me, whether there is any research going in\npostgreSQL-Unicode areas, so that i can communicate with them and try to\nfind solutions to my problems.\n\nIs there any list for postgreSQL-Unicode?\n\nIs there any Unicode sorting engine present in postgreSQL?\n\nI also invite interested hackers to participate in this FreeSoftware \nmovement.\n\nregards,\nSuraj Kumar S.\n--\nGNU/Linux rulz!\n\n", "msg_date": "Thu, 15 Mar 2001 18:26:52 +0530 (IST)", "msg_from": "\"Suraj Kumar S.\" <s_suraj_in@yahoo.co.uk>", "msg_from_op": true, "msg_subject": "Unicode in postgresql" }, { "msg_contents": "> Hi all pgsql-hackers,\n> \n> Iam a new hacker of this list.\n> \n> I and a few others have started an Linux localization project for Indian\n> Languages called - Project Tuxila (http://inapp.com/tuxila), and are\n> currently doing the localization for an Indian language called\n> \"Malayalam\". The utilities that will be developed as part of the\n> project will be under GNU GPL.\n> \n> We are trying to develop all the required utilities for linux\n> localization. And as part of that we are trying to implement\n> \"Malayalam\" into postgreSQL with the Unicode support available in it.\n\nWhat kind of encoding is Malayalam? Is it ISO 2022 compatible? 
Or yet\nanother local encoding?\n\n> Could anyone tell me whether there is any research going on in\n> PostgreSQL-Unicode areas, so that I can communicate with them and try to\n> find solutions to my problems.\n\nPostgreSQL 7.1 will have a feature that does automatic encoding\nconversion between Unicode (UTF-8) and other encodings, including ISO\n8859-1 to 5 and EUC (Extended Unix Code), in the database engine.\n\n> Is there any list for PostgreSQL-Unicode?\n> \n> Is there any Unicode sorting engine present in PostgreSQL?\n\nNo.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 15 Mar 2001 23:11:18 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unicode in postgresql" } ]
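The automatic conversion Tatsuo describes — recoding text between UTF-8 and single-byte sets such as ISO 8859-1 on its way in or out of the server — boils down to a decode/encode round trip through Unicode code points. Here is a rough illustration using Python's standard codecs; it is only a sketch of the idea, since PostgreSQL's own conversion routines are C tables inside the backend and cover many more encodings:

```python
def recode(data: bytes, src_encoding: str, dst_encoding: str) -> bytes:
    """Re-encode a byte string from one character set to another.

    This is the essence of a client/server encoding conversion:
    decode the bytes to a common representation (Unicode code points),
    then encode that into the target character set.
    """
    return data.decode(src_encoding).encode(dst_encoding)

# ISO 8859-1 "cafe" with e-acute: the single byte 0xE9 becomes the
# two-byte UTF-8 sequence 0xC3 0xA9, and converts back losslessly.
latin1_bytes = b"caf\xe9"
utf8_bytes = recode(latin1_bytes, "iso8859-1", "utf-8")
```

A conversion like this is only lossless while every code point exists in the target set; converting, say, Malayalam text down to ISO 8859-1 would have to fail or substitute, which is exactly why a sorting/collation engine for Unicode is a separate, harder problem.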
[ { "msg_contents": "Based on the tests we did last week, it seems clear than on many\nplatforms it's a win to sync the WAL log by writing it with open()\noption O_SYNC (or O_DSYNC where available) rather than issuing explicit\nfsync() (resp. fdatasync()) calls. In theory fsync ought to be faster,\nbut it seems that too many kernels have inefficient implementations of\nfsync.\n\nI think we need to make both O_SYNC and fsync() choices available in\n7.1. Two important questions need to be settled:\n\n1. Is a compile-time flag (in config.h.in) good enough, or do we need\nto make it configurable via a GUC variable? (A variable would have to\nbe postmaster-start-time changeable only, so you'd still need a\npostmaster restart to change it.)\n\n2. Which way should be the default?\n\nThere's also the lesser question of what to call the config symbol\nor variable.\n\nMy inclination is to go with a compile-time flag named USE_FSYNC_FOR_WAL\nand have the default be off (ie, use O_SYNC by default) but I'm not\nstrongly set on that. 
Opinions anyone?\n\nIn any case the code should automatically prefer O_DSYNC over O_SYNC if\navailable, and should prefer fdatasync() over fsync() if available;\nI doubt we need to provide a knob to alter those choices.\n\nBTW, are there any platforms where O_DSYNC exists but has a different\nspelling?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 12:29:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010315 09:35] wrote:\n> \n> BTW, are there any platforms where O_DSYNC exists but has a different\n> spelling?\n\nYes, FreeBSD only has: O_FSYNC\nit doesn't have O_SYNC nor O_DSYNC.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 09:44:15 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> * Tom Lane <tgl@sss.pgh.pa.us> [010315 09:35] wrote:\n>> BTW, are there any platforms where O_DSYNC exists but has a different\n>> spelling?\n\n> Yes, FreeBSD only has: O_FSYNC\n> it doesn't have O_SYNC nor O_DSYNC.\n\nOkay ... we can fall back to O_FSYNC if we don't see either of the\nothers. No problem. Any other weird cases out there? I think Andreas\nmight've muttered something about AIX but I'm not sure now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 12:48:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> As a general rule, if something can be a run time option, as opposed to a\n> compile time option, then it should be. 
At the very least you keep the\n> installation simple and allow for easier experimenting.\n\nI've been mentally working through the code, and see only one reason why\nit might be necessary to go with a compile-time choice: suppose we see\nthat none of O_DSYNC, O_SYNC, O_FSYNC, [others] are defined? With the\ncompile-time choice it's easy: #define USE_FSYNC_FOR_WAL, and sail on.\nIf it's a GUC variable then we need a way to prevent the GUC option from\nbecoming unset (which would disable the fsync() calls, leaving nothing\nto replace 'em). Doable, perhaps, but seems kind of ugly ... any\nthoughts about that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 13:15:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Tom Lane writes:\n\n> I think we need to make both O_SYNC and fsync() choices available in\n> 7.1. Two important questions need to be settled:\n>\n> 1. Is a compile-time flag (in config.h.in) good enough, or do we need\n> to make it configurable via a GUC variable? (A variable would have to\n> be postmaster-start-time changeable only, so you'd still need a\n> postmaster restart to change it.)\n\nAs a general rule, if something can be a run time option, as opposed to a\ncompile time option, then it should be. At the very least you keep the\ninstallation simple and allow for easier experimenting.\n\n> There's also the lesser question of what to call the config symbol\n> or variable.\n\nI suggest \"wal_use_fsync\" as a GUC variable, assuming the default would be\noff. Otherwise \"wal_use_open_sync\". (Use a general-to-specific naming\nscheme to allow for easier grouping. 
Having defaults be \"off\"\nconsistently is more intuitive.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Thu, 15 Mar 2001 19:17:20 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "> Based on the tests we did last week, it seems clear than on many\n> platforms it's a win to sync the WAL log by writing it with open()\n> option O_SYNC (or O_DSYNC where available) rather than issuing explicit\n> fsync() (resp. fdatasync()) calls. In theory fsync ought to be faster,\n> but it seems that too many kernels have inefficient implementations of\n> fsync.\n\nCan someone explain why configure/platform-specific flags are allowed to\nbe added at this stage in the release, but my pgmonitor patch was\nrejected?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 15:20:09 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > As a general rule, if something can be a run time option, as opposed to a\n> > compile time option, then it should be. At the very least you keep the\n> > installation simple and allow for easier experimenting.\n> \n> I've been mentally working through the code, and see only one reason why\n> it might be necessary to go with a compile-time choice: suppose we see\n> that none of O_DSYNC, O_SYNC, O_FSYNC, [others] are defined? 
With the\n> compile-time choice it's easy: #define USE_FSYNC_FOR_WAL, and sail on.\n> If it's a GUC variable then we need a way to prevent the GUC option from\n> becoming unset (which would disable the fsync() calls, leaving nothing\n> to replace 'em). Doable, perhaps, but seems kind of ugly ... any\n> thoughts about that?\n\nI don't think making something a run-time option is always a good idea. \nGiving people too many choices is often confusing. \n\nI think we should just check at compile time, and choose O_* if we have\nit, and if not, use fsync(). No one will ever do the proper timing\ntests to know which is better except us. 
release, but my pgmonitor patch was\n> > rejected?\n> \n> Possibly just because Marc hasn't stomped on me quite yet ;-)\n> \n> However, I can actually make a case for this: we are flushing out\n> performance bugs in a new feature, ie WAL.\n\n\nYou did a masterful job of making my pgmonitor patch sound like a debug\naid instead of a feature too. :-)\n\nHave you considered a career in law. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 15:32:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "> > I've been mentally working through the code, and see only one reason why\n> > it might be necessary to go with a compile-time choice: suppose we see\n> > that none of O_DSYNC, O_SYNC, O_FSYNC, [others] are defined? With the\n> > compile-time choice it's easy: #define USE_FSYNC_FOR_WAL, and sail on.\n> > If it's a GUC variable then we need a way to prevent the GUC option from\n> > becoming unset (which would disable the fsync() calls, leaving nothing\n> > to replace 'em). Doable, perhaps, but seems kind of ugly ... any\n> > thoughts about that?\n> \n> I don't think having something a run-time option is always a good idea. \n> Giving people too many choices is often confusing. \n> \n> I think we should just check at compile time, and choose O_* if we have\n> it, and if not, use fsync(). No one will ever do the proper timing\n> tests to know which is better except us. 
Also, it seems O_* should be\n> faster because you are fsync'ing the buffer you just wrote, so there is\n> no looking around for dirty buffers like fsync().\n\nI later read Vadim's comment that fsync() of two blocks may be faster\nthan two O_* writes, so I am now confused about the proper solution. \nHowever, I think we need to pick one and make it invisible to the user. \nPerhaps a compiler/config.h flag for testing would be a good solution.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 15:36:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I later read Vadim's comment that fsync() of two blocks may be faster\n> than two O_* writes, so I am now confused about the proper solution. \n> However, I think we need to pick one and make it invisible to the user. \n> Perhaps a compiler/config.h flag for testing would be a good solution.\n\nI believe that we don't know enough yet to nail down a hard-wired\ndecision. Vadim's idea of preferring O_DSYNC if it appears to be\ndifferent from O_SYNC is a good first cut, but I think we'd better make\nit possible to override that, at least for testing purposes.\n\nSo I think it should be configurable at *some* level. I don't much care\nwhether it's a config.h entry or a GUC variable.\n\nBut consider this: we'll be more likely to get some feedback from the\nfield (allowing us to refine the policy in future releases) if it is a\nGUC variable. 
Not many people will build two versions of the software,\nbut people might take the trouble to play with a run-time configuration\nsetting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 15:44:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I later read Vadim's comment that fsync() of two blocks may be faster\n> > than two O_* writes, so I am now confused about the proper solution. \n> > However, I think we need to pick one and make it invisible to the user. \n> > Perhaps a compiler/config.h flag for testing would be a good solution.\n> \n> I believe that we don't know enough yet to nail down a hard-wired\n> decision. Vadim's idea of preferring O_DSYNC if it appears to be\n> different from O_SYNC is a good first cut, but I think we'd better make\n> it possible to override that, at least for testing purposes.\n> \n> So I think it should be configurable at *some* level. I don't much care\n> whether it's a config.h entry or a GUC variable.\n> \n> But consider this: we'll be more likely to get some feedback from the\n> field (allowing us to refine the policy in future releases) if it is a\n> GUC variable. Not many people will build two versions of the software,\n> but people might take the trouble to play with a run-time configuration\n> setting.\n\nYes, I can imagine. Can we remove it once we know the answer?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 15:46:20 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "I'd actually vote for it to remain for a release or two or more, as \nwe get more experience with stuff, the defaults may be different for \ndifferent workloads. \n\nLER\n-- \nLarry Rosenman \n http://www.lerctr.org/~ler/\nPhone: +1 972 414 9812 \n E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749 US\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/15/01, 2:46:20 PM, Bruce Momjian <pgman@candle.pha.pa.us> wrote \nregarding Re: [HACKERS] Allowing WAL fsync to be done via O_SYNC:\n\n\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I later read Vadim's comment that fsync() of two blocks may be faster\n> > > than two O_* writes, so I am now confused about the proper solution.\n> > > However, I think we need to pick one and make it invisible to the user.\n> > > Perhaps a compiler/config.h flag for testing would be a good solution.\n> >\n> > I believe that we don't know enough yet to nail down a hard-wired\n> > decision. Vadim's idea of preferring O_DSYNC if it appears to be\n> > different from O_SYNC is a good first cut, but I think we'd better make\n> > it possible to override that, at least for testing purposes.\n> >\n> > So I think it should be configurable at *some* level. I don't much care\n> > whether it's a config.h entry or a GUC variable.\n> >\n> > But consider this: we'll be more likely to get some feedback from the\n> > field (allowing us to refine the policy in future releases) if it is a\n> > GUC variable. Not many people will build two versions of the software,\n> > but people might take the trouble to play with a run-time configuration\n> > setting.\n\n> Yes, I can imagine. 
Can we remove it once we know the answer?\n\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n\n> http://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Thu, 15 Mar 2001 21:12:36 GMT", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I haven't followed the jungle of numbers too closely.\n> Is it not the case that WAL + fsync is still faster than 7.0 + fsync and\n> WAL/no fsync is still faster than 7.0/no fsync?\n\nI believe the first is true in most cases. I wouldn't swear to the\nsecond though, since WAL requires more I/O and doesn't save any fsyncs\nif you've got 'em all turned off anyway ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 16:32:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Tom Lane writes:\n\n> I've been mentally working through the code, and see only one reason why\n> it might be necessary to go with a compile-time choice: suppose we see\n> that none of O_DSYNC, O_SYNC, O_FSYNC, [others] are defined?\n\nWe postulate that one of those has to exist. 
Alternatively, you make the\noption read\n\nwal_sync_method = fsync | open_sync\n\nIn the \"parse_hook\" for the parameter you #ifdef out 'open_sync' as a\nvalid option if none of those exist, so a user will get \"'open_sync' is\nnot a valid option value\".\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Thu, 15 Mar 2001 22:37:57 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> We postulate that one of those has to exist. Alternatively, you make the\n> option read\n> wal_sync_method = fsync | open_sync\n> In the \"parse_hook\" for the parameter you #ifdef out 'open_sync' as a\n> valid option if none of those exist, so a user will get \"'open_sync' is\n> not a valid option value\".\n\nI like this a lot. In fact, I am mightily tempted to make it\n\nwal_sync_method = fsync | fdatasync | open_sync | open_datasync\n\nwhere fdatasync would only be valid if configure found fdatasync() and\nopen_datasync would only be valid if we found O_DSYNC exists and isn't\nO_SYNC. 
This would let people try all the available methods under\nrealistic test conditions, for hardly any extra work.\n\nFurthermore, the documentation could say something like \"The default is\nthe first available method in the order open_datasync, fdatasync, fsync,\nopen_sync\" (assuming that Vadim's preferences are right).\n\nA small problem is that I don't want to be doing multiple strcasecmp's\nto figure out what to do in xlog.c. Do you object if I add an\n\"assign_hook\" to guc.c that's called when an actual assignment is made?\nThat would provide a place to set up the flag variables that xlog.c\nwould actually look at. Furthermore, having an assign_hook would let us\nsupport changing this value at SIGHUP, not only at postmaster start.\n(The assign hook would just need to fsync whatever WAL file is currently\nopen and possibly close/reopen the file, to ensure that no blocks miss\ngetting synced when we change conventions.)\n\nCreeping featurism strikes again ;-) ... but this feels right ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 17:11:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> switch(lower(string[0]) + lower(string[5]))\n> {\n> \tcase 'f':\t/* fsync */\n> \tcase 'f' + 's':\t/* fdatasync */\n> \tcase 'o' + 's':\t/* open_sync */\n> \tcase 'o' + 'd':\t/* open_datasync */\n> }\n\n> Although ugly, it should serve as a readable solution for now.\n\nUgly is the word ...\n\n>> Do you object if I add an \"assign_hook\" to guc.c that's called when an\n>> actual assignment is made?\n\n> Something like this is on my wish list, but I'm not sure if it's wise to\n> start this now.\n\nI'm not particularly concerned about changing the interface later if\nthat proves necessary. 
We're not likely to have so many of the things\nthat an API change is burdensome, and they will all be strictly backend\ninternal.\n\nWhat I have in mind for now is just\n\n\tvoid (*assign_hook) (const char *newval);\n\n(obviously this is for string variables only, for now) called just\nbefore actually changing the variable value. This lets the hook see\nthe old value if it needs to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 18:11:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Tom Lane writes:\n\n> wal_sync_method = fsync | fdatasync | open_sync | open_datasync\n\n> A small problem is that I don't want to be doing multiple strcasecmp's\n> to figure out what to do in xlog.c.\n\nThis should be efficient:\n\nswitch(lower(string[0]) + lower(string[5]))\n{\n\tcase 'f':\t/* fsync */\n\tcase 'f' + 's':\t/* fdatasync */\n\tcase 'o' + 's':\t/* open_sync */\n\tcase 'o' + 'd':\t/* open_datasync */\n}\n\nAlthough ugly, it should serve as a readable solution for now.\n\n> Do you object if I add an \"assign_hook\" to guc.c that's called when an\n> actual assignment is made?\n\nSomething like this is on my wish list, but I'm not sure if it's wise to\nstart this now. There are a few issues that need some thought, like how\nto make the interface for non-string options, and how to keep it in sync\nwith the parse hook of string options, ...\n\n> That would provide a place to set up the flag variables that xlog.c\n> would actually look at. Furthermore, having an assign_hook would let\n> us support changing this value at SIGHUP, not only at postmaster\n> start. (The assign hook would just need to fsync whatever WAL file is\n> currently open and possibly close/reopen the file, to ensure that no\n> blocks miss getting synced when we change conventions.)\n\n... and possibly here you need to pass the context to the assign hook as\nwell. 
This application strikes me as a bit too esoteric for a first try.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 16 Mar 2001 00:13:00 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Bruce Momjian wrote:\n> \n<snip>\n> No one will ever do the proper timing tests to know which is better except us.\n\nHi Bruce,\n\nI believe in the future that anyone doing serious benchmark tests before\nlarge-scale implementation will indeed be testing things like this. \nThere will also be people/companies out there who will specialise in\n\"tuning\" PostgreSQL systems and they will definitely test stuff like\nthis... different variations, different database structures, different\nOS's, etc.\n\nRegards and best wishes,\n\nJustin Clift\n", "msg_date": "Fri, 16 Mar 2001 11:50:25 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> <snip>\n> > No one will ever do the proper timing tests to know which is better except us.\n> \n> Hi Bruce,\n> \n> I believe in the future that anyone doing serious benchmark tests before\n> large-scale implementation will indeed be testing things like this. \n> There will also be people/companies out there who will specialize in\n> \"tuning\" PostgreSQL systems and they will definitely test stuff like\n> this... different variations, different database structures, different\n> OS's, etc.\n\nBut I don't want to go the Informix/Oracle way where we have so many\ntuning options that no one understands them all. I would like us to\nfind the best options and only give users choices when there is a real\ntradeoff.\n\nFor example, Tom had a nice fsync test program. 
Why can't we run that\non various platforms and collect the results, then make a decision on\nthe best default?\n\nTrying to test the effects of fsync() with a database wrapped around it\nreally makes for difficult measurement anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 19:56:34 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Is someone able to put together a testing-type script or sequence so\npeople can run this on the various platforms and then report the\nresults?\n\nFor example, I can set up benchmarking (or automated testing) on various\nSolaris platforms to run overnight and report the results in the\nmorning. I suspect that quite a few people can do similar.\n\nWould this be a good thing for someone to spend some time and effort on,\nin generating testing-type scripts/structures? It might be a useful\ntool to use in the future when making performance-related decisions like\nthis.\n\nRegards and best wishes,\n\nJustin Clift\n\nTom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I later read Vadim's comment that fsync() of two blocks may be faster\n> > than two O_* writes, so I am now confused about the proper solution.\n> > However, I think we need to pick one and make it invisible to the user.\n> > Perhaps a compiler/config.h flag for testing would be a good solution.\n> \n> I believe that we don't know enough yet to nail down a hard-wired\n> decision. Vadim's idea of preferring O_DSYNC if it appears to be\n> different from O_SYNC is a good first cut, but I think we'd better make\n> it possible to override that, at least for testing purposes.\n> \n> So I think it should be configurable at *some* level. 
I don't much care\n> whether it's a config.h entry or a GUC variable.\n> \n> But consider this: we'll be more likely to get some feedback from the\n> field (allowing us to refine the policy in future releases) if it is a\n> GUC variable. Not many people will build two versions of the software,\n> but people might take the trouble to play with a run-time configuration\n> setting.\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n", "msg_date": "Fri, 16 Mar 2001 12:02:39 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Testing structure (was) Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> For example, Tom had a nice fsync test program. Why can't we run that\n> on various platforms and collect the results, then make a decision on\n> the best default.\n\nMainly because (a) there's not enough time before release, and (b) that\ntest program was far too stupid to give trustworthy results anyway.\n(It was assuming exactly one commit per XLOG block, for example.)\n\n> Trying to test the affects of fsync() with a database wrapped around it\n> really makes for difficult measurement anyway.\n\nExactly. What I'm doing now is providing some infrastructure with which\nwe can hope to see some realistic tests. For example, I'm gonna be\nleaning on Great Bridge's lab guys to rerun their TPC tests with a bunch\nof combinations, just as soon as the dust settles. But I'm not planning\nto put my faith in only that one benchmark.\n\nI'm all for improving the intelligence of the defaults once we know\nenough to pick better defaults. 
But we don't yet, and there's no way\nthat we *will* know enough until after we've shipped a release that has\nthese tuning knobs and gotten some real-world results from the field.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 20:04:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "I was wondering if the multiple writes performed to the XLOG could be\ngrouped into one write(). Seems everyone agrees:\n\t\n\tfdatasync/O_DSYNC is better than plain fsync/O_SYNC\n\nand the O_* flags are better than fsync() if we are doing only one write\nbefore every fsync. It seems the only open question is how often we do\nmultiple writes before fsync, and if that is ever faster than putting\nthe O_* on the file for all writes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 21:57:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was wondering if the multiple writes performed to the XLOG could be\n> grouped into one write().\n\nThat would require fairly major restructuring of xlog.c, which I don't\nwant to undertake at this point in the cycle (we're trying to push out\na release candidate, remember?). I'm not convinced it would be a huge\nwin anyway. It would be a win if your average transaction writes\nmultiple blocks' worth of XLOG ... 
but if your average transaction\nwrites less than a block then it won't help.\n\nI think it probably is a good idea to restructure xlog.c so that it can\nwrite more than one page at a time --- but it's not such a great idea\nthat I want to hold up the release any more for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 22:41:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was wondering if the multiple writes performed to the XLOG could be\n> > grouped into one write().\n> \n> That would require fairly major restructuring of xlog.c, which I don't\n> want to undertake at this point in the cycle (we're trying to push out\n> a release candidate, remember?). I'm not convinced it would be a huge\n> win anyway. It would be a win if your average transaction writes\n> multiple blocks' worth of XLOG ... but if your average transaction\n> writes less than a block then it won't help.\n> \n> I think it probably is a good idea to restructure xlog.c so that it can\n> write more than one page at a time --- but it's not such a great idea\n> that I want to hold up the release any more for it.\n\nOK, but the point of adding all those configuration options was to allow\nus to figure out which was faster. If you can do the code so we no\nlonger need to know the answer of which is best, why bother adding the\nconfig options. Just ship our best guess and fix it when we can. Does\nthat make sense?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 22:57:57 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, but the point of adding all those configuration options was to allow\n> us to figure out which was faster. If you can do the code so we no\n> longer need to know the answer of which is best, why bother adding the\n> config options.\n\nHow in the world did you arrive at that idea? I don't see anyone around\nhere but you claiming that we don't need any experimentation ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 23:01:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, but the point of adding all those configuration options was to allow\n> > us to figure out which was faster. If you can do the code so we no\n> > longer need to know the answer of which is best, why bother adding the\n> > config options.\n> \n> How in the world did you arrive at that idea? I don't see anyone around\n> here but you claiming that we don't need any experimentation ...\n\nI am trying to understand what testing we need to do. I know we need\nconfigure tests to check to see what exists in the OS.\n\nMy question was what are we needing to test? If we can do only single writes\nto the log, don't we prefer O_* to fsync, and the O_D* options over\nplain O_*? Am I confused?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 23:06:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> My question was what are we needing to test? If we can do only single writes\n> to the log, don't we prefer O_* to fsync, and the O_D* options over\n> plain O_*? Am I confused?\n\nI don't think we have enough data to conclude that with any certainty.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 00:23:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > My question was what are we needing to test? If we can do only single writes\n> > to the log, don't we prefer O_* to fsync, and the O_D* options over\n> > plain O_*? Am I confused?\n> \n> I don't think we have enough data to conclude that with any certainty.\n\nI just figured we knew the answers to the above issues, and that the only\nissue was multiple writes vs. fsync().\n\nIt is hard for me to imagine O_* being slower than fsync(), or fdatasync\nbeing slower than fsync. Are we not able to assume that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Mar 2001 00:26:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It is hard for me to imagine O_* being slower than fsync(),\n\nNot hard at all --- if we're writing multiple xlog blocks per\ntransaction, then O_* constrains the sequence of operations more\nthan we really want. Changing xlog.c to combine writes as much\nas possible would reduce this problem, but not eliminate it.\n\nBesides, the entire object of this exercise is to work around\nan unexpected inefficiency in some kernels' implementations of\nfsync/fdatasync (viz, scanning over lots of not-dirty buffers).\nWho's to say that there might not be inefficiencies in other\nplatforms' implementations of the O_* options?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 00:54:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Added to TODO:\n\n\t* Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options\n\t * Allow multiple blocks to be written to WAL with one write() \n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It is hard for me to imagine O_* being slower than fsync(),\n> \n> Not hard at all --- if we're writing multiple xlog blocks per\n> transaction, then O_* constrains the sequence of operations more\n> than we really want. 
Changing xlog.c to combine writes as much\n> as possible would reduce this problem, but not eliminate it.\n> \n> Besides, the entire object of this exercise is to work around\n> an unexpected inefficiency in some kernels' implementations of\n> fsync/fdatasync (viz, scanning over lots of not-dirty buffers).\n> Who's to say that there might not be inefficiencies in other\n> platforms' implementations of the O_* options?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Mar 2001 15:32:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" } ]
[ { "msg_contents": "> Based on the tests we did last week, it seems clear that on many\n> platforms it's a win to sync the WAL log by writing it with open()\n> option O_SYNC (or O_DSYNC where available) rather than \n> issuing explicit fsync() (resp. fdatasync()) calls.\n\nI don't remember big difference in using fsync or O_SYNC in tfsync\ntests. Both depend on block size and keeping in mind that fsync\nallows us syncing after writing *multiple* blocks I would either\nuse fsync as default or don't deal with O_SYNC at all.\nBut if O_DSYNC is defined and O_DSYNC != O_SYNC then we should\nuse O_DSYNC by default.\n(BTW, we didn't compare fdatasync and O_SYNC yet).\n\nVadim\n", "msg_date": "Thu, 15 Mar 2001 10:53:36 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> ... I would either\n> use fsync as default or don't deal with O_SYNC at all.\n> But if O_DSYNC is defined and O_DSYNC != O_SYNC then we should\n> use O_DSYNC by default.\n\nHm. We could do that reasonably painlessly as a compile-time test in\nxlog.c, but I'm not clear on how it would play out as a GUC option.\nPeter, what do you think about configuration-dependent defaults for\nGUC variables?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 14:04:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010315 11:07] wrote:\n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> > ... I would either\n> > use fsync as default or don't deal with O_SYNC at all.\n> > But if O_DSYNC is defined and O_DSYNC != O_SYNC then we should\n> > use O_DSYNC by default.\n> \n> Hm. 
We could do that reasonably painlessly as a compile-time test in\n> xlog.c, but I'm not clear on how it would play out as a GUC option.\n> Peter, what do you think about configuration-dependent defaults for\n> GUC variables?\n\nSorry, what's a GUC? :)\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 11:17:24 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Alfred Perlstein wrote:\n> * Tom Lane <tgl@sss.pgh.pa.us> [010315 11:07] wrote:\n> > Peter, what do you think about configuration-dependent defaults for\n> > GUC variables?\n \n> Sorry, what's a GUC? :)\n\nGrand Unified Configuration, Peter E.'s baby.\n\nSee the thread starting at\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2000-03/msg00107.html\nfor details.\n\n(And the search is working.... :-)).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 15 Mar 2001 14:37:03 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010315 11:33] wrote:\n> Alfred Perlstein writes:\n> \n> > Sorry, what's a GUC? :)\n> \n> Grand Unified Configuration system\n> \n> It's basically a cute name for the achievement that there's now a single\n> name space and interface for (almost) all postmaster run time\n> configuration variables,\n\nOh, thanks.\n\nWell considering that, a runtime check for doing_sync_wal_writes\n== 1 shouldn't be that expensive. 
Sort of the inverse of -F,\nmeaning that we're using O_SYNC for WAL writes, we don't need to\nfsync it.\n\nBtw, if you guys want to get some speed with WAL, I'd implement a\nwrite-behind process if it was possible to do the O_SYNC writes.\n\n...\n\nAnd since we're sorta on the topic of IO, I noticed that it looks\nlike (at least in 7.0.3) that vacuum and certain other routines\nread files in reverse order.\n\nThe problem (at least in FreeBSD) is that we haven't tuned\nthe system to detect reverse reading and hence don't do\nmuch readahead. There may be some going on as a function\nof the read clustering, but I'm not entirely sure.\n\nI'd suspect that other OSs might have neglected to check\nfor reverse reading of files as well, but I'm not sure.\n\nBasically, if there was a way to do this another way, or\nanticipate the backwards motion and do large reads, it\nmay add latency, but it should improve performance.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 11:40:14 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Alfred Perlstein writes:\n\n> Sorry, what's a GUC? 
:)\n\nGrand Unified Configuration system\n\nIt's basically a cute name for the achievement that there's now a single\nname space and interface for (almost) all postmaster run time\nconfiguration variables,\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Thu, 15 Mar 2001 20:43:28 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> And since we're sorta on the topic of IO, I noticed that it looks\n> like (at least in 7.0.3) that vacuum and certain other routines\n> read files in reverse order.\n\nVacuum does that because it's trying to push tuples down from the end\ninto free space in earlier blocks. I don't see much way around that\n(nor any good reason to think that it's a critical part of vacuum's\nperformance anyway). Where else have you seen such behavior?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 14:45:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010315 11:45] wrote:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> > And since we're sorta on the topic of IO, I noticed that it looks\n> > like (at least in 7.0.3) that vacuum and certain other routines\n> > read files in reverse order.\n> \n> Vacuum does that because it's trying to push tuples down from the end\n> into free space in earlier blocks. I don't see much way around that\n> (nor any good reason to think that it's a critical part of vacuum's\n> performance anyway). 
Where else have you seen such behavior?\n\nJust vacuum, but the source is large, and I'm sort of lacking\non database-foo so I guessed that it may be done elsewhere.\n\nYou can optimize this out by implementing the read behind yourselves\nsorta like this:\n\nstruct sglist *\nread(fd, len)\n{\n\n\tif (fd.lastpos - fd.curpos <= THRESHOLD) {\n\t\tfd.curpos = fd.lastpos - THRESHOLD;\n\t\tlen = THRESHOLD;\n\t}\n\n\treturn (do_read(fd, len));\n}\n\nof course this is entirely wrong, but illustrates what\nwould/could help.\n\nI would fix FreeBSD, but it's sort of a mess and beyond what\nI've got time to do ATM.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 11:51:21 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> > Based on the tests we did last week, it seems clear that on many\n> > platforms it's a win to sync the WAL log by writing it with open()\n> > option O_SYNC (or O_DSYNC where available) rather than \n> > issuing explicit fsync() (resp. fdatasync()) calls.\n> \n> I don't remember big difference in using fsync or O_SYNC in tfsync\n> tests. Both depend on block size and keeping in mind that fsync\n> allows us syncing after writing *multiple* blocks I would either\n> use fsync as default or don't deal with O_SYNC at all.\n\nI see what you are saying. That the OS may be faster at fsync'ing two\nblocks in one operation rather than doing two O_SYNC operations.\n\nSeems we should just pick a default and leave the rest for a later\nrelease. Marc wants RC1 tomorrow, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 15:30:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Peter, what do you think about configuration-dependent defaults for\n>> GUC variables?\n\n> We have plenty of those already, but we should avoid a variable whose\n> specification is:\n\n> \"The default is 'on' if your system defines one of the macros O_SYNC,\n> O_DSYNC, O_FSYNC, and if O_SYNC and O_DSYNC are distinct, otherwise the\n> default is 'off'.\"\n\nUnfortunately, I think that's just about what the default would need to\nbe. What alternative do you have to offer?\n\n> The net result of this would be that the average user would have\n> absolutely no clue what the default on his machine is.\n\nSure he would. Fire up the software and do \"SHOW wal_use_fsync\"\n(or whatever we call it). I think the documentation could just say\n\"the default is platform-dependent\".\n\n> Additionally consider that maybe O_SYNC and O_DSYNC have different values\n> but the kernel treats them the same anyway. We really shouldn't try to\n> guess that far.\n\nWell, that's exactly *why* we need an overridable default. Or would you\nlike to try to do some performance measurements in configure?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 16:28:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Tom Lane writes:\n\n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> > ... I would either\n> > use fsync as default or don't deal with O_SYNC at all.\n> > But if O_DSYNC is defined and O_DSYNC != O_SYNC then we should\n> > use O_DSYNC by default.\n>\n> Hm. 
We could do that reasonably painlessly as a compile-time test in\n> xlog.c, but I'm not clear on how it would play out as a GUC option.\n> Peter, what do you think about configuration-dependent defaults for\n> GUC variables?\n\nWe have plenty of those already, but we should avoid a variable whose\nspecification is:\n\n\"The default is 'on' if your system defines one of the macros O_SYNC,\nO_DSYNC, O_FSYNC, and if O_SYNC and O_DSYNC are distinct, otherwise the\ndefault is 'off'.\"\n\nThe net result of this would be that the average user would have\nabsolutely no clue what the default on his machine is.\n\nAdditionally consider that maybe O_SYNC and O_DSYNC have different values\nbut the kernel treats them the same anyway. We really shouldn't try to\nguess that far.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Thu, 15 Mar 2001 22:33:44 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "> \"The default is 'on' if your system defines one of the macros O_SYNC,\n> O_DSYNC, O_FSYNC, and if O_SYNC and O_DSYNC are distinct, otherwise the\n> default is 'off'.\"\n> \n> The net result of this would be that the average user would have\n> absolutely no clue what the default on his machine is.\n> \n> Additionally consider that maybe O_SYNC and O_DSYNC have different values\n> but the kernel treats them the same anyway. We really shouldn't try to\n> guess that far.\n\nGood point. I think Tom already found fdatasync points to fsync in his\nlibc, or something like that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 16:39:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "> Well, that's exactly *why* we need an overridable default. Or would you\n> like to try to do some performance measurements in configure?\n\nAt this point I'm more comfortable with a compile-time option\n(determined statically or in a configure compilation test, not a\nperformance test), rather than a GUC variable. But imho 7.1 will be nice\nwith either choice, and if you think that a variable will make it easier\nfor developers to do tuning from a distance (as opposed to having it\njust confuse new users) then... ;)\n\n - Thomas\n", "msg_date": "Thu, 15 Mar 2001 23:50:35 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" } ]
[ { "msg_contents": "\tI have started the \"PL/pgSQL CookBook\" project. The goal is to\ncreate a cookbook of PL/pgSQL functions that will be catalogued and made\navailable for others to use and learn from.\n\tCome to http://www.brasileiro.net/postgres and contribute your own \nPL/pgSQL (or PL/Tcl, PL/Perl) function or trigger! This will help many\nPostgres users, both novice and experienced, to use its procedural\nlanguages.\n\tThe CookBook has several sections, and you can add your own. No login\nis required, just come and contribute.\n\n\tOnce again http://www.brasileiro.net/postgres \n\n\tOh, did I mention that you get your own \"PostgreSQL Powered\" button\nwhen you contribute a function/trigger? :)\n\n\t-Roberto\t\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \nPimentus annus alter, refrescum est.\n", "msg_date": "Thu, 15 Mar 2001 14:13:49 -0700", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": true, "msg_subject": "Contribute to the PL/pgSQL CookBook !!" } ]
[ { "msg_contents": "> I believe that we don't know enough yet to nail down a hard-wired\n> decision. Vadim's idea of preferring O_DSYNC if it appears to be\n> different from O_SYNC is a good first cut, but I think we'd \n> better make it possible to override that, at least for testing purposes.\n\nSo let's leave fsync as default and add option to open log files\nwith O_DSYNC/O_SYNC.\n\nVadim\n", "msg_date": "Thu, 15 Mar 2001 13:28:08 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "* Mikheev, Vadim <vmikheev@SECTORBASE.COM> [010315 13:52] wrote:\n> > I believe that we don't know enough yet to nail down a hard-wired\n> > decision. Vadim's idea of preferring O_DSYNC if it appears to be\n> > different from O_SYNC is a good first cut, but I think we'd \n> > better make it possible to override that, at least for testing purposes.\n> \n> So let's leave fsync as default and add option to open log files\n> with O_DSYNC/O_SYNC.\n\nI have a weird and untested suggestion:\n\nHow many files need to be fsync'd?\n\nIf it's more than one, what might work is using mmap() to map the\nfiles in adjacent areas, then calling msync() on the entire range,\nthis would allow you to batch fsync the data.\n\nThe only problem is that I'm not sure:\n\n1) how portable msync() is.\n2) if msync guarantees metadata consistency.\n\nAnother benefit of mmap() is the 'zero' copy nature of it.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 14:51:00 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> How many files need to be fsync'd?\n\nOnly one.\n\n> If it's more than one, what might work is using mmap() to map the\n> files in adjacent areas, then calling 
msync() on the entire range,\n> this would allow you to batch fsync the data.\n\nInteresting thought, but mmap to a prespecified address is most\ndefinitely not portable, whether or not you want to assume that\nplain mmap is ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 17:54:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010315 14:54] wrote:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> > How many files need to be fsync'd?\n> \n> Only one.\n> \n> > If it's more than one, what might work is using mmap() to map the\n> > files in adjacent areas, then calling msync() on the entire range,\n> > this would allow you to batch fsync the data.\n> \n> Interesting thought, but mmap to a prespecified address is most\n> definitely not portable, whether or not you want to assume that\n> plain mmap is ...\n\nYeah... :(\n\nEvil thought though (for reference):\n\nmmap(anon memory) returns addr1\naddr2 = addr1 + maplen\nsplit addr1<->addr2 on points A B and C\nmmap(file1 over addr1 to A)\nmmap(file2 over A to B)\nmmap(file3 over B to C)\nmmap(file4 over C to addr2)\n\nIt _should_ work, but there's probably some corner cases where it\ndoesn't.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 15:02:20 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Hello Tom,\n\nFriday, March 16, 2001, 6:54:22 AM, you wrote:\n\nTL> Alfred Perlstein <bright@wintelcom.net> writes:\n>> How many files need to be fsync'd?\n\nTL> Only one.\n\n>> If it's more than one, what might work is using mmap() to map the\n>> files in adjacent areas, then calling msync() on the entire range,\n>> this would allow you to batch fsync the data.\n\nTL> Interesting thought, but 
mmap to a prespecified address is most\nTL> definitely not portable, whether or not you want to assume that\nTL> plain mmap is ...\n\nTL> regards, tom lane\n\nCould anyone consider fork a syncer process to sync data to disk ?\nbuild a shared sync queue, when a daemon process want to do sync after\nwrite() is called, just put a sync request to the queue. this can release\nprocess from blocked on writing as soon as possible. multiple sync\nrequest for one file can be merged when the request is been inserting to\nthe queue.\n\n-- \nRegards,\nXu Yifeng\n\n\n", "msg_date": "Fri, 16 Mar 2001 14:26:00 +0800", "msg_from": "Xu Yifeng <jamexu@telekbird.com.cn>", "msg_from_op": false, "msg_subject": "Re[2]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "* Xu Yifeng <jamexu@telekbird.com.cn> [010315 22:25] wrote:\n> Hello Tom,\n> \n> Friday, March 16, 2001, 6:54:22 AM, you wrote:\n> \n> TL> Alfred Perlstein <bright@wintelcom.net> writes:\n> >> How many files need to be fsync'd?\n> \n> TL> Only one.\n> \n> >> If it's more than one, what might work is using mmap() to map the\n> >> files in adjacent areas, then calling msync() on the entire range,\n> >> this would allow you to batch fsync the data.\n> \n> TL> Interesting thought, but mmap to a prespecified address is most\n> TL> definitely not portable, whether or not you want to assume that\n> TL> plain mmap is ...\n> \n> TL> regards, tom lane\n> \n> Could anyone consider fork a syncer process to sync data to disk ?\n> build a shared sync queue, when a daemon process want to do sync after\n> write() is called, just put a sync request to the queue. this can release\n> process from blocked on writing as soon as possible. multiple sync\n> request for one file can be merged when the request is been inserting to\n> the queue.\n\nI suggested this about a year ago. 
:)\n\nThe problem is that you need that process to potentially open and close\nmany files over and over.\n\nI still think it's somewhat of a good idea.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 23:21:09 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Hello Alfred,\n\nFriday, March 16, 2001, 3:21:09 PM, you wrote:\n\nAP> * Xu Yifeng <jamexu@telekbird.com.cn> [010315 22:25] wrote:\n>>\n>> Could anyone consider fork a syncer process to sync data to disk ?\n>> build a shared sync queue, when a daemon process want to do sync after\n>> write() is called, just put a sync request to the queue. this can release\n>> process from blocked on writing as soon as possible. multiple sync\n>> request for one file can be merged when the request is been inserting to\n>> the queue.\n\nAP> I suggested this about a year ago. :)\n\nAP> The problem is that you need that process to potentially open and close\nAP> many files over and over.\n\nAP> I still think it's somewhat of a good idea.\n\nI am not a DBMS guru.\ncouldn't the syncer process cache opened files? is there any problem I\ndidn't consider ?\n\n-- \nBest regards,\nXu Yifeng\n\n\n", "msg_date": "Fri, 16 Mar 2001 16:53:12 +0800", "msg_from": "Xu Yifeng <jamexu@telekbird.com.cn>", "msg_from_op": false, "msg_subject": "Re[4]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "* Xu Yifeng <jamexu@telekbird.com.cn> [010316 01:15] wrote:\n> Hello Alfred,\n> \n> Friday, March 16, 2001, 3:21:09 PM, you wrote:\n> \n> AP> * Xu Yifeng <jamexu@telekbird.com.cn> [010315 22:25] wrote:\n> >>\n> >> Could anyone consider fork a syncer process to sync data to disk ?\n> >> build a shared sync queue, when a daemon process want to do sync after\n> >> write() is called, just put a sync request to the queue. 
this can release\n> >> process from blocked on writing as soon as possible. multipile sync\n> >> request for one file can be merged when the request is been inserting to\n> >> the queue.\n> \n> AP> I suggested this about a year ago. :)\n> \n> AP> The problem is that you need that process to potentially open and close\n> AP> many files over and over.\n> \n> AP> I still think it's somewhat of a good idea.\n> \n> I am not a DBMS guru.\n\nHah, same here. :)\n\n> couldn't the syncer process cache opened files? is there any problem I\n> didn't consider ?\n\n1) IPC latency, the amount of time it takes to call fsync will\n increase by at least two context switches.\n\n2) a working set (number of files needed to be fsync'd) that\n is larger than the amount of files you wish to keep open.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Fri, 16 Mar 2001 04:45:35 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "> > Could anyone consider fork a syncer process to sync data to disk ?\n> > build a shared sync queue, when a daemon process want to do sync after\n> > write() is called, just put a sync request to the queue. this can release\n> > process from blocked on writing as soon as possible. multipile sync\n> > request for one file can be merged when the request is been inserting to\n> > the queue.\n> \n> I suggested this about a year ago. :)\n> \n> The problem is that you need that process to potentially open and close\n> many files over and over.\n> \n> I still think it's somewhat of a good idea.\n\nI like the idea too, but people want the transaction to return COMMIT\nonly after data has been fsync'ed so I don't see a big win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Mar 2001 10:11:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010316 07:11] wrote:\n> > > Could anyone consider fork a syncer process to sync data to disk ?\n> > > build a shared sync queue, when a daemon process want to do sync after\n> > > write() is called, just put a sync request to the queue. this can release\n> > > process from blocked on writing as soon as possible. multipile sync\n> > > request for one file can be merged when the request is been inserting to\n> > > the queue.\n> > \n> > I suggested this about a year ago. :)\n> > \n> > The problem is that you need that process to potentially open and close\n> > many files over and over.\n> > \n> > I still think it's somewhat of a good idea.\n> \n> I like the idea too, but people want the transaction to return COMMIT\n> only after data has been fsync'ed so I don't see a big win.\n\nThis isn't simply handing off the sync to this other process, it requires\nan ack from the syncer before returning 'COMMIT'.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Fri, 16 Mar 2001 07:43:24 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> > > Could anyone consider fork a syncer process to sync data to disk ?\n> > > build a shared sync queue, when a daemon process want to do sync after\n> > > write() is called, just put a sync request to the queue. this can\nrelease\n> > > process from blocked on writing as soon as possible. 
multipile sync\n> > > request for one file can be merged when the request is been inserting\nto\n> > > the queue.\n> >\n> > I suggested this about a year ago. :)\n> >\n> > The problem is that you need that process to potentially open and close\n> > many files over and over.\n> >\n> > I still think it's somewhat of a good idea.\n>\n> I like the idea too, but people want the transaction to return COMMIT\n> only after data has been fsync'ed so I don't see a big win.\n\nFor a log file on a busy system, this could improve throughput a lot--batch\ncommit. You end up with fewer than one fsync() per transaction.\n\n\n", "msg_date": "Fri, 16 Mar 2001 10:44:49 -0500", "msg_from": "\"Ken Hirsch\" <kahirsch@bellsouth.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n>> couldn't the syncer process cache opened files? is there any problem I\n>> didn't consider ?\n\n> 1) IPC latency, the amount of time it takes to call fsync will\n> increase by at least two context switches.\n\n> 2) a working set (number of files needed to be fsync'd) that\n> is larger than the amount of files you wish to keep open.\n\nThese days we're really only interested in fsync'ing the current WAL\nlog file, so working set doesn't seem like a problem anymore. However\ncontext-switch latency is likely to be a big problem. One thing we'd\ndefinitely need before considering this is to replace the existing\nspinlock mechanism with something more efficient.\n\nVadim has designed the WAL stuff in such a way that a separate\nwriter/syncer process would be easy to add; in fact it's almost that way\nalready, in that any backend can write or sync data that's been added\nto the queue by any other backend. The question is whether it'd\nactually buy anything to have another process. 
Good stuff to experiment\nwith for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 11:03:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010316 08:16] wrote:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> >> couldn't the syncer process cache opened files? is there any problem I\n> >> didn't consider ?\n> \n> > 1) IPC latency, the amount of time it takes to call fsync will\n> > increase by at least two context switches.\n> \n> > 2) a working set (number of files needed to be fsync'd) that\n> > is larger than the amount of files you wish to keep open.\n> \n> These days we're really only interested in fsync'ing the current WAL\n> log file, so working set doesn't seem like a problem anymore. However\n> context-switch latency is likely to be a big problem. One thing we'd\n> definitely need before considering this is to replace the existing\n> spinlock mechanism with something more efficient.\n\nWhat sort of problems are you seeing with the spinlock code?\n\n> Vadim has designed the WAL stuff in such a way that a separate\n> writer/syncer process would be easy to add; in fact it's almost that way\n> already, in that any backend can write or sync data that's been added\n> to the queue by any other backend. The question is whether it'd\n> actually buy anything to have another process. Good stuff to experiment\n> with for 7.2.\n\nThe delayed/coallecesed (sp?) 
fsync looked interesting.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Fri, 16 Mar 2001 08:18:26 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n>> definitely need before considering this is to replace the existing\n>> spinlock mechanism with something more efficient.\n\n> What sort of problems are you seeing with the spinlock code?\n\nIt's great as long as you never block, but it sucks for making things\nwait, because the wait interval will be some multiple of 10 msec rather\nthan just the time till the lock comes free.\n\nWe've speculated about using Posix semaphores instead, on platforms\nwhere those are available. I think Bruce was concerned about the\npossible overhead of pulling in a whole thread-support library just to\nget semaphores, however.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 11:23:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "On Fri, 16 Mar 2001, Tom Lane wrote:\n\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> >> definitely need before considering this is to replace the existing\n> >> spinlock mechanism with something more efficient.\n>\n> > What sort of problems are you seeing with the spinlock code?\n>\n> It's great as long as you never block, but it sucks for making things\n> wait, because the wait interval will be some multiple of 10 msec rather\n> than just the time till the lock comes free.\n>\n> We've speculated about using Posix semaphores instead, on platforms\n> where those are available. 
I think Bruce was concerned about the\n> possible overhead of pulling in a whole thread-support library just to\n> get semaphores, however.\n\nBut, with shared libraries, are you really pulling in a \"whole\nthread-support library\"? My understanding of shared libraries (altho it\nmay be totally off) was that instead of pulling in a whole library, you\npulled in the bits that you needed, pretty much as you needed them ...\n\n\n\n", "msg_date": "Fri, 16 Mar 2001 13:10:34 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> >> definitely need before considering this is to replace the existing\n> >> spinlock mechanism with something more efficient.\n> \n> > What sort of problems are you seeing with the spinlock code?\n> \n> It's great as long as you never block, but it sucks for making things\n> wait, because the wait interval will be some multiple of 10 msec rather\n> than just the time till the lock comes free.\n\nPlus, using select() for the timeout is putting you into the kernel\nmultiple times in a short period, and causing a reschedule everytime,\nwhich is a big lose. This was discussed in the linux-kernel thread\nthat was referred to a few days ago.\n\n> We've speculated about using Posix semaphores instead, on platforms\n> where those are available. I think Bruce was concerned about the\n> possible overhead of pulling in a whole thread-support library just to\n> get semaphores, however.\n\nAre Posix semaphores faster by definition than SysV semaphores (which\nare described as \"slow\" in the source comments)? 
I can't see how\nthey'd be much faster unless locking/unlocking an uncontended\nsemaphore avoids a system call, in which case you might run into the\nsame problems with userland backoff...\n\nJust looked, and on Linux pthreads and POSIX semaphores are both\nalready in the C library. Unfortunately, the Linux C library doesn't\nsupport the PROCESS_SHARED attribute for either pthreads mutexes or\nPOSIX semaphores. Grumble. What's the point then?\n\nJust some ignorant ramblings, thanks for listening...\n\n-Doug\n", "msg_date": "16 Mar 2001 12:17:38 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Yes, you are. On UnixWare, you need to add -Kthread, which CHANGES a LOT \nof primitives to go through threads wrappers and scheduling.\n\nSee the doc on the http://UW7DOC.SCO.COM or http://www.lerctr.org:457/ \nweb pages.\n\nAlso, some functions are NOT available without the -Kthread or -Kpthread \ndirectives. \n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/16/01, 11:10:34 AM, The Hermit Hacker <scrappy@hub.org> wrote \nregarding Re: Re[4]: [HACKERS] Allowing WAL fsync to be done via O_SYNC :\n\n\n> On Fri, 16 Mar 2001, Tom Lane wrote:\n\n> > Alfred Perlstein <bright@wintelcom.net> writes:\n> > >> definitely need before considering this is to replace the existing\n> > >> spinlock mechanism with something more efficient.\n> >\n> > > What sort of problems are you seeing with the spinlock code?\n> >\n> > It's great as long as you never block, but it sucks for making things\n> > wait, because the wait interval will be some multiple of 10 msec rather\n> > than just the time till the lock comes free.\n> >\n> > We've speculated about using Posix semaphores instead, on platforms\n> > where those are available. 
I think Bruce was concerned about the\n> > possible overhead of pulling in a whole thread-support library just to\n> > get semaphores, however.\n\n> But, with shared libraries, are you really pulling in a \"whole\n> thread-support library\"? My understanding of shared libraries (altho it\n> may be totally off) was that instead of pulling in a whole library, you\n> pulled in the bits that you needed, pretty much as you needed them ...\n\n\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Fri, 16 Mar 2001 17:23:48 GMT", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Yes, you are. On UnixWare, you need to add -Kthread, which CHANGES a LOT \n> of primitives to go through threads wrappers and scheduling.\n\nThis was my concern; the change that happens on startup and lib calls\nwhen thread support comes in through a library.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Mar 2001 12:34:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n>> But, with shared libraries, are you really pulling in a \"whole\n>> thread-support library\"?\n\n> Yes, you are. 
On UnixWare, you need to add -Kthread, which CHANGES a LOT \n> of primitives to go through threads wrappers and scheduling.\n\nRight, it's not so much that we care about referencing another shlib,\nit's that -lpthreads may cause you to get a whole new thread-aware\nversion of libc, with attendant overhead that we don't need or want.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 12:36:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "> On 3/16/01, 11:10:34 AM, The Hermit Hacker <scrappy@hub.org> wrote \n> regarding Re: Re[4]: [HACKERS] Allowing WAL fsync to be done via O_SYNC :\n> \n> > But, with shared libraries, are you really pulling in a \"whole\n> > thread-support library\"? My understanding of shared libraries (altho it\n> > may be totally off) was that instead of pulling in a whole library, you\n> > pulled in the bits that you needed, pretty much as you needed them ...\n\n\n* Larry Rosenman <ler@lerctr.org> [010316 10:02] wrote:\n> Yes, you are. On UnixWare, you need to add -Kthread, which CHANGES a LOT \n> of primitives to go through threads wrappers and scheduling.\n> \n> See the doc on the http://UW7DOC.SCO.COM or http://www.lerctr.org:457/ \n> web pages.\n> \n> Also, some functions are NOT available without the -Kthread or -Kpthread \n> directives. \n\nThis is true on FreeBSD as well.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Fri, 16 Mar 2001 12:15:58 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Re[4]: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "The Hermit Hacker wrote:\n>> \n> But, with shared libraries, are you really pulling in a \"whole\n> thread-support library\"? 
My understanding of shared libraries (altho it\n> may be totally off) was that instead of pulling in a whole library, you\n> pulled in the bits that you needed, pretty much as you needed them ...\n\nJust by making a thread call libc changes personality to use thread\nsafe routines (I.E. add mutex locking). Use one thread feature, get\nthe whole set...which may not be that bad.\n-- \nWilliam K. Volkman.\nCIO - H.I.S. Financial Services Corporation.\n102 S. Tejon, Ste. 920, Colorado Springs, CO 80903\nPhone: 719-633-6942 Fax: 719-633-7006 Cell: 719-330-8423\n", "msg_date": "Fri, 16 Mar 2001 18:01:39 -0700", "msg_from": "\"William K. Volkman\" <wkv@hiscorp.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "* William K. Volkman <wkv@hiscorp.net> [010318 11:56] wrote:\n> The Hermit Hacker wrote:\n> >> \n> > But, with shared libraries, are you really pulling in a \"whole\n> > thread-support library\"? My understanding of shared libraries (altho it\n> > may be totally off) was that instead of pulling in a whole library, you\n> > pulled in the bits that you needed, pretty much as you needed them ...\n> \n> Just by making a thread call libc changes personality to use thread\n> safe routines (I.E. add mutex locking). Use one thread feature, get\n> the whole set...which may not be that bad.\n\nActually it can be pretty bad. Locked bus cycles needed for mutex\noperations are very, very expensive, not something you want to do\nunless you really really need to do it.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Sun, 18 Mar 2001 12:03:28 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n>> Just by making a thread call libc changes personality to use thread\n>> safe routines (I.E. add mutex locking). 
Use one thread feature, get\n>> the whole set...which may not be that bad.\n\n> Actually it can be pretty bad. Locked bus cycles needed for mutex\n> operations are very, very expensive, not something you want to do\n> unless you really really need to do it.\n\nIt'd be interesting to try to get some numbers about the actual cost\nof using a thread-aware libc, on platforms where there's a difference.\nShouldn't be that hard to build a postgres executable with the proper\nlibrary and run some benchmarks ... anyone care to try?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Mar 2001 15:52:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010318 14:55]:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> >> Just by making a thread call libc changes personality to use thread\n> >> safe routines (I.E. add mutex locking). Use one thread feature, get\n> >> the whole set...which may not be that bad.\n> \n> > Actually it can be pretty bad. Locked bus cycles needed for mutex\n> > operations are very, very expensive, not something you want to do\n> > unless you really really need to do it.\n> \n> It'd be interesting to try to get some numbers about the actual cost\n> of using a thread-aware libc, on platforms where there's a difference.\n> Shouldn't be that hard to build a postgres executable with the proper\n> library and run some benchmarks ... 
anyone care to try?\nI can get the code compiled, but don't have the skills to generate\na test case worthy of anything....\n\nLER\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Mar 2001 16:15:06 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "* Larry Rosenman <ler@lerctr.org> [010318 14:17] wrote:\n> * Tom Lane <tgl@sss.pgh.pa.us> [010318 14:55]:\n> > Alfred Perlstein <bright@wintelcom.net> writes:\n> > >> Just by making a thread call libc changes personality to use thread\n> > >> safe routines (I.E. add mutex locking). Use one thread feature, get\n> > >> the whole set...which may not be that bad.\n> > \n> > > Actually it can be pretty bad. Locked bus cycles needed for mutex\n> > > operations are very, very expensive, not something you want to do\n> > > unless you really really need to do it.\n> > \n> > It'd be interesting to try to get some numbers about the actual cost\n> > of using a thread-aware libc, on platforms where there's a difference.\n> > Shouldn't be that hard to build a postgres executable with the proper\n> > library and run some benchmarks ... anyone care to try?\n> I can get the code compiled, but don't have the skills to generate\n> a test case worthy of anything....\n\nThere's a 'make test' or something ('regression' maybe?) 
target that\nruns a suite of tests on the database, you could use that as a\nbench/timer, you could also try mysql's \"crashme\" script.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Sun, 18 Mar 2001 14:48:31 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I can get the code compiled, but don't have the skills to generate\n> a test case worthy of anything....\n\ncontrib/pgbench would do as a first cut.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Mar 2001 18:47:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "> * William K. Volkman <wkv@hiscorp.net> [010318 11:56] wrote:\n> > The Hermit Hacker wrote:\n> > >> \n> > > But, with shared libraries, are you really pulling in a \"whole\n> > > thread-support library\"? My understanding of shared libraries (altho it\n> > > may be totally off) was that instead of pulling in a whole library, you\n> > > pulled in the bits that you needed, pretty much as you needed them ...\n> > \n> > Just by making a thread call libc changes personality to use thread\n> > safe routines (I.E. add mutex locking). Use one thread feature, get\n> > the whole set...which may not be that bad.\n> \n> Actually it can be pretty bad. Locked bus cycles needed for mutex\n> operations are very, very expensive, not something you want to do\n> unless you really really need to do it.\n\nAnd don't forget buggy implementations.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Mar 2001 09:29:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC" } ]
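The "batch commit" idea raised in the thread above (fewer than one fsync() per transaction) amounts to queueing several commit records and issuing a single flush for all of them. The sketch below is a minimal illustration of that amortization, not PostgreSQL's actual WAL code; the `group_commit` helper and file layout are invented for the example:

```python
import os
import tempfile

def group_commit(wal_path, records):
    """Append several commit records, then flush once.

    The cost of the single os.fsync() is amortized over every
    transaction whose record made it into this batch, which is
    the essence of group commit.
    """
    fd = os.open(wal_path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    try:
        for rec in records:      # many transactions queue their records ...
            os.write(fd, rec)
        os.fsync(fd)             # ... one physical flush acknowledges them all
    finally:
        os.close(fd)
    return len(records)          # commits serviced by a single fsync

wal = os.path.join(tempfile.mkdtemp(), "wal.log")
acked = group_commit(wal, [b"commit-1\n", b"commit-2\n", b"commit-3\n"])
```

With fsync-per-commit the same three transactions would have cost three flushes; whether such batching wins in practice was exactly the open question in the thread.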
[ { "msg_contents": "I have completed all the features I want in the first release of\npgmonitor. It is available at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/pgmonitor.tar.gz\n\nI am going to send this over soon to announce/general to encourage its\nuse.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Mar 2001 16:41:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pgmonitor completed" } ]
[ { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@Yahoo.com> writes:\n> > What about a collector deamon, fired up by the postmaster and\n> > receiving UDP packets from the backends. Under heavy load, it\n> > might miss some statistic messages, well, but that's not as\n> > bad as having locks causing backends to loose performance.\n>\n> Interesting thought, but we don't want UDP I think; that just opens\n> up a whole can of worms about checking access permissions and so forth.\n> Why not a simple pipe? The postmaster creates the pipe and the\n> collector daemon inherits one end, while all the backends inherit the\n> other end.\n\n I don't think so - though I haven't tested the following yet,\n but AFAIR it's correct.\n\n Have the postmaster creating two UDP sockets before it forks\n off the collector. It can examine the peer addresses of both,\n so they don't need well known port numbers, it can be the\n randomly ones assigned by the kernel. Thus, we don't need\n SO_REUSE on them either.\n\n Now, since the collector is forked off by the postmaster, it\n knows the peer address of the other socket. And since all\n backends get forked off from the postmaster as well, they'll\n all use the same peer address, don't they? So all the\n collector has to look at is the sender address including port\n number of the packets. It needs to be what the postmaster\n examined, anything else is from someone else and goes to bit\n heaven. The same way the backends know where to send their\n statistics.\n\n If I'm right that in the case of fork() all children share\n the same socket with the same peer address, then it's even\n safe in the case the collector dies. The postmaster can still\n hold the collectors socket and will notice that the collector\n died (due to a wait() returning it's PID) and can fire up\n another one. 
Again some packets got lost (plus all the so far\n collected statistics, hmmm - aint that a cool way to reset\n statistic counters - killing the collector?), but it did not\n disturb any live backend in any way. They will never get any\n signal, don't care about what's done with their statistics\n and such. They just do their work...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 15 Mar 2001 17:22:41 -0500 (EST)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Performance monitor signal handler" } ]
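Jan's design above, two anonymous UDP sockets created by the postmaster and cross-connected so each side only accepts the other's datagrams, can be modeled in a few lines. This is a toy sketch of the scheme as described, not the statistics collector that was eventually implemented; the socket names are invented:

```python
import socket

# The postmaster creates both sockets before forking the collector;
# binding to port 0 lets the kernel assign ephemeral ports, so no
# well-known port (and no SO_REUSE) is needed.
backend_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
backend_sock.bind(("127.0.0.1", 0))
collector_sock.bind(("127.0.0.1", 0))

# Cross-connect: connect() on a datagram socket fixes its peer, and the
# kernel thereafter discards datagrams arriving from any other address,
# which is the access check the scheme relies on.
backend_sock.connect(collector_sock.getsockname())
collector_sock.connect(backend_sock.getsockname())

# Every backend forked from the postmaster inherits backend_sock, so all
# statistics messages reach the collector from the one trusted address.
collector_sock.settimeout(5)            # avoid blocking forever if lost
backend_sock.send(b"tuples_read=42")
msg = collector_sock.recv(1024)
```

If the collector dies, the postmaster still holds the collector's socket and can fork a replacement without the backends ever noticing, just as described above.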
[ { "msg_contents": "\n\tI have started the \"PL/pgSQL CookBook\" project. The goal is to\ncreate a cookbook of PL/pgSQL functions that will be catalogued and made\navailable for others to use and learn from.\n\tCome to http://www.brasileiro.net/postgres and contribute your own \nPL/pgSQL (or PL/Tcl, PL/Perl) function or trigger! This will help many\nPostgres users, both novice and experienced, to use its procedural\nlanguages.\n\tThe CookBook has several sections, and you can add your own. No login\nis required, just come and contribute.\n\n\tOnce again http://www.brasileiro.net/postgres \n\n\tOh, did I mention that you get your own \"PostgreSQL Powered\" button\nwhen you contribute a function/trigger? :)\n\n\t-Roberto\t\n\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \nTetris tagline: @@ o@o oooo @oo oo@ \n", "msg_date": "Thu, 15 Mar 2001 15:45:06 -0700", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": true, "msg_subject": "Contribute to the PL/pgSQL CookBook !!" } ]
[ { "msg_contents": "I have written a simple search engine that utilizes pgsql. I would\nlike to make it a stored procedure, although I am still learning\nplpgsql. Attached to this message is the basic search logic (in PHP).\nThis script will be in production on a major e-commerce site in about\na week. I think it would be much faster as a stored procedure.\n\nThis implementation searches product data, but it could be used for\nanything.\n\nThe script does a few things, gets the ids of valid words, finds\nwhich products these words are mapped to, finds which products have\nthe most keyword mappings out of the result set, and outputs a result\nset joined to the product table and ordered by products with the most\nkeyword hits.\n\nIt accomplishes this by keeping an indexed list of words, maintaining a\nmapping table to products. Then when someone does a search, a\ntemporary table called 'hits' is created which stores the product_id\nthat was matched to a word. Then an additional temporary table is\ncreated which consists of the product id and hit count from the\nsearch. 
The search then returns the product details ordered by the\nproduct that had the most hits.\n\nIf you are interested in seeing it in action, I can send you a url; if\nyou'd like to implement it I can help you out, and if you can help me\nconvert it to a stored procedure I'd be very appreciative!\n\nIt uses five tables:\n\nCREATE TABLE \"pa_search_keyword\" (\n \"keyword_id\" int4 DEFAULT\nnextval('\"pa_search_keywor_keyword_id_seq\"'::text) NOT NULL,\n \"keyword_value\" varchar(30),\n CONSTRAINT \"pa_search_keyword_pkey\" PRIMARY KEY (\"keyword_id\")\n);\n\n and\n\nCREATE TABLE \"pa_search_map\" (\n \"keyword_id\" int4 NOT NULL,\n \"product_id\" int4 NOT NULL,\n CONSTRAINT \"pa_search_map_pkey\" PRIMARY KEY (\"keyword_id\",\n\"product_id\")\n);\n\ntwo temporary tables:\n\nCREATE TEMPORARY TABLE hits \n(product_id integer not null);\n\nand\n\nCREATE TEMPORARY TABLE prod_hit_count (\nproduct_id integer not null, \nhit_count smallint not null\n);\n\nthe fifth table would be the table you are joining to to get the\nproduct data, or details, or whatever.\n\n-Ryan Mahoney\n", "msg_date": "Thu, 15 Mar 2001 22:59:41 GMT", "msg_from": "ryan@paymentalliance.net", "msg_from_op": true, "msg_subject": "PostgreSQL Search Engine - searchraw.php3 (0/1)" } ]
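The ranking flow Ryan describes (resolve search words to keyword ids, collect keyword-to-product matches into a "hits" table, count hits per product, return products ordered by hit count) can be sketched outside the database. The following is an editorial Python illustration of that logic only; the table names mirror his schema, but the sample keywords, mappings, and the tiny stopword list are invented, and this is not the PHP code from his attachment.

```python
# Illustrative re-implementation of the search flow described above:
# words -> keyword ids -> per-product "hits" -> rank by hit count.
from collections import Counter

# pa_search_keyword: keyword_value -> keyword_id (sample data)
keywords = {"RED": 1, "SHIRT": 2, "COTTON": 3}

# pa_search_map: (keyword_id, product_id) pairs (sample data)
search_map = [(1, 100), (2, 100), (2, 200), (3, 200), (3, 300)]

STOPWORDS = {"A", "THE", "AND"}  # the real script uses a much longer list


def search(criteria):
    """Return product_ids ordered by descending keyword hit count."""
    words = [w for w in criteria.upper().split() if w not in STOPWORDS]
    wanted_ids = {keywords[w] for w in words if w in keywords}
    # the temporary "hits" table: one row per (keyword, product) match
    hits = [pid for kid, pid in search_map if kid in wanted_ids]
    # the "prod_hit_count" table: product_id -> hit_count
    hit_count = Counter(hits)
    return [pid for pid, _ in hit_count.most_common()]


print(search("a red cotton shirt"))
```

In the real schema this corresponds to the `INSERT INTO hits ... WHERE keyword_id IN (...)` step followed by the per-product `COUNT(*)` into `prod_hit_count` and the final ordered join.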
[ { "msg_contents": "[uuencoded attachment omitted: searchraw.php3, the PHP search script described in part (0/1) of this posting]\n", "msg_date": "Thu, 15 Mar 2001 22:59:42 GMT", "msg_from": "ryan@paymentalliance.net", "msg_from_op": true, "msg_subject": "PostgreSQL Search Engine - searchraw.php3 (1/1)" } ]
[ { "msg_contents": "\nI know there are still discussions going on concerning the whole fsync\nissue, but, from what I've been following, it's purely a performance issue\nrather than anything else ...\n\nNow that Tom's patch is in place for the XLOG stuff, I'd like to put out a\nBeta6 tomorrow for testing, with an RC1 scheduled for next week ...\n\nIs there anything *major* left, other than the fsync issue, that needs to\nbe resolved?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Thu, 15 Mar 2001 21:19:47 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Beta6 for Tomorrow?" }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Is there anything *major* left, other than the fsync issue, that needs to\n> be resolved?\n\nDon't believe so.\n\nI'm testing xlog fsync revisions now, should be ready to commit in an\nhour or so. (I'm just curious to see what it does to the pgbench\nresults...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 20:42:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Beta6 for Tomorrow? " }, { "msg_contents": "On Thu, 15 Mar 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Is there anything *major* left, other than the fsync issue, that needs to\n> > be resolved?\n>\n> Don't believe so.\n>\n> I'm testing xlog fsync revisions now, should be ready to commit in an\n> hour or so. (I'm just curious to see what it does to the pgbench\n> results...)\n\nOkay, I'll wrap up beta6 tomorrow, give a weekend for ppl to test, and\n*finally* roll out RC1 if nobody has anything major that crops up ...:)\n\n", "msg_date": "Thu, 15 Mar 2001 21:48:16 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Beta6 for Tomorrow? " } ]
[ { "msg_contents": "Greetings,\n\n I'm getting errors as a result of making queries. The attached log from \nthe last two queries gives an idea as to what is happening.\n\n I suspect I must have done something bad when I upgraded via cvsup.\n\n Does this mean that I should do an initdb?\n\n-- \nSincerely etc.,\n\n NAME Christopher Sawtell\n CELL PHONE 021 257 4451\n ICQ UIN 45863470\n EMAIL csawtell @ xtra . co . nz\n CNOTES ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz\n\n -->> Please refrain from using HTML or WORD attachments in e-mails to me \n<<--", "msg_date": "Fri, 16 Mar 2001 15:51:48 +1300", "msg_from": "Christopher Sawtell <csawtell@xtra.co.nz>", "msg_from_op": true, "msg_subject": "FATAL 2: XLogFlush: request is not satisfied" }, { "msg_contents": "Christopher Sawtell <csawtell@xtra.co.nz> writes:\n> I'm getting errors as a result of making queries. The attached log from \n> the last two queries gives an idea as to what is happening.\n\nHmmm ... you were the one who did the pg_resetxlog bit today, right?\nI have a feeling I missed something in that. Back to the drawing\nboard...\n\n> Does this mean that I should do an initdb?\n\nAfraid so. Sorry about that. You should be able to do a clean dump at\nleast.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 22:57:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FATAL 2: XLogFlush: request is not satisfied " } ]
[ { "msg_contents": "\n> Okay ... we can fall back to O_FSYNC if we don't see either of the\n> others. No problem. Any other weird cases out there? I think Andreas\n> might've muttered something about AIX but I'm not sure now.\n\nYou can safely use O_DSYNC on AIX; the only special thing on AIX is\nthat it does not make a speed difference to O_SYNC. This is imho\nbecause the jfs only needs one sync write to the jfs journal for meta info \nin either case (so that nobody misunderstands: both perform excellently).\n\nAndreas\n", "msg_date": "Fri, 16 Mar 2001 16:02:38 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "> > Okay ... we can fall back to O_FSYNC if we don't see either of the\n> > others. No problem. Any other weird cases out there? I think Andreas\n> > might've muttered something about AIX but I'm not sure now.\n> You can safely use O_DSYNC on AIX; the only special thing on AIX is\n> that it does not make a speed difference to O_SYNC. This is imho\n> because the jfs only needs one sync write to the jfs journal for meta info\n> in either case (so that nobody misunderstands: both perform excellently).\n\nHmm. Does everyone run jfs on AIX, or are there other file systems\navailable? The same issue should be raised for Linux (at least): have we\ntried test cases with both journaling and non-journaling file systems?\nPerhaps the flag choice would be markedly different for the different\noptions?\n\n - Thomas\n", "msg_date": "Fri, 16 Mar 2001 15:11:51 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: AW: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "\nMy UnixWare box runs Veritas' VXFS, and has Online-Data Manager \ninstalled. 
Documentation is available at http://www.lerctr.org:457/ \n\nThere are MULTIPLE sync modes, and there are also hints an app can give \nto the FS. \n\nMore info is available if you want. \n\nLER\n\n-- \nLarry Rosenman \n http://www.lerctr.org/~ler/\nPhone: +1 972 414 9812 \n E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749 US\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/16/01, 9:11:51 AM, Thomas Lockhart <lockhart@alumni.caltech.edu> wrote \nregarding [HACKERS] Re: AW: Allowing WAL fsync to be done via O_SYNC:\n\n\n> > > Okay ... we can fall back to O_FSYNC if we don't see either of the\n> > > others. No problem. Any other weird cases out there? I think Andreas\n> > > might've muttered something about AIX but I'm not sure now.\n> > You can safely use O_DSYNC on AIX, the only special on AIX is,\n> > that it does not make a speed difference to O_SYNC. This is imho\n> > because the jfs only needs one sync write to the jfs journal for meta \ninfo\n> > in eighter case (so that nobody misunderstands: both perform excellent).\n\n> Hmm. Does everyone run jfs on AIX, or are there other file systems\n> available? 
The same issue should be raised for Linux (at least): have we\n> tried test cases with both journaling and non-journaling file systems?\n> Perhaps the flag choice would be markedly different for the different\n> options?\n\n> - Thomas\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Fri, 16 Mar 2001 15:38:43 GMT", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Re: AW: Allowing WAL fsync to be done via O_SYNC" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> tried test cases with both journaling and non-journaling file systems?\n> Perhaps the flag choice would be markedly different for the different\n> options?\n\nGood point. Another reason we don't have enough data to nail this down\nyet. Anyway, the code is in there and people can run test cases if they\nplease...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 10:52:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Allowing WAL fsync to be done via O_SYNC " } ]
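The fallback order discussed in this thread (prefer O_DSYNC, then O_SYNC, otherwise plain writes followed by an explicit fsync) can be sketched at the syscall level. The snippet below is an editorial illustration using Python's `os` module, not PostgreSQL's xlog.c logic; which flag gets picked simply depends on what the platform's headers define, and the file path is a throwaway temp file.

```python
# Sketch of choosing a WAL-style sync method: prefer O_DSYNC (data-only
# sync), then O_SYNC, else fall back to write() plus an explicit fsync().
import os
import tempfile


def pick_sync_flag():
    """Return (method name, open(2) flag); flag 0 means fsync fallback."""
    for name in ("O_DSYNC", "O_SYNC"):
        flag = getattr(os, name, None)  # absent on some platforms
        if flag:
            return name, flag
    return "fsync-fallback", 0


def sync_write(path, data):
    name, flag = pick_sync_flag()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | flag, 0o600)
    try:
        os.write(fd, data)
        if flag == 0:  # no open-time sync flag: force it out explicitly
            os.fsync(fd)
    finally:
        os.close(fd)
    return name


path = os.path.join(tempfile.mkdtemp(), "xlog-test")
method = sync_write(path, b"commit record")
print("sync method:", method)
```

Note this only shows the selection logic; it says nothing about the journaling-vs-non-journaling filesystem question Thomas raises, which is about what the kernel does underneath these flags.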
[ { "msg_contents": "> > I was wondering if the multiple writes performed to the \n> > XLOG could be grouped into one write().\n> \n> That would require fairly major restructuring of xlog.c, which I don't\n\nRestructuring? Why? It's only XLogWrite() that makes writes.\n\n> want to undertake at this point in the cycle (we're trying to push out\n> a release candidate, remember?). I'm not convinced it would be a huge\n> win anyway. It would be a win if your average transaction writes\n> multiple blocks' worth of XLOG ... but if your average transaction\n> writes less than a block then it won't help.\n\nBut in a multi-user environment multiple transactions may write > 1 block\nbefore commit.\n\n> I think it probably is a good idea to restructure xlog.c so \n> that it can write more than one page at a time --- but it's\n> not such a great idea that I want to hold up the release any\n> more for it.\n\nAgreed.\n\nVadim\n", "msg_date": "Fri, 16 Mar 2001 08:55:24 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> I was wondering if the multiple writes performed to the \n> XLOG could be grouped into one write().\n>> \n>> That would require fairly major restructuring of xlog.c, which I don't\n\n> Restructuring? Why? It's only XLogWrite() that makes writes.\n\nI was thinking of changing the data structure. 
I guess you could keep\nthe data structure the same and make XLogWrite more complicated, though.\n\n>> I think it probably is a good idea to restructure xlog.c so \n>> that it can write more than one page at a time --- but it's\n>> not such a great idea that I want to hold up the release any\n>> more for it.\n\n> Agreed.\n\nYes, to-do item for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 11:59:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing WAL fsync to be done via O_SYNC " } ]
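The to-do item agreed above (let XLogWrite push more than one page per write()) amounts to coalescing runs of contiguous dirty pages into single writes. Here is an editorial toy model of that batching in Python; the page numbers and contents are invented and this is not the xlog.c implementation.

```python
# Toy model of batching contiguous log pages into single write() calls:
# dirty_pages maps page number -> page bytes; each run of consecutive
# page numbers is flushed with one write instead of one write per page.
def coalesced_writes(dirty_pages):
    """Return [(start_page, combined_bytes), ...], one entry per write."""
    writes = []
    run_start, buf, prev = None, b"", None
    for pageno in sorted(dirty_pages):
        if run_start is not None and pageno == prev + 1:
            buf += dirty_pages[pageno]      # extend the current run
        else:
            if run_start is not None:
                writes.append((run_start, buf))
            run_start, buf = pageno, dirty_pages[pageno]
        prev = pageno
    if run_start is not None:
        writes.append((run_start, buf))
    return writes


pages = {0: b"AA", 1: b"BB", 2: b"CC", 7: b"DD", 8: b"EE"}
print(coalesced_writes(pages))  # two writes instead of five
```

The payoff matches Vadim's point: with several concurrent transactions dirtying more than one block before commit, the number of syscalls (and, with O_SYNC-style flags, the number of synchronous I/Os) drops.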
[ { "msg_contents": "> We've speculated about using Posix semaphores instead, on platforms\n\nFor spinlocks we should use pthread mutex-es.\n\n> where those are available. I think Bruce was concerned about the\n\nAnd mutex-es are more portable than semaphores.\n\nVadim\n", "msg_date": "Fri, 16 Mar 2001 09:10:43 -0800", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Re[4]: Allowing WAL fsync to be done via O_SYNC " } ]
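Vadim's suggestion is about replacing busy-wait/timed-retry spinlocks with a primitive the kernel can block on (a pthread mutex). As an editorial contrast only, the sketch below builds a naive test-and-set spinlock in Python, with `threading.Lock` standing in for the underlying atomic flag; it illustrates the retry-loop interface, not PostgreSQL internals, and a real spinlock would nap between retries rather than spin flat out.

```python
# Naive test-and-set spinlock: acquire() retries until the flag is free,
# counting wasted attempts. A real mutex would let the waiter block in
# the kernel instead of burning retries.
import threading


class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()   # used only as an atomic flag
        self.spins = 0

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            self.spins += 1             # busy retry; real code would nap

    def release(self):
        self._flag.release()


counter = 0
lock = SpinLock()


def bump(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1
        lock.release()


threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

The mutex version is the same program with `lock = threading.Lock()`; the difference under contention is where waiters spend their time, not the result.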
[ { "msg_contents": "> >> definitely need before considering this is to replace the existing\n> >> spinlock mechanism with something more efficient.\n> \n> > What sort of problems are you seeing with the spinlock code?\n> \n> It's great as long as you never block, but it sucks for making things\n\nI like optimistic approaches :-)\n\n> wait, because the wait interval will be some multiple of 10 msec rather\n> than just the time till the lock comes free.\n\nOn the AIX platform usleep (3) is able to really sleep microseconds without \nbusying the cpu when called for more than approx. 100 us (the longer the interval,\nthe less busy the cpu gets).\nWould this not be ideal for spin_lock, or is usleep not very common?\nLinux says it is in the BSD 4.3 standard.\n\npostgres@s0188000zeu:/usr/postgres> time ustest # with 100 us\nreal 0m10.95s\nuser 0m0.40s\nsys 0m0.74s\n\npostgres@s0188000zeu:/usr/postgres> time ustest # with 10 us\nreal 0m18.62s\nuser 0m1.37s\nsys 0m5.73s\n\nAndreas\n\nPS: sorry, off for the weekend now :-) Current looks good on AIX.", "msg_date": "Fri, 16 Mar 2001 18:14:06 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re[4]: Allowing WAL fsync to be done via O_SYNC " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> It's great as long as you never block, but it sucks for making things\n>> wait, because the wait interval will be some multiple of 10 msec rather\n>> than just the time till the lock comes free.\n\n> On the AIX platform usleep (3) is able to really sleep microseconds without \n> busying the cpu when called for more than approx. 100 us (the longer the interval,\n> the less busy the cpu gets).\n> Would this not be ideal for spin_lock, or is usleep not very common?\n> Linux says it is in the BSD 4.3 standard.\n\nHPUX has usleep, but the man page says\n\n The usleep() function is included for its historical usage. 
The\n setitimer() function is preferred over this function.\n\nIn any case, I would expect that all these functions offer accuracy\nno better than the scheduler's regular clock cycle (~ 100Hz) on most\nkernels.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 16:59:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Re[4]: Allowing WAL fsync to be done via O_SYNC " } ]
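Tom's objection — that on most kernels of the era sleep-based waits resolve no finer than the scheduler tick, so a short backoff really costs a whole multiple of 10 ms — can be modeled as rounding the requested interval up to whole ticks. This is an editorial sketch; the 100 Hz figure is the conventional tick rate being discussed, not a measurement.

```python
# Model of scheduler-granularity sleeping: a request to sleep for t
# microseconds actually blocks for a whole number of clock ticks.
import math

TICK_US = 10_000  # 100 Hz scheduler: one tick = 10 ms


def effective_sleep_us(requested_us, tick_us=TICK_US):
    """Smallest multiple of the tick that covers the request."""
    if requested_us <= 0:
        return 0
    return math.ceil(requested_us / tick_us) * tick_us


# A 100 us spinlock backoff really costs a full 10 ms tick:
print(effective_sleep_us(100))     # 10000
print(effective_sleep_us(15_000))  # 20000
```

This is also why Andreas's AIX usleep numbers don't settle the question by themselves: sleeping without busying the CPU is not the same as waking up promptly when the lock frees.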
[ { "msg_contents": "\n> For a log file on a busy system, this could improve throughput a lot--batch\n> commit. You end up with fewer than one fsync() per transaction.\n\nThis is not the issue, since that is already implemented.\nThe current bunching method might have room for improvement, but\nthere are currently fewer fsync's than transactions when appropriate.\n\nAndreas\n", "msg_date": "Fri, 16 Mar 2001 18:14:29 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Allowing WAL fsync to be done via O_SYNC" } ]
[ { "msg_contents": "My problem is that my two outer joined tables have columns that have the \nsame names. Therefore when my select list tries to reference the \ncolumns they are ambiguously defined. Looking at the doc I see the way \nto deal with this is by using the following syntax:\n\ntable as alias (column1alias, column2alias,...)\n\nSo we can alias the conflicting column names to resolve the problem. \nHowever the problem with this is that the column aliases are positional \nper the table structure. Thus column1alias applies to the first column \nin the table. Code that relies on the order of columns in a table is \nvery brittle. As adding a column always places it at the end of the \ntable, it is very easy to have a newly installed site have one order \n(the order the create table command creates them in) and a site \nupgrading from an older version (where the upgrade simply adds the new \ncolumns) to have column orders be different.\n\nMy feeling is that postgres has misinterpreted the SQL92 spec in this \nregard. But I am having problems finding an online copy of the SQL92 \nspec so that I can verify.\n\nWhat I would expect the syntax to be is:\n\ntable as alias (columna as aliasa, columnb as aliasb,...)\n\nThis will allow the query to work regardless of what the table column \norder is. Generally the SQL spec has tried not to tie query behaviour \nto the table column order.\n\nI will fix my code so that it works given how postgres currently \nsupports the column aliases.\n\nCan anyone point me to a copy of the SQL92 spec so that I can research \nthis more?\n\nthanks,\n--Barry\n\n", "msg_date": "Fri, 16 Mar 2001 10:17:33 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": true, "msg_subject": "Problems with outer joins in 7.1beta5" }, { "msg_contents": "On Fri, Mar 16, 2001 at 10:17:33AM -0800, Barry Lind wrote:\n> \n> My feeling is that postgres has misinterpreted the SQL92 spec in this \n> regard. 
But I am having problems finding an online copy of the SQL92 \n> spec so that I can verify.\n> \n> What I would expect the syntax to be is:\n> \n> table as alias (columna as aliasa, columnb as aliasb,...)\n> \n> This will allow the query to work regardless of what the table column \n> order is. Generally the SQL spec has tried not to tie query behaviour \n> to the table column order.\n> \n\nWhat you expect, and what's in the spec. can be very different. As\nthe following quote shows, the definition is in fact order dependent:\nnote that a <derived column list> is a simple comma delimited list of\ncolumn names.\n\nQuote from SQL'92:\n\n 6.3 <table reference>\n\n Function\n\n Reference a table.\n\n Format\n\n <table reference> ::=\n <table name> [ [ AS ] <correlation name>\n [ <left paren> <derived column list> <right paren> ] ]\n | <derived table> [ AS ] <correlation name>\n [ <left paren> <derived column list> <right paren> ]\n | <joined table>\n\n <derived table> ::= <table subquery>\n\n <derived column list> ::= <column name list>\n\n <column name list> ::=\n <column name> [ { <comma> <column name> }... ]\n\n\n Syntax Rules\n\n[...]\n\n 7) If a <derived column list> is specified in a <table reference>,\n then the number of <column name>s in the <derived column list>\n shall be the same as the degree of the table specified by the\n <derived table> or the <table name> of that <table reference>,\n and the name of the i-th column of that <derived table> or the\n effective name of the i-th column of that <table name> is the\n i-th <column name> in that <derived column list>.\n\n", "msg_date": "Fri, 16 Mar 2001 13:34:45 -0600", "msg_from": "\"Ross J. 
Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with outer joins in 7.1beta5" }, { "msg_contents": "Barry Lind <barry@xythos.com> writes:\n> What I would expect the syntax to be is:\n> table as alias (columna as aliasa, columnb as aliasb,...)\n> This will allow the query to work regardless of what the table column \n> order is. Generally the SQL spec has tried not to tie query behaviour \n> to the table column order.\n\nUnfortunately, the spec authors seem to have forgotten that basic design\nrule when they wrote the aliasing syntax. Column alias lists are\nposition-sensitive:\n\n <table reference> ::=\n <table name> [ [ AS ] <correlation name>\n [ <left paren> <derived column list> <right paren> ] ]\n | <derived table> [ AS ] <correlation name>\n [ <left paren> <derived column list> <right paren> ]\n | <joined table>\n\n <derived column list> ::= <column name list>\n\n <column name list> ::=\n <column name> [ { <comma> <column name> }... ]\n\nSQL99 seems to be no better. Sorry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 14:58:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems with outer joins in 7.1beta5 " } ]
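Syntax rule 7 quoted by Ross makes the correspondence purely positional: the i-th name in the derived column list renames the i-th column of the table, whatever that column is called. The editorial sketch below models that rule directly and shows why Barry's queries break when two sites' tables differ only in column order; the table layouts are invented examples.

```python
# SQL92 6.3 syntax rule 7, modeled directly: the i-th name in a derived
# column list renames the i-th column of the table -- by position, not name.
def apply_derived_column_list(table_columns, derived_list):
    """Map each derived column name to the table column it renames."""
    if len(derived_list) != len(table_columns):
        raise ValueError("derived column list must match table degree")
    return dict(zip(derived_list, table_columns))


# "t AS a (x, y)" against two sites whose tables differ only in order:
site1 = ["id", "name"]  # freshly created table
site2 = ["name", "id"]  # hypothetical upgraded table with a different order
print(apply_derived_column_list(site1, ["x", "y"]))  # x renames id
print(apply_derived_column_list(site2, ["x", "y"]))  # x renames name
```

Under the syntax Barry expected (`columna as aliasa`), the mapping would be keyed by column name and the two sites would behave identically; under the standard's rule, `x` silently means different columns on the two sites.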