[ { "msg_contents": "Hi,\n\nI have a transaction that has multiple separate command in it (nothing \nunusual there).\n\nHowever sometimes one of the sql statements will fail and so the whole \ntransaction fails.\n\nIn some cases I could fix the failing statement if only I knew which one \nit was. Can anyone thin...
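One way to keep the rest of the transaction alive when a single statement fails is SAVEPOINTs (available in PostgreSQL since 8.0): wrap each risky statement in its own savepoint and roll back only that statement. A minimal self-contained sketch — Python's sqlite3 is used here purely so the example runs anywhere (SQLite accepts the same SAVEPOINT / ROLLBACK TO syntax); the table, data, and savepoint name are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage the transaction ourselves
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

cur.execute("BEGIN")
statements = [1, 2, 2, 3]            # the second "2" violates the PK
for n in statements:
    cur.execute("SAVEPOINT sp")
    try:
        cur.execute("INSERT INTO t (id) VALUES (?)", (n,))
        cur.execute("RELEASE SAVEPOINT sp")
    except sqlite3.IntegrityError:
        # only this statement is undone; the transaction survives,
        # and we know exactly which statement failed
        cur.execute("ROLLBACK TO SAVEPOINT sp")
        cur.execute("RELEASE SAVEPOINT sp")
cur.execute("COMMIT")

rows = [r[0] for r in cur.execute("SELECT id FROM t ORDER BY id")]
print(rows)  # the three distinct values survive the failed insert
```

The `except` branch is also the natural place to log which statement failed, answering the poster's question directly.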
[ { "msg_contents": "I have a function, call it \"myfunc()\", that is REALLY expensive computationally. Think of it like, \"If you call this function, it's going to telephone the Microsoft Help line and wait in their support queue to get the answer.\" Ok, it's not that bad, but it's so bad that the optimizer sh...
[ { "msg_contents": "Christian Paul B. Cosinas wrote:\n> I try to run this command in my linux server.\n> VACUUM FULL pg_class;\n> VACUUM FULL pg_attribute;\n> VACUUM FULL pg_depend;\n> \n> But it give me the following error:\n> \t-bash: VACUUM: command not found\n\nThat needs to be run from psql ...\n\n> \n> \n>...
[ { "msg_contents": "Hi everyone,\n\nI have a question about the performance of sort.\n\nSetup: Dell Dimension 3000, Suse 10, 1GB ram, PostgreSQL 8.1 RC 1 with \nPostGIS, 1 built-in 80 GB IDE drive, 1 SATA Seagate 400GB drive. The \nIDE drive has the OS and the WAL files, the SATA drive the database. \n From hdp...
[ { "msg_contents": "I have run into this type of query problem as well. I solved it in my\napplication by the following type of query.\n\nSELECT tlid\nFROM completechain AS o\nWHERE not exists ( \n\tSELECT 1\n\tFROM completechain\n\tWHERE tlid=o.tlid and ogc_fid!=o.ogc_fid\n);\n\nAssumes of course that you have...
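The anti-join above can be sketched end to end. SQLite is used here only to keep the example self-contained and runnable — the SQL is the same as in the post — and the toy rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE completechain (ogc_fid INTEGER PRIMARY KEY, tlid INTEGER)")
cur.executemany("INSERT INTO completechain VALUES (?, ?)",
                [(1, 100), (2, 100), (3, 200), (4, 300)])

# tlids that appear exactly once: no other row shares the tlid
# but carries a different ogc_fid (the NOT EXISTS anti-join above)
unique_tlids = [r[0] for r in cur.execute("""
    SELECT tlid
    FROM completechain AS o
    WHERE NOT EXISTS (
        SELECT 1 FROM completechain
        WHERE tlid = o.tlid AND ogc_fid != o.ogc_fid
    )
    ORDER BY tlid
""")]
print(unique_tlids)  # tlid 100 appears twice, so only 200 and 300 qualify
```

As the post notes, this assumes an index on `tlid` to make the inner probe cheap.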
[ { "msg_contents": "Charlie, \n\n> Should I expect results like this? I realize that the \n> computer is quite low-end and is very IO bound for this \n> query, but I'm still surprised that the sort operation takes so long.\n\nIt's the sort performance of Postgres that's your problem.\n \n> Out of curiosity, I s...
[ { "msg_contents": "unsubscribe\n\nunsubscribe", "msg_date": "Wed, 9 Nov 2005 10:23:54 +0800", "msg_from": "William Lai <wlai2768@gmail.com>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "Hi, all!\n\nI've been using postgresql for a long time now, but today I had some\nproblem I couldn't solve properly - I hope some more experienced\nusers here have some hints for me.\n\nFirst, I'm using postgresql 7.4.7 on a 2GHz machine having 1.5GByte RAM\nand I have a table with about 220 c...
[ { "msg_contents": "...and on those notes, let me repeat my often stated advice that a DB server should be configured with as much RAM as is feasible. 4GB or more strongly recommended.\n\nI'll add that the HW you are using for a DB server should be able to hold _at least_ 4GB of RAM (note that modern _laptops_ ...
[ { "msg_contents": "I noticed outer join is very very slow in postgresql as compared\nto Oracle.\n\nSELECT a.dln_code, a.company_name,\nto_char(a.certificate_date,'DD-MON-YYYY'),\nto_char(a.certificate_type_id, '99'),\nCOALESCE(b.certificate_type_description,'None') ,\na.description, a.blanket_single, a.certific...
[ { "msg_contents": "Hi all,\n\nI've got PG 8.0 on Debian sarge set up ...\nI want to speed up performance on the system.\n\nThe system will run PG, Apache front-end on port 80 and Tomcat / Cocoon \nfor the webapp.\nThe webapp is not so heavily used, so we can give the max performance \nto the database.\nThe data...
[ { "msg_contents": "\nHello all, I post this question here because I wasn't able to find an\nanswer to my question elsewhere; I hope someone can answer.\n\n\nAbstract:\n\nThe function that can be found at the end of the e-mail emulates two things.\n\nFirst it will fill a record set of results with the needed columns fro...
[ { "msg_contents": "0= Optimize your schema to be a tight as possible. Your goal is to give yourself the maximum chance that everything you want to work on is in RAM when you need it.\n1= Upgrade your RAM to as much as you can possibly strain to afford. 4GB at least. It's that important.\n2= If the _entire_ D...
[ { "msg_contents": "The point Gentlemen, was that Good Architecture is King. That's what I was trying to emphasize by calling proper DB architecture step 0. All other things being equal (and they usually aren't, this sort of stuff is _very_ context dependent), the more of your critical schema that you can fit ...
[ { "msg_contents": "\n\n\n<snip>\n\tFOR r_record IN SELECT count(cid) AS hits,src, bid, tid,NULL::int8 as min_time,NULL::int8 as max_time FROM archive_event WHERE inst=\\'3\\' AND (utctime BETWEEN \\'1114920000\\' AND \\'1131512399\\') GROUP BY src, bid, tid LOOP\n\n\tSELECT INTO one_record MIN(utctime) as times...
[ { "msg_contents": "Hi,\n\nWe're having problems with our PostgreSQL server using forever for simple\nqueries, even when there's little load -- or rather, the transactions seem\nto take forever to commit. We're using 8.1 (yay!) on a single Opteron, with\nWAL on the system two-disk (software) RAID-1, separate fro...
[ { "msg_contents": "This is with Postgres 8.0.3. Any advice is appreciated. I'm not sure\nexactly what I expect, but I was hoping that if it used the\nexternal_id_map_source_target_id index it would be faster. Mainly I was\nsurprised that the same plan could perform so much differently with just\nan extra con...
[ { "msg_contents": "> The point Gentlemen, was that Good Architecture is King. That's what\nI\n> was trying to emphasize by calling proper DB architecture step 0. All\n> other things being equal (and they usually aren't, this sort of stuff\nis\n> _very_ context dependent), the more of your critical schema that...
[ { "msg_contents": "My original post did not take into account VAT, I apologize for that oversight.\n\nHowever, unless you are naive, or made of gold, or have some sort of \"special\" relationship that requires you to, _NEVER_ buy RAM from your computer HW OEM. For at least two decades it's been a provable fac...
[ { "msg_contents": "This is related to my post the other day about sort performance.\n\nPart of my problem seems to be that postgresql is greatly overestimating \nthe cost of index scans. As a result, it prefers query plans that \ninvolve seq scans and sorts versus query plans that use index scans.\nHere is an ...
[ { "msg_contents": "Hello,\n\nI'm perplexed. I'm trying to find out why some queries are taking a long \ntime, and have found that after running analyze, one particular query \nbecomes slow.\n\nThis query is based on a view that is based on multiple left outer joins \nto merge data from lots of tables.\n\nIf I ...
[ { "msg_contents": "That sure seems to bolster the theory that performance is degrading\nbecause you exhaust the cache space and need to start reading\nindex pages. When inserting sequential data, you don't need to\nrandomly access pages all over the index tree.\n\n-Kevin\n\n\n>>> Kelly Burkhart <kelly@tradebot...
[ { "msg_contents": "ns30966:~# NOTICE: Executing SQL: update tblPrintjobs set \nApplicationType = 1 where ApplicationType is null and \nupper(DocumentName) like '%.DOC'\n\nns30966:~# NOTICE: Executing SQL: update tblPrintjobs set \nApplicationType = 1 where ApplicationType is null and \nupper(DocumentName) l...
[ { "msg_contents": "We have a large DB with partitioned tables in postgres. We have had\ntrouble with a ORDER/LIMIT type query. The order and limit are not\npushed down to the sub-tables....\n \nCREATE TABLE base (\n foo int \n);\n \nCREATE TABLE bar_0\n extra int\n) INHERITS (base);\nALTER TABLE bar AD...
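Until the planner pushes the ORDER/LIMIT down into the child tables, a common hand-written workaround is to apply the LIMIT inside each branch of a UNION ALL and sort/limit once more on the outside, so each partition only has to surface its own top rows. A sketch with SQLite standing in (it has no table inheritance, so two plain tables play the role of the partitions; the names echo the post but the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE bar_0 (foo INTEGER)")
cur.execute("CREATE TABLE bar_1 (foo INTEGER)")
cur.executemany("INSERT INTO bar_0 VALUES (?)", [(5,), (1,), (9,)])
cur.executemany("INSERT INTO bar_1 VALUES (?)", [(3,), (7,), (2,)])

# LIMIT applied inside each branch, then a final merge sort + limit;
# no partition ever returns more than the requested row count.
top3 = [r[0] for r in cur.execute("""
    SELECT foo FROM (SELECT foo FROM bar_0 ORDER BY foo LIMIT 3)
    UNION ALL
    SELECT foo FROM (SELECT foo FROM bar_1 ORDER BY foo LIMIT 3)
    ORDER BY foo LIMIT 3
""")]
print(top3)
```

On Postgres the same shape works per child table when the planner of that era won't do it automatically.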
[ { "msg_contents": "Does anyone know what factors affect the recovery time of postgres if it does not shutdown cleanly? With the same size database I've seen times from a few seconds to a few minutes. The longest time was 33 minutes. The 33 minutes was after a complete system crash and reboot so there are a lot...
[ { "msg_contents": "\nWe've got an older system in production (PG 7.2.4). Recently\none of the users has wanted to implement a selective delete,\nbut is finding that the time it appears to take exceeds her\npatience factor by several orders of magnitude. Here's\na synopsis of her report. It appears that the \...
[ { "msg_contents": "Does anyone have recommendations for hardware and/or OS to work with\naround 5TB datasets? \n \nThe data is for analysis, so there is virtually no inserting besides a\nbig bulk load. Analysis involves full-database aggregations - mostly\nbasic arithmetic and grouping. In addition, much smalle...
[ { "msg_contents": "> Because I think we need to. The above would only delete rows \n> that have name = 'obsid' and value = 'oid080505'. We need to \n> delete all rows that have the same ids as those rows. \n> However, from what you note, I bet we could do:\n> \n> DELETE FROM \"tmp_table2\" WHERE id IN\n> ...
[ { "msg_contents": "Hi,\nI am trying to use Explain Analyze to trace a slow SQL statement called from \nJDBC.\nThe SQL statement with the parameters took 11 seconds. When I run an explain \nanalyze from psql, it takes < 50 ms with a reasonable explain plan. However \nwhen I try to run an explain analyze from JDB...
[ { "msg_contents": "Adam,\n\n> -----Original Message-----\n> From: pgsql-performance-owner@postgresql.org \n> [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of \n> Claus Guttesen\n> Sent: Tuesday, November 15, 2005 12:29 AM\n> To: Adam Weisberg\n> Cc: pgsql-performance@postgresql.org\n> Subject: Re: [...
[ { "msg_contents": "> Hardware-wise I'd say dual core opterons. One dual-core-opteron\n> performs better than two single-core at the same speed. Tyan makes\n> some boards that have four sockets, thereby giving you 8 cpu's (if you\n> need that many). Sun and HP also makes nice hardware although the Tyan\n> board ...
[ { "msg_contents": "Dave,\n\n\n________________________________\n\n\tFrom: pgsql-performance-owner@postgresql.org\n[mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Dave Cramer\n\tSent: Tuesday, November 15, 2005 6:15 AM\n\tTo: Luke Lonergan\n\tCc: Adam Weisberg; pgsql-performance@postgresql.org\n\tSu...
[ { "msg_contents": "Merlin, \n\n> just FYI: tyan makes a 8 socket motherboard (up to 16 cores!):\n> http://www.swt.com/vx50.html\n> \n> It can be loaded with up to 128 gb memory if all the sockets \n> are filled :).\n\nCool!\n\nJust remember that you can't get more than 1 CPU working on a query at a\ntime withou...
[ { "msg_contents": "Merlin,\n\n> > just FYI: tyan makes a 8 socket motherboard (up to 16 cores!):\n> > http://www.swt.com/vx50.html\n> > \n> > It can be loaded with up to 128 gb memory if all the sockets are \n> > filled :).\n\nAnother thought - I priced out a maxed out machine with 16 cores and\n128GB of RAM an...
[ { "msg_contents": "> Merlin,\n> \n> > > just FYI: tyan makes a 8 socket motherboard (up to 16 cores!):\n> > > http://www.swt.com/vx50.html\n> > >\n> > > It can be loaded with up to 128 gb memory if all the sockets are\n> > > filled :).\n> \n> Another thought - I priced out a maxed out machine with 16 cores and\...
[ { "msg_contents": "Luke,\n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:LLonergan@greenplum.com] \nSent: Tuesday, November 15, 2005 7:10 AM\nTo: Adam Weisberg\nCc: pgsql-performance@postgresql.org\nSubject: RE: [PERFORM] Hardware/OS recommendations for large databases (\n5TB)\n\nAdam,\n\n> -----Orig...
[ { "msg_contents": "Because only 1 cpu is used on each query.\r\n- Luke\r\n--------------------------\r\nSent from my BlackBerry Wireless Device\r\n\r\n\r\n-----Original Message-----\r\nFrom: Adam Weisberg <Aweisberg@seiu1199.org>\r\nTo: Luke Lonergan <LLonergan@greenplum.com>\r\nCC: pgsql-performance@postgresql...
[ { "msg_contents": "Unless there was a way to guarantee consistency, it would be hard at\nbest to make this work. Convergence on large data sets across boxes is\nnon-trivial, and diffing databases is difficult at best. Unless there\nwas some form of automated way to ensure consistency, going 8 ways into\nseparat...
[ { "msg_contents": "I have a query that's making the planner do the wrong thing (for my \ndefinition of wrong) and I'm looking for advice on what to tune to make \nit do what I want.\n\nThe query consists of SELECT'ing a few fields from a table for a large \nnumber of rows. The table has about seventy thousand ...
[ { "msg_contents": ">\n>I suggest you read this on the difference between enterprise/SCSI and\n>desktop/IDE drives:\n>\n>\thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n>\n> \n>\nThis is exactly the kind of vendor propaganda I was talking about\nand it proves...
[ { "msg_contents": "David Boreham wrote:\n> I guess I've never bought into the vendor story that there are\n> two reliability grades. Why would they bother making two\n> different kinds of bearing, motor etc ? Seems like it's more\n> likely an excuse to justify higher prices.\n\nthen how to account for the fact ...
[ { "msg_contents": "Hi, I just have a little question, does PgPool keeps the same session\nbetween different connections? I say it cuz I have a server with the\nfollowing specifications:\n\nP4 3.2 ghz\n80 gig sata drives x 2\n1 gb ram\n5 ips\n1200 gb bandwidth\n100 mbit/s port speed.\n\nI am running a PgSQL 8.1 ...
[ { "msg_contents": "I have the following query involving a view that I really need to optimise:\n\nSELECT *\nFROM\n\ttokens.ta_tokenhist h INNER JOIN\n\ttokens.vw_tokens t ON h.token_id = t.token_id\nWHERE\n\th.sarreport_id = 9\n;\n\nwhere vw_tokens is defined as\n\nCREATE VIEW tokens.vw_tokens AS SELECT\n\t-...
[ { "msg_contents": "> > Perhaps we should put a link on the home page underneath LATEST \n> > RELEASEs saying\n> > \t7.2: de-supported\n> > \n> > with a link to a scary note along the lines of the above.\n> > \n> > ISTM that there are still too many people on older releases.\n> > \n> > We probably need an explan...
[ { "msg_contents": "> >>That way if someone wanted to upgrade from 7.2 to 8.1, they \n> can just \n> >>grab the latest dumper from the website, dump their old \n> database, then \n> >>upgrade easily.\n> > \n> > But if they're upgrading to 8.1, don't they already have the new \n> > pg_dump? How else are they goin...
[ { "msg_contents": "Hi all,\n\nWe have been operating a 1.5GB postgresql database for a year and have had \nproblems for nearly a month. Usually everything is OK with the database, \nqueries are executed fast even if they are complicated, but sometimes, and \nfor half an hour, we see a general slowdown.\n\nThe server i...
[ { "msg_contents": "> Remember - large DB is going to be IO bound. Memory will get thrashed\n> for file block buffers, even if you have large amounts, it's all gonna\n> be cycled in and out again.\n\n'fraid I have to disagree here. I manage ERP systems for manufacturing\ncompanies of various sizes. My systems ...
[ { "msg_contents": "\nHi,\n\nWe want to create a database for each one of our departments, but we only \nwant to have one instance of postgresql running. There are about 10-20 \ndepartments. I can easily use createdb to create these databases. However, \nwhat is the max number of database I can create before ...
[ { "msg_contents": "I am using PostgreSQL in an embedded system which has only 32 or 64 MB RAM \n(run on a PPC 266 MHz or ARM 266 MHz CPU). I have a table to keep download \ntasks. There is a daemon that keeps looking up the table and forks a new process to \ndownload data from the internet.\n\nDaemon:\n . Check the table ...
[ { "msg_contents": "Hello\n\nOn my Linux Fedora server, an HP DL360G3 with 2x3.06 GHz and 4GB RAM, running \npostgresql 7.4.6, CPU utilization is about 40-50% but the system process \nqueue is long - about 6 tasks. Do you have any suggestion/solution?\n\nRegards\nMarek\n", "msg_date": "Tue, 22 Nov 2005 15:22:59 +0100", ...
[ { "msg_contents": "Hi,\n\n \n\nOne of our PG server is experiencing extreme slowness and there are\nhundreds of SELECTS building up. I am not sure if heavy context\nswitching is the cause of this or something else is causing it.\n\n \n\nIs this pretty much the final word on this issue?\n\nhttp://archives.postgr...
[ { "msg_contents": "Is there another way in PG to return a recordset from a function than \nto declare a type first ?\n\ncreate function fnTest () returns setof \nmyDefinedTypeIDontWantToDefineFirst ...\n\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: yves.vinde...
[ { "msg_contents": "Thanks, guys, I'll start planning on upgrading to PG8.1\n\nWould this problem change it's nature in any way on the recent Dual-Core\nIntel XEON MP machines?\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us] \nSent: Tuesday, November 22, 2005 12:36 PM\n...
[ { "msg_contents": "Is there any way to get a temporary relief from this Context Switching\nstorm? Does restarting postmaster help?\n\nIt seems that I can recreate the heavy CS with just one SELECT\nstatement...and then when multiple such SELECT queries are coming in,\nthings just get hosed up until we cancel a ...
[ { "msg_contents": "Yes, it's turned on, unfortunately it got overlooked during the setup,\nand until now...!\n\nIt's mostly a 'read' application, I increased the vm.max-readahead to\n2048 from the default 256, after which I've not seen the CS storm,\nthough it could be incidental.\n\nThanks,\nAnjan\n\n-----Orig...
[ { "msg_contents": "Hi,\n\nI am trying to get better performance reading data from postgres, so I \nwould like to return the data as binary rather than text as parsing it \nis taking a considerable amount of processor.\n\nHowever I can't figure out how to do that! I have functions like.\n\nfunction my_func(ret r...
[ { "msg_contents": "The offending SELECT query that invoked the CS storm was optimized by\nfolks here last night, so it's hard to say if the VM setting made a\ndifference. I'll give it a try anyway.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Simon Riggs [mailto:simon@2ndquadrant.com] \nSent: Wednesda...
[ { "msg_contents": "Simon,\n\nI tested it by running two of those simultaneous queries (the\n'unoptimized' one), and it doesn't make any difference whether\nvm.max-readahead is 256 or 2048...the modified query runs in a snap.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Anjan Dave \nSent: Wednesday, No...
[ { "msg_contents": "Hi,\n\nPostgreSQL 8.1 fresh install on a freshly installed OpenBSD 3.8 box.\n\npostgres=# CREATE DATABASE test;\nCREATE DATABASE\npostgres=# create table test (id serial, val integer);\nNOTICE: CREATE TABLE will create implicit sequence \"test_id_seq\" for \nserial column \"test.id\"\nCREAT...
[ { "msg_contents": "Pailloncy Jean-Gerard <jg@rilk.com> wrote ..\n[snip]\n\nTHIS MAY SEEM SILLY but vacuum is misspelled below and presumably there was never any ANALYZE done.\n\n> \n> postgres=# vaccum full verbose analyze;\n\n", "msg_date": "Wed, 23 Nov 2005 21:20:19 -0800", "msg_from": "andrew@pillette...
[ { "msg_contents": "Hi,\n\nI get the following error on doing anything with the database after \nstarting it.\nCan anyone suggest how I can fix this?\n\n xlog flush request 7/7D02338C is not satisfied --- flushed only to \n3/2471E324\n\nVipul Gupta\n...
[ { "msg_contents": "Hi Folks,\n\nI'm new to Postgresql.\n\nI'm having great difficulties getting the performance I had hoped for\nfrom Postgresql 8.0. The typical query below takes ~20 minutes !!\n\nI hope an expert out there will tell me what I'm doing wrong - I hope\n*I* am doing something wrong.\n\nHardware\n...
[ { "msg_contents": "OK.\n\nThe consensus seems to be that I need more indexes and I also need to\nlook into the NOT IN statement as a possible bottleneck. I've\nintroduced the indexes which has led to a DRAMATIC change in response\ntime. Now I have to experiment with INNER JOIN -> OUTER JOIN\nvariations, SET ENA...
[ { "msg_contents": "A quick note to say that I'm very grateful for Tom Lane's input also.\nTom, I did put you on the list of recipients for my last posting to\npgsql-performance, but got:\n\n\n--------------------cut here--------------------\nThis is an automatically generated Delivery Status Notification.\n\nDe...
[ { "msg_contents": "Hi tom,\n\nbasically when i run any query with database say,\n\nselect count(*) from table1;\n\nIt gives me the following error trace: \nWARNING: could not write block 297776 of 1663/2110743/2110807\nDETAIL: Multiple failures --- write error may be permanent.\nERROR: xlog flush request 7/7...
[ { "msg_contents": "I have been reading all this technical talk about costs and such that\nI don't (_yet_) understand.\n\nNow I'm scared... what's the fastest way to do an equivalent of\ncount(*) on a table to know how many items it has?\n\nThanks,\nRodrigo\n", "msg_date": "Fri, 25 Nov 2005 19:36:35 +0000", ...
[ { "msg_contents": "Ok, I've subscribed (hopefully list volume won't kill me :-)\n\nI'm covering several things in this message since I didn't receive the \nprior messages in the thread\n\nfirst off these benchamrks are not being sponsered by my employer, they \nneed the machines burned in and so I'm going to us...
[ { "msg_contents": ">These boxes don't look like being designed for a DB server. The first are \n>very CPU bound, and the third may be a good choice for very large amounts \n>of streamed data, but not optimal for TP random access.\n\nI don't know what you mean when you say that the first ones are CPU bound, \nth...
[ { "msg_contents": "by the way, this is the discussion that promped me to start this project\nhttp://lwn.net/Articles/161323/\n\nDavid Lang\n", "msg_date": "Sat, 26 Nov 2005 08:13:40 -0800 (PST)", "msg_from": "David Lang <dlang@invendra.net>", "msg_from_op": true, "msg_subject": "Re: Open request...
[ { "msg_contents": "Did you folks see this article on Slashdot with a fellow requesting input on \nwhat sort of benchmarks to run to get a good Postgresql vs Mysql dataset? \nPerhaps this would be a good opportunity for us to get some good benchmarking \ndone. Here's the article link and top text:\n\nhttp://ask...
[ { "msg_contents": ">Another thought - I priced out a maxed out machine with 16 cores and\n>128GB of RAM and 1.5TB of usable disk - $71,000.\n>\n>You could instead buy 8 machines that total 16 cores, 128GB RAM and 28TB\n>of disk for $48,000, and it would be 16 times faster in scan rate, which\n>is the most impor...
[ { "msg_contents": "On Sun, 27 Nov 2005, Luke Lonergan wrote:\n\n> For data warehousing its pretty well open and shut. To use all cpus and \n> io channels on each query you will need mpp.\n>\n> Has anyone done the math.on the original post? 5TB takes how long to \n> scan once? If you want to wait less than a ...
[ { "msg_contents": "Have you factored in how long it takes to build an index on 5TB? And the index size?\r\n\r\nReally, it's a whole different world at multi-TB, everything has to scale.\r\n\r\nBtw we don't just scan in parallel, we do all in parallel, check the sort number on this thread. Mpp is for the god b...
[ { "msg_contents": "On Mon, 28 Nov 2005, Brendan Duddridge wrote:\n\n> Forgive my ignorance, but what is MPP? Is that part of Bizgres? Is it \n> possible to upgrade from Postgres 8.1 to Bizgres?\n\nMPP is the Greenplum proprietary extension to postgres that spreads the \ndata over multiple machines, (raid, but w...
[ { "msg_contents": "> I have been reading all this technical talk about costs and such that\n> I don't (_yet_) understand.\n> \n> Now I'm scared... what's the fastest way to do an equivalent of\n> count(*) on a table to know how many items it has?\n\nMake sure to analyze the database frequently and check pg_clas...
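Besides the `pg_class.reltuples` estimate suggested above (fast but approximate), another technique frequently recommended on this list when an *exact* cheap count is needed is a trigger-maintained counter table — a different approach from the one in the post. Sketched here with SQLite triggers so it is self-contained (on Postgres the same idea would be written in plpgsql); all names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY);
    CREATE TABLE item_count (n INTEGER);
    INSERT INTO item_count VALUES (0);
    -- per-row triggers keep the counter in sync with the table
    CREATE TRIGGER items_ins AFTER INSERT ON items
        BEGIN UPDATE item_count SET n = n + 1; END;
    CREATE TRIGGER items_del AFTER DELETE ON items
        BEGIN UPDATE item_count SET n = n - 1; END;
""")
cur.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(100)])
cur.execute("DELETE FROM items WHERE id < 10")

# reading the counter replaces a full-table count(*) scan
n = cur.execute("SELECT n FROM item_count").fetchone()[0]
print(n)
```

The trade-off is write contention on the single counter row, which matters under heavy concurrent insert load.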
[ { "msg_contents": "The MPP test I ran was with the release version 2.0 of MPP which is based on\nPostgres 8.0, the upcoming 2.1 release is based on 8.1, and 8.1 is far\nfaster at seq scan + agg. 12,937MB were counted in 4.5 seconds, or 2890MB/s\nfrom I/O cache. That's 722MB/s per host, and 360MB/s per Postgre...
[ { "msg_contents": "Hi all,\n\n I don't understand why this request take so long. Maybe I read the \nanalyse correctly but It seem that the first line(Nested Loop Left Join \n...) take all the time. But I don't understand where the performance \nproblem is ??? All the time is passed in the first line ...\n\...
[ { "msg_contents": "I know that in MySQL the index is updated automatically after copying data.\nOf course the index is updated after inserting a row in postgresql, but what about copying data?\n\n", "msg_date": "Tue, 29 Nov 2005 15:00:22 +0800", "msg_from": "\"energumen@buaa.edu.cn\" <energumen@buaa.edu.cn>", "msg_fr...
[ { "msg_contents": "Hi \n\nI'm using PostgreSQL on Windows 2000; pg_dump takes around 50 minutes\nto back up 200MB of data (with no compression, and 15MB with\ncompression), but on Windows XP it takes no more than 40 seconds... :(\n\nThis happens with both 8.1 and 8.0. Has somebody run into the same\nsituation...
[ { "msg_contents": "> At 08:35 AM 11/30/2005, Franklin Haut wrote:\n> >Hi\n> >\n> >i´m using PostgreSQL on windows 2000, the pg_dump take around 50 minutes\n> >to do backup of 200Mb data ( with no compression, and 15Mb with\n> >compression),\n> \n> Compression is reducing the data to 15/200= 3/40= 7.5% of origin...
[ { "msg_contents": " \n\nThis seems to me to be an expensive plan and I'm wondering if there's a\nway to improve it or a better way to do what I'm trying to do here (get\na count of distinct values for each record_id and map that value to the\nentity type) entity_type_id_mapping is 56 rows\nvolume_node_entity_da...
[ { "msg_contents": "> By default W2K systems often had a default TCP/IP packet size of 576B\n> and a tiny RWIN. Optimal for analog modems talking over noisy POTS\n> lines, but horrible for everything else\n\nwrong. default MTU for windows 2000 server is 1500, as was NT4.\nhttp://support.microsoft.com/?id=140375...
[ { "msg_contents": "Hi,\n\nI have a simple query that is running inside a plpgsql function.\n\nSELECT INTO _point_id id FROM ot2.point WHERE unit_id = _unit_id AND \ntime > _last_status ORDER BY time LIMIT 1;\n \nBoth _unit_id and _last_status variables in the function. the table has \nan index on unit_id,point\...
[ { "msg_contents": "Question is about the relation between fragmentation of file and VACUUM\nperformance.\n\n<Environment>\nOS:RedHat Enterprise Linux AS Release 3(Taroon Update 6)\n Kernel 2.4.21-37.ELsmp on an i686\n Filesystem Type ext3\n Filesystem features: has_journal filetype needs_recovery spar...
[ { "msg_contents": "Hi there,\n\nI need a simple but large table with several million records. I do batch \ninserts with JDBC. After the first million or so records,\nthe inserts degrade to become VERY slow (like 8 minutes vs initially 20 \nseconds).\n\nThe table has no indices except PK while I do the inserts....
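The advice that usually follows reports like this is to batch the inserts into a few large transactions and to build secondary indexes only after the load, so the index tree is not maintained row by row while its pages fall out of cache. A self-contained sketch of that load pattern — SQLite stands in for Postgres here (on Postgres one would reach for COPY), and the table and index names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE bulk (id INTEGER PRIMARY KEY, payload TEXT)")

# one transaction for the whole batch instead of per-row commits
rows = ((i, "x" * 20) for i in range(50_000))
with conn:
    cur.executemany("INSERT INTO bulk VALUES (?, ?)", rows)

# secondary index built once, after the load, instead of being
# updated for every inserted row
cur.execute("CREATE INDEX bulk_payload_idx ON bulk (payload)")

count = cur.execute("SELECT count(*) FROM bulk").fetchone()[0]
print(count)
```

For a table that must stay indexed during the load, the alternative discussed later in this thread is simply more RAM so the index working set stays cached.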
[ { "msg_contents": "Hi,\n\nwe are currently running a postgres server (upgraded to 8.1) which has \none large database with approx. 15,000 tables. Unfortunately performance \nsuffers from that, because the internal tables (especially that which \nholds the attribute info) get too large.\n\n(We NEED that many tab...
[ { "msg_contents": "this subject has come up a couple times just today (and it looks like one \nthat keeps popping up).\n\nunder linux ext2/3 have two known weaknesses (or rather one weakness with \ntwo manifestations). searching through large objects on disk is slow, this \napplies to both directories (creating...
[ { "msg_contents": "Hi!\n\nI've got an urgent problem with an application which is evaluating a\nmonthly survey; it's running quite a lot of queries like this:\n\nselect SOURCE.NAME as TYPE,\n count(PARTICIPANT.SESSION_ID) as TOTAL\nfrom (\n select PARTICIPANT.SESSION_ID\n from survey.PARTICIPANT,\n ...
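A common rewrite for "one count query per category" workloads like this survey is a single outer-join plus GROUP BY pass: every source is counted in one scan, and sources that attracted no participants still appear with a count of zero. A sketch with invented toy data (SQLite is used only to keep it runnable; the schema is simplified from the post's SOURCE/PARTICIPANT tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE source (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE participant (session_id INTEGER, source_id INTEGER)")
cur.executemany("INSERT INTO source VALUES (?, ?)",
                [(1, "web"), (2, "mail"), (3, "phone")])
cur.executemany("INSERT INTO participant VALUES (?, ?)",
                [(10, 1), (11, 1), (12, 2)])

# one pass instead of one query per source; the LEFT JOIN keeps
# sources with no participants (count(...) of NULLs is 0)
totals = list(cur.execute("""
    SELECT s.name, count(p.session_id) AS total
    FROM source s LEFT JOIN participant p ON p.source_id = s.id
    GROUP BY s.name ORDER BY s.name
"""))
print(totals)
```

Whether this beats the per-type queries depends on the plan, but it removes the repeated scans that the post describes.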
[ { "msg_contents": "> >Franlin: are you making pg_dump from local or remote box and is this\na\n> >clean install? Try fresh patched win2k install and see what happens.\n> He claimed this was local, not network. It is certainly an\n> intriguing possibility that W2K and WinXP handle bytea\n> differently. I'm no...
[ { "msg_contents": "\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: Thursday, 1 December 2005 17:26\n> To: Markus Wollny\n> Cc: pgsql-performance@postgresql.org\n> Subject: Re: [PERFORM] Queries taking ages in PG 8.1, have \n> been much faster in PG<=8.0 \n \n> ...
[ { "msg_contents": "\nNot having found anything so far, does anyone know of, and can point me \nto, either tools, or articles, that talk about doing tuning based on the \ninformation that this sort of information can help with?\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\...
[ { "msg_contents": "I am importing roughly 15 million rows in one batch transaction. I am\ncurrently doing this through batch inserts of around 500 at a time,\nalthough I am looking at ways to do this via multiple (one-per-table)\ncopy commands for performance reasons. \n\nI am currently running: PostgreSQL 8....
[ { "msg_contents": "I'm running postgresql 8.1.0 with postgis 1.0.4 on a FC3 system, 3Ghz, 1 GB\nmemory.\n\n \n\nI am using COPY to fill a table that contains one postgis geometry column.\n\n \n\nWith no geometry index, it takes about 45 seconds to COPY one file.\n\n \n\nIf I add a geometry index, this time degr...
[ { "msg_contents": "> we are currently running a postgres server (upgraded to 8.1) which has\n> one large database with approx. 15,000 tables. Unfortunately\nperformance\n> suffers from that, because the internal tables (especially that which\n> holds the attribute info) get too large.\n> \n> (We NEED that many ...
[ { "msg_contents": "Our application tries to insert data into the database as fast as it can.\nCurrently the work is being split into a number of 1MB copy operations.\n\nWhen we restore the postmaster process tries to use 100% of the CPU.\n\nThe questions we have are:\n\n1) What is postmaster doing that it needs...
[ { "msg_contents": "Tom, \n\n> That analysis is far too simplistic, because only the WAL \n> write has to happen before the transaction can commit. The \n> table and index writes will normally happen at some later \n> point in the bgwriter, and with any luck there will only need \n> to be one write per page, no...
[ { "msg_contents": "Steve, \n\n> When we restore the postmaster process tries to use 100% of the CPU. \n> \n> The questions we have are: \n> \n> 1) What is postmaster doing that it needs so much CPU? \n\nParsing mostly, and attribute conversion from text to DBMS native\nformats.\n \n> 2) How can we get our syste...
[ { "msg_contents": "here are the suggestions from the MySQL folks, what additional tests \nshould I do.\n\nI'd like to see some tests submitted that map out when not to use a \nparticular database engine, so if you have a test that you know a \nparticular database chokes on let me know (bonus credibility if you ...
[ { "msg_contents": "David, \n\n> Luke, would it help to have one machine read the file and \n> have it connect to postgres on a different machine when doing \n> the copy? (I'm thinking that the first machine may be able to \n> do a lot of the parseing and conversion, leaving the second \n> machine to just worry ...
[ { "msg_contents": "Hello,\n\nWe used Postgresql 7.1 under Linux and recently we changed to \nPostgresql 8.1 under Windows XP. Our application uses ODBC and when we \ntry to get some information from the server through a TCP connection, it's \nvery slow. We have also tried it using psql and pgAdmin III, an...