threads
listlengths
1
275
[ { "msg_contents": "Hi all,\n i have 5 servers that have been installing postgresql .In order to\nknow the postgresql working status and monitor them ,moreover i don't want\nto use the monitor tools .I want to use the SQL commands to monitoring\npostgresql system . please suggest any SQL COMMANDS to w...
[ { "msg_contents": "Using explain analyze I saw that many of my queries run really fast, less\nthan 1 milliseconds, for example the analyze output of a simple query over a\ntable with 5millions of records return \"Total runtime: 0.078 ms\"\n\n \n\nBut the real time is a lot more, about 15 ms, in fact the pgadm...
[ { "msg_contents": "Hello,\n\nI'm working with an application that connects to a remote server database using \"libpq\" library over internet, but making a simple query is really slow even though I've done PostgreSQL Tunning and table being indexed, so I want to know:\n\n-Why is postgresql or libpq that slow whe...
[ { "msg_contents": "Hello Community,\n\nI intend to understand further on PostgreSQL Index behavior on a \"SELECT\"\nstatement.\n\nWe have a situation where-in Index on unique column is not being picked up\nas expected when used with-in the WHERE clause with other non-unique\ncolumns using AND operator.\n\nexpla...
[ { "msg_contents": "Hi All\n\nI´ve ft_simple_core_content_content_idx\n  ON core_content\n  USING gin\n  (to_tsvector('simple'::regconfig, content) );\n\n  \nIf I´m seaching for a word which is NOT in the column content the query plan and the execution time differs with the given limit.\nIf I choose 3927 or any...
[ { "msg_contents": "Hi,\n\nI want to force deafults, and wonder about the performance.\nThe trigger i use (below) makes the query (also below) take 45% more time.\nThe result is the same now, but i do have a use for using the trigger (see\n\"background info\").\n\nIsn't there a more efficient way to force the de...
[ { "msg_contents": "On 10/09/12 16:24, bill_martin@freenet.de<mailto:bill_martin@freenet.de> wrote:\r\n\r\nHi All\r\n\r\nI´ve ft_simple_core_content_content_idx\r\n ON core_content\r\n USING gin\r\n (to_tsvector('simple'::regconfig, content) );\r\n\r\n\r\nIf I´m seaching for a word which is NOT in the column ...
[ { "msg_contents": "I have a table as follows:\n\\d entity\n Table \"public.entity\"\n Column | Type | Modifiers\n--------------+-----------------------------+--------------------\n crmid | integer | not null\n smcreatorid | integer ...
[ { "msg_contents": "Regarding the wiki page on reporting slow queries:\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\nWe currently recommend EXPLAIN ANALYZE over just EXPLAIN. Should we\nrecommend EXPLAIN (ANALYZE, BUFFERS) instead? I know I very often\nwish I could see that data. I don't think...
[ { "msg_contents": "Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:\n> Bill Martin <bill(dot)martin(at)communote(dot)com> writes:\n>> I´ve created following table which contains one million records.\n>> ...\n\n>> \"Limit (cost=10091.09..19305.68 rows=3927 width=621) (actual time=0.255..0.255 rows=0 loops=1...
[ { "msg_contents": "> Tom Lane <tgl@sss.pgh.pa.us> writes:\r\n>> Bill Martin <bill.martin@communote.com> writes:\r\n>> I've tried different values for the statistics but it is all the same (the planner decide to switch to a seqscan if the limit is 10).\r\n\r\n>> ALTER TABLE core_content ALTER column content SET ...
[ { "msg_contents": "Hey PostgreSQL speed demons -\nAt work, we're considering an AppScale deployment (that's the Google App Engine\nroll-your-own http://appscale.cs.ucsb.edu/). It supports multiple technologies\nto back the datastore part of the platform (HBase, Hypertable, MySQL Cluster,\nCassandra, Voldemort, ...
[ { "msg_contents": "-- \nhttp://www.feteknoloji.com\nRegards, Feridun Türk\n\n-- http://www.feteknoloji.comRegards, Feridun Türk", "msg_date": "Thu, 13 Sep 2012 21:25:18 +0300", "msg_from": "=?UTF-8?Q?Feridun_t=C3=BCrk?= <feridunturk@gmail.com>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "Hi\nI compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got\nthe 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,\n990X).\n\nCentOS 6.3 x86_64\nPostgreSQL 9.2\ncpufreq scaling_governor - performance\n\n# /etc/init.d/postgresql initdb\n# echo \"fsync = off\" >> /v...
[ { "msg_contents": "Hi, \n\nI have a Centos 6.2 Virtual machine that contains Postgresql version 8.4 database. The application installed in this Virtual machine uses this database that is local to this Virtual machine. I have tried to offload the database, by installing it on a remote Virtual machine, on anothe...
[ { "msg_contents": "I am looking at changing all of the foreign key definitions to be deferrable (initially immediate). Then during a few scenarios performed by the application, set all foreign key constraints to be deferred (initially deferred) for that given transaction.\n\nMy underlying question/concern is \...
[ { "msg_contents": "I am pondering about this... My thinking is that since *_scale_factor need\nto be set manually for largish tables (>1M), why not\nset autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor, and\nincrease the value of autovacuum_vacuum_threshold to, say, 10000, and\nautovacuum_anal...
[ { "msg_contents": "I am not able to set wal_sync_method to anything but fsync on FreeBSD 9.0\nfor a DB created on ZFS (I have not tested on UFS). Is that expected ? Has\nit anything to do with running on EC2 ?\n\nSébastien\n\nI am not able to set wal_sync_method to anything but fsync on FreeBSD 9.0 for a DB cre...
[ { "msg_contents": "Hello All,\n \nWe are migrating our product from 32 bit CentOS version 5.0 (kernel 2.6.18) to 64 bit CentOS version 6.0 (kernel 2.6.32)\nSo we decided to upgrade the PostgreSQL version from 8.2.2 to 9.0.4\n \nWe are compiling the PostgreSQL source on our build machine to create an RPM before ...
[ { "msg_contents": "Using postgreSQL 9.2 with the following settings:\n\nmax_connections = 1000 # (change requires restart)\nshared_buffers = 65536MB # min 128kB\nwork_mem = 16MB # min 64kB\neffective_io_concurrency = 48 # 1-1000; 0 disables prefe...
[ { "msg_contents": "Our production database, postgres 8.4 has an approximate size of 200 GB,\nmost of the data are large objects (174 GB), until a few months ago we used\npg_dump to perform backups, took about 3-4 hours to perform all the\nprocess. Some time ago the process became interminable, take one or two\n...
[ { "msg_contents": "Hello!\n\nI'm one of the developers of the Ruby on Rails web framework.\n\nIn some situations, the framework generates an empty transaction block.\nI.e. we sent a BEGIN and then later a COMMIT, with no other queries in\nthe middle.\n\nWe currently can't avoid doing this, because a user *may* ...
[ { "msg_contents": "I've run into a query planner issue while querying my data with a\nlarge offset (100,000). My schema is\nhttp://pgsql.privatepaste.com/ce7cc05a66 . I have about 220,000 rows\nin audit_spoke.audits. The original query\nhttp://pgsql.privatepaste.com/61cbdd51c2 ( explain:\nhttp://explain.depesz....
[ { "msg_contents": "Hi,\nI'm using and Amazon ec2 instance with the following spec and the\napplication that I'm running uses a postgres DB 9.1.\nThe app has 3 main cron jobs.\n\n*Ubuntu 12, High-Memory Extra Large Instance\n17.1 GB of memory\n6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units ea...
[ { "msg_contents": "Hi,\n\nThe problem : Postgres is becoming slow, day after day, and only a full vacuum fixes the problem.\n\nInformation you may need to evaluate :\n\nThe problem lies on all tables and queries, as far as I can tell, but we can focus on a single table for better comprehension.\n\nThe queries I...
[ { "msg_contents": "I'm using Postgres 9.1 on Debian Lenny and via a Java server (JBoss AS 6.1) I'm executing a simple \"select ... for update\" query:\nSELECT\n\timporting\nFROM\n\tcustomer\nWHERE\n\tid = :customer_id\nFOR UPDATE NOWAIT\nOnce every 10 to 20 times Postgres fails to obtain the lock for no apparen...
[ { "msg_contents": "Kiriakos Tsourapas wrote:\n\n> When the problem appears, vacuuming is not helping. I ran vacuum\n> manually and the problem was still there. Only full vacuum worked.\n> \n> As far as I have understood, autovacuuming is NOT doing FULL\n> vacuum. So, messing around with its values should not he...
[ { "msg_contents": "Hi,\n\nI'm new here so i hope i don't do mistakes.\n\nI'm having a serious performance issue in postgresql.\n\nI have tables containing adresses with X,Y GPS coordinates and tables with\nzoning and square of gps coordinates.\n\nBasicly it looks like\n\nadresses_01 (id,X,Y)\ngps_01 (id,x_min,x...
[ { "msg_contents": "[resending because I accidentally failed to include the list]\n\nKiriakos Tsourapas wrote:\n\n> I am taking your suggestions one step at a time.\n> \n> I changed my configuration to a much more aggressive autovacuum\n> policy (0.5% for analyzing and 1% for autovacuum).\n> \n> autovacuum_napti...
[ { "msg_contents": "Hi Kevin,\n\nOn Sep 26, 2012, at 14:39, Kevin Grittner wrote:\n> \n> I am concerned that your initial email said that you had this\n> setting:\n> \n> autovacuum_naptime = 28800\n> \n> This is much too high for most purposes; small, frequently-modified\n> tables won't be kept in good shape wit...
[ { "msg_contents": "Hey Everyone, \n\nI seem to be getting an inaccurate cost from explain. Here are two examples for one query with two different query plans:\n\nexchange_prod=# set enable_nestloop = on;\nSET\nexchange_prod=#\nexchange_prod=# explain analyze SELECT COUNT(DISTINCT \"exchange_uploads\".\"id\") F...
[ { "msg_contents": "Hello,\n\ni have a problem with relatively easy query.\n\nEXPLAIN ANALYZE SELECT content.* FROM content JOIN blog ON blog.id = content.blog_id JOIN community_prop ON blog.id = community_prop.blog_id JOIN community ON community.id = community_prop.id WHERE community.id IN (33, 55, 61, 1741, 75...
[ { "msg_contents": "Hi everyone,\n\nI want to buy a new server, and am contemplating a Dell R710 or the \nnewer R720. The R710 has the x5600 series CPU, while the R720 has the \nnewer E5-2600 series CPU.\n\nAt this point I'm dealing with a fairly small database of 8 to 9 GB. \nThe server will be dedicated to P...
[ { "msg_contents": "Hello everybody,\n\nWe have being doing some testing with an ISD transaction and we had\nsome problems that we posted here.\n\nThe answers we got were very kind and useful but we couldn't solve the problem.\n\nWe have doing some investigations after this and we are thinking if is\nit possible...
[ { "msg_contents": "On machine 1 - a table that contains between 12 and 18 million rows\nOn machine 2 - a Java app that calls Select * on the table, and writes it\ninto a Lucene index\n\nOriginally had a fetchSize of 10,000 and would take around 38 minutes for 12\nmillion, 50 minutes for 16ish million to read it...
[ { "msg_contents": "Hey guys,\n\nI ran into this while we were working on an upgrade project. We're \nmoving from 8.2 (don't ask) to 9.1, and started getting terrible \nperformance for some queries. I've managed to boil it down to a test case:\n\ncreate temp table my_foo as\nselect a.id, '2012-01-01'::date + (ra...
[ { "msg_contents": "Howdy, I've been debugging a client's slow query today and I'm curious\nabout the query plan. It's picking a plan that hashes lots of rows from the\nversions table (on v9.0.10)...\n\nEXPLAIN ANALYZE\nSELECT COUNT(*) FROM notes a WHERE\na.project_id = 114 AND\nEXISTS (\n SELECT 1 FROM note_...
[ { "msg_contents": "Greetings.\n\nI have a small monitoring query on the following tables:\nselect relname,relpages,reltuples::numeric(12) from pg_class where relname\nin ('meta_version','account') order by 1;\n relname | relpages | reltuples\n--------------+----------+-----------\n account | 3235 ...
[ { "msg_contents": "Hi all,\n\nI have a question about the deadlock_timeout in regards to performance.\nRight now we have this timeout set at its default of 1s.\nMy understanding of it is that this means that every 1 second the server\nwill check for deadlocks.\nWhat I am wondering is how much of a performance i...
[ { "msg_contents": "Hi, previously I selected categorized data for update then updated counts\nor inserted a new record if it was a new category of data.\n\nselect all categories\nupdate batches of categories\nor insert batches [intermingled as they hit batch size]\n\nProblem was the select was saturating the ne...
[ { "msg_contents": "I'm struggling with a query that seems to use a suboptimal query plan.\n\nSchema: units reference a subjob reference a job. In other words: a job contains multiple subjobs. A subjob contains multiple units. (full schema below)\n\nWe're trying to select all subjobs that need to be reviewed and...
[ { "msg_contents": "Hi,\n\nI have a table with about 10 millions of records, this table is update and\ninserted very often during the day (approx. 200 per second) , in the night\nthe activity is a lot less, so in the first seconds of a day (00:00:01) a\nbatch process update some columns (used like counters) of ...
[ { "msg_contents": "Hello!\n\nI would like to ask following question:\nI have created a table and I updated all records.\nAnd I executed this update command again and again....\nExecution time was growing after each step.\nI cannot understand this behavior.\nFirst update command took 6 sec, 30th update (same) co...
[ { "msg_contents": "I have table:\ncreate table hashcheck(id serial, name varchar, value varchar);\nand query:\nhashaggr=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN ...
[ { "msg_contents": "Hi!\n\nAfter upgrade (dump/restore/analyze) query (below) after some time is killed by kernel.\nPostgres creates a lot of tmp files:\n\ntemporary file: path \"base/pgsql_tmp/pgsql_tmp8949.2046\", size 24576\ntemporary file: path \"base/pgsql_tmp/pgsql_tmp8949.2045\", size 24576\ntemporary fil...
[ { "msg_contents": "Hi all,\n\n I have 10 million records in my postgres table.I am running the database in amazon ec2 medium instance. I need to access the last week data from the table.\nIt takes huge time to process the simple query.So, i throws time out exception error.\n\nquery is :\n select count(...
[ { "msg_contents": "This is driving me crazy. A new server, virtually identical to an old one,\nhas 50% of the performance with pgbench. I've checked everything I can\nthink of.\n\nThe setups (call the servers \"old\" and \"new\"):\n\nold: 2 x 4-core Intel Xeon E5620\nnew: 4 x 4-core Intel Xeon E5606\n\nboth:\...
[ { "msg_contents": "I've confirmed that hyperthreading causes a huge drop in performance on a\n2x4-core Intel Xeon E5620 2.40GHz system. The bottom line is:\n\n ~3200 TPS max with hyperthreading\n ~9000 TPS max without hyprethreading\n\nHere are my results.\n\n\"Hyprethreads\" (Run1) is \"out of the box\" wit...
[ { "msg_contents": "Hello,\n\nFirst let me say thanks for a fantastic database system. The hard work \nthis community has put into Postgres really shows.\n\nA client of mine has been using Postgres quite effectively for a while \nnow, but has recently hit a performance issue with text index queries, \nspecifica...
[ { "msg_contents": "Hi,\n\nI've been fighting with some CTE queries recently, and in the end I've\nended up with two basic cases. In one case the CTEs work absolutely\ngreat, making the estimates much more precise, while in the other the\nresults are pretty terrible. And I'm not sure why both of these behave\nth...
[ { "msg_contents": "Hi everyone,\n\nI have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10\n15K SCSI drives which is runing Centos 6.2 x64. This server is mainly used\nfor inserting/updating large amounts of data via copy/insert/update\ncommands, and seldom for running select queries.\n\nHer...
[ { "msg_contents": "Hello! Is it possible to speed up the plan? \n\nhashes=# \\d hashcheck\n Table \"public.hashcheck\"\n Column | Type | Modifiers \n--------+-------------------+-----------------------------------------------...
[ { "msg_contents": "Hi,\n\nI have pretty large tables, with columns that might never receive any \ndata, or always receive data, based on the customer needs.\nThe index on these columns are really big, even if the column is never \nused, so I tend to add a \"where col is not null\" clause on those indexes.\n\nWh...
[ { "msg_contents": "I have a table with a column of type timestamp with time zone, this column\nhas an index\n\n \n\nIf I do a select like this\n\n \n\nselect * from mytable where cast(my_date as timestamp without time zone) >\n'2012-10-12 20:00:00'\n\n \n\nthis query will use the index over the my_date column?\...
[ { "msg_contents": "Hi,\n\nGiven I have a large table implemented with partitions and need fast\naccess to a (primary) key value in a scenario where every minute\nupdates (inserts/updates/deletes) are coming in.\n\nNow since PG does not allow any index (nor constraint) on \"master\"\ntable, I have a performance ...
[ { "msg_contents": "On PG 9.1 and 9.2 I'm running the following query:\nSELECT *FROM stream_store JOIN ( SELECT UNNEST(stream_store_ids) AS id FROM stream_store_version_index WHERE stream_id = 607106 AND version = 11 ) AS records ...
[ { "msg_contents": "Hi,\n\nOur IT Company systems architecture is based on IBM Websphere\nApplication Server, we would like to migrate our databases to\nPostgres, the main problem which stops us from doing that is Postgres\nis not supported by IBM Websphere Application Server.\nThere is a Request for Enhancement...
[ { "msg_contents": "Hello,\n I'm trying to do a simple SQL query over Postgresl 9.0 running on Ubuntu.\n\nI have a large table (over 100 million records) with three fields, \nid_signal (bigint), time_stamp (timestamp) and var_value (float).\n\nMy query looks like this:\n\nselect var_value from ism_floatvalues ...
[ { "msg_contents": "Dear all,\nWe have a DB containing transactional data. \nThere are about *50* to *100 x 10^6* rows in one *huge* table.\nWe are using postgres 9.1.6 on linux with a *SSD card on PCIex* providing us\na constant seeking time.\n\nA typical select (see below) takes about 200 secs. As the database...
[ { "msg_contents": "Hi to all, \n\n\nI've got a trouble with some delete statements. My db contains a little more than 10000 tables and runs on a dedicated server (Debian 6 - bi quad - 16Gb - SAS disks raid 0). Most of the tables contains between 2 and 3 million rows and no foreign keys exist between them. Each ...
[ { "msg_contents": "Hi communities,\n\nI am investigating a performance issue involved with LIKE 'xxxx%' on an\nindex in a complex query with joins. \n\nThe problem boils down into this simple scenario---:\n====Scenario====\nMy database locale is C, using UTF-8 encoding. I tested this on 9.1.6 and 9.\n2.1.\n\nQ1...
[ { "msg_contents": "Hi communities,\n\n \n\nI am investigating a performance issue involved with LIKE 'xxxx%' on an\nindex in a complex query with joins. \n\n \n\nThe problem boils down into this simple scenario---:\n\n====Scenario====\n\nMy database locale is C, using UTF-8 encoding. I tested this on 9.1.6 and ...
[ { "msg_contents": "Hi,\n\nWhy PostgreSQL, the EnterpriseBD supports create/alter/drop package and the opensource doesn't?\nIs a project or never will have support?\n\n\nThanks\n\nHi,Why PostgreSQL, the EnterpriseBD supports create/alter/drop package and the opensource doesn't?Is a project or never will have sup...
[ { "msg_contents": "Hi guys,\n\nPG = 9.1.5\nOS = winDOS 2008R8\n\nI have a table that currently has 207 million rows.\nthere is a timestamp field that contains data.\nmore data gets copied from another database into this database.\nHow do I make this do an index scan instead?\nI did an \"analyze audittrailclinic...
[ { "msg_contents": "What is the adequate *pgbounce* *max_client_conn ,default_pool_size* values\nfor a postgres config which has *max_connections = 400*.\n\nWe want to move to pgbouncer to let postgres do the only db job but it\nconfused us. We have over 120 databases in a single postgres engine with\nas i said...
[ { "msg_contents": "We've run into a perplexing issue with a customer database. He moved\nfrom a 9.1.5 to a 9.1.6 and upgraded from an EC2 m1.medium (3.75GB\nRAM, 1.3 GB shmmax), to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is\nnow regularly getting constant errors regarding running out of shared\nmemory (ther...
[ { "msg_contents": "Hello\n\nI am working on a potentially large database table, let's call it \"observation\", that has a foreign key to table \"measurement\". Each measurement is associated with either none or around five observations. In this kind of situation, it is well known that the statistics on the fore...
[ { "msg_contents": "Hi all,\n\nI've created a test table containing 21 million random dates and\ntimes, but I get wildly different results when I introduce a\nfunctional index then ANALYSE again, even though it doesn't use the\nindex:\n\npostgres=# CREATE TABLE test (id serial, sampledate timestamp);\nCREATE TAB...
[ { "msg_contents": "I have replication set up on servers with 9.1 and want to upgrade to 9.2\nI was hoping I could just bring them both down, upgrade them both and bring\nthem both up and continue replication, but that doesn't seem to work, the\nreplication server won't come up.\nIs there anyway to do this upgra...
[ { "msg_contents": "Hi,\n\nI have a self-referencing table that defines a hierarchy of projects and sub-projects.\n\nThis is the table definition:\n\nCREATE TABLE project\n(\n project_id integer primary key,\n project_name text,\n pl_name text,\n parent_id integer\n);\n\nALTER TABLE pro...
[ { "msg_contents": "Hello Perf,\n\nLately I've been pondering. As systems get more complex, it's not \nuncommon for tiered storage to enter the picture. Say for instance, a \nuser has some really fast tables on a NVRAM-based device, and \nslower-access stuff on a RAID, even slower stuff on an EDB, and variants \...
[ { "msg_contents": "Am I reading this correctly -- it appears that if SSL negotiation is\nenabled for a connection (say, when using pg_basebackup over a WAN) that\ncompression /*is automatically used*/ (provided it is supported on both\nends)?\n\nIs there a way to check and see if it _*is*_ on for a given connec...
[ { "msg_contents": "Hey everyone!\n\nThis is pretty embarrassing, but I've never seen this before. This is \nour system's current memory allocation from 'free -m':\n\n total used free buffers cached\nMem: 72485 58473 14012 3 34020\n-/+ buffers/cac...
[ { "msg_contents": "I have a problem with prepared statements choosing a bad query plan - I was hoping that 9.2 would have eradicated the problem :(\n\nTaken from the postgresql log:\n\n <2012-10-23 15:21:03 UTC acme_metastore 13798 5086b49e.35e6> LOG: duration: 20513.809 ms exec...
[ { "msg_contents": "henk de wit wrote:\n\n> Well, what do you know! That did work indeed. Immediately after the\n> ANALYZE on that parent table (taking only a few seconds) a fast\n> plan was created and the query executed in ms again. Silly me, I\n> should have tried that earlier.\n\nOf course, if your autovacuu...
[ { "msg_contents": "Hi,\n\ni've got a very strange problem on PostgreSQL 8.4, where the queryplaner goes absolutely havoc, when slightly changing one parameter.\n\nFirst the Tables which are involved:\n1. Table \"public.spsdata\"\n Column | ...
[ { "msg_contents": "Hey everyone,\n\nSo recently we upgraded to 9.1 and have noticed a ton of our queries got \nmuch worse. It turns out that 9.1 is *way* more optimistic about our \nfunctional indexes, even when they're entirely the wrong path. So after \ngoing through the docs, I see that the normal way to inc...
[ { "msg_contents": "Maciek Sakrejda wrote:\n\n> Before the switch, everything was running fine.\n\nOne thing to look for is a connection stuck in \"idle in transaction\"\nor old prepared transactions in pg_prepared_xacts. Either will cause\nall sorts of problems, but if you are using serializable transactions\nt...
[ { "msg_contents": "Shaun Thomas wrote:\n\n> Update the config and start as a slave, and it's the same as a\n> basebackup.\n\n... as long as the rsync was bracketed by calls to pg_start_backup()\nand pg_stop_backup().\n\n-Kevin\n\n", "msg_date": "Thu, 25 Oct 2012 08:10:03 -0400", "msg_from": "\"Kevin Gri...
[ { "msg_contents": "Böckler Andreas wrote:\n\n> I've played with seq_page_cost and enable_seqscan already, but you\n> have to know the right values before SELECT to get good results ;)\n\nThe idea is to model actual costs on your system.  You don't show\nyour configuration or describe your hardware, but you show...
[ { "msg_contents": "Böckler Andreas wrote:\n> Am 25.10.2012 um 20:22 schrieb Kevin Grittner:\n\n>> The idea is to model actual costs on your system. You don't show\n>> your configuration or describe your hardware, but you show an\n>> estimate of retrieving over 4000 rows through an index and\n>> describe a respo...
[ { "msg_contents": "ktm@rice.edu wrote:\n\n> You have the sequential_page_cost = 1 which is better than or equal\n> to the random_page_cost in all of your examples. It sounds like you\n> need a sequential_page_cost of 5, 10, 20 or more.\n\nThe goal should be to set the cost factors so that they model actual\ncos...
[ { "msg_contents": "Hi,\nI have a tree-structure managed with ltree and gist index.\nSimplified schema is\n\nCREATE TABLE crt (\n\tidcrt INT NOT NULL,\n\t...\n\tpathname LTREE\n)\nidcrt primary key and other index ix_crt_pathname on pathname with gist\n\nCREATE TABLE doc (\n\tiddoc INT NOT NULL, ...)\niddoc prim...
[ { "msg_contents": "Böckler Andreas wrote:\n\n> b) they high seq costs might be true for that table (partition at\n>    40gb), but not for the rest of the database Seqscan-Costs per\n>    table would be great.\n\nYou can set those per tablespace. Again, with about 40 spindles in\nour RAID, we got about ten times...
[ { "msg_contents": "Hey guys,\n\nI have a pretty nasty heads-up. If you have hardware using an Intel XEON \nand a newer Linux kernel, you may be experiencing very high CPU latency. \nYou can check yourself:\n\ncat /sys/devices/system/cpu/cpuidle/current_driver\n\nIf it says intel_idle, the Linux kernel will *agg...
[ { "msg_contents": "All... first let me say thank you for this forum.... I am new to it and\nrelatively new to postgres, more of a sysadmin than a DBA, but let me\nexplain my issue. I'll try also to post relevant information as well.\n\nOur IT group took over an app that we have running using postgres and it h...
[ { "msg_contents": "I am configuring streaming replication with hot standby\nwith PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\nPostgreSQL was compiled from source.\n\nIt works fine, except that starting the standby took for ever:\nit took the system more than 80 minutes to replay 48 WAL files\nand...
[ { "msg_contents": "Hi, thanks for any help. I've tried to be thorough, but let me know if I should\nprovide more information.\n\nA description of what you are trying to achieve and what results you expect:\n I have a large (3 million row) table called \"tape\" that represents files,\n which I join to a sm...
[ { "msg_contents": "Shaun Thomas wrote:\n\n> I know that current_date seems like an edge case, but I can't see\n> how getting the most recent activity for something is an uncommon\n> activity. Tip tracking is actually the most frequent pattern in the\n> systems I've seen.\n\nYeah, this has been a recurring probl...
[ { "msg_contents": "Woolcock, Sean wrote:\n\n> A description of what you are trying to achieve and what results\n> you expect:\n>  I have a large (3 million row) table called \"tape\" that represents\n>  files, which I join to a small (100 row) table called \"filesystem\"\n>  that represents filesystems. I have ...
[ { "msg_contents": "As I increase concurrency I'm experiencing what I believe are too slow\nqueries given the minuscule amount of data in my tables.\n\nI have 20 Django worker processes and use ab to generate 3000 requests\nto a particular URL which is doing some read only queries. I ran this\nwith ab concurrenc...
[ { "msg_contents": "Hello all,\n\nI have been pulling my hair out over the last few days trying to get any useful performance out of the following \npainfully slow query.\nThe query is JPA created, I've just cleaned the aliases to make it more readable.\nUsing 'distinct' or 'group by' deliver about the same resu...
[ { "msg_contents": "hi\n\n\ni have sql file (it's size are 1GB )\nwhen i execute it then the String is 987098801 bytr too long for encoding\nconversion error occured .\npls give me solution about\n\ni have XP 64-bit with 8 GB RAM shared_buffer 1GB check point = 34\n\n\nwith thanks\nmahavir\n\nhi i have sql f...
[ { "msg_contents": "Hi,\n When i start my postgres. Iam getting this error. I had \ninstalled 8.4 and 9.1\nIt was working good yesterday but not now.\nservice postgresql restart\n * Restarting PostgreSQL 8.4databaseserver\n* The PostgreSQL server failed to start. Please check the log output.\n\nIf i see...
[ { "msg_contents": "Catalin Iacob wrote:\n\n> Hardware:\n> Virtual machine running on top of VMWare\n> 4 cores, Intel(R) Xeon(R) CPU E5645 @ 2.40GHz\n> 4GB of RAM\n\nYou should carefully test transaction-based pools limited to around 8\nDB connections. Experiment with different size limits.\n\nhttp://wiki.postgr...
[ { "msg_contents": "Hi all\n\nI have a problem with a data import procedure that involve the following query:\n\nselect a,b,c,d\nfrom big_table b\njoin data_sequences_table ds\non b.key1 = ds.key1 and b.key2 = ds.key2\nwhere ds.import_id=xxxxxxxxxx\n\nThe \"big table\" has something like 10.000.000 records ore m...
[ { "msg_contents": "Hello there,\n\nI have PostgreSQL 8.3.18 server running on Centos 6.2 (2.6.32-220.7.1) with\nthis specs:\n\n2x CPU AMD Opteron 6282\n128GB RAM\nRaid 10 (12HD 15k rpm 1GB cache) with data\nRaid 10 (4HD 15k rpm 1GB cache) with xlog\nRaid 1 (15k rpm 1GB cache shared with xlog) with system\n\nOn ...
[ { "msg_contents": "Hi all,\n\nI was wondering if it is safe to install pg_buffercache on production\nsystems?\n\nThank you.\n\nHi all,I was wondering if it is safe to install pg_buffercache on production systems?Thank you.", "msg_date": "Tue, 30 Oct 2012 14:34:31 -0400", "msg_from": "pg noob <pgnube@gma...