[
{
"msg_contents": "Hi Vadim,\n\nI am trying to understand why GetSnapshotData() needs to acquire the\nSInval spinlock before it calls ReadNewTransactionId, rather than after.\nI see that you made it do so --- in the commit at\nhttp://www.ca.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/storage/ipc/shmem.c.diff?r1=1.41&r2=1.42\nbut I don't understand why the loss of concurrency is \"necessary\".\nSince we are going to treat all xids >= xmax as in-progress anyway,\nwhat's wrong with reading xmax before we acquire the SInval lock?\n\nAlso, it seems to me that in GetNewTransactionId(), it's important\nfor MyProc->xid to be set before releasing XidGenLockId, not after.\nOtherwise there is a narrow window for failure:\n\n1. Process A calls GetNewTransactionId. It allocates an xid of, say,\n1234, and increments nextXid to 1235. Just after releasing the\nXidGenLock spinlock, but before it can set its MyProc->xid, control\nswaps away from it.\n\n2. Process B gets to run. It runs GetSnapshotData. It sees nextXid =\n1235, and it does not see xid = 1234 in any backend's proc->xid.\nTherefore, B will assume xid 1234 has already terminated, when it\nhasn't.\n\nIsn't this broken? The problem would be avoided if GetNewTransactionId\nsets MyProc->xid before releasing the spinlock, since then after\nGetSnapshotData has called ReadNewTransactionId, we know that all older\nXIDs that are still active are recorded in proc structures.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Jul 2001 16:33:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Strangeness in xid allocation / snapshot setup"
}
]
[
{
"msg_contents": "In this bit of code in src/pl/plpgsql/src/gram.y in the current CVS\nsources, curname_def is defined as PLpgSQL_expr * but it is\nallocated the space required for a PLpgSQL_var. This looks like a\nbug.\n\nIan\n\n\t\t\t\t| decl_varname K_CURSOR decl_cursor_args decl_is_from K_SELECT decl_cursor_query\n\t\t\t\t\t{\n\t\t\t\t\t\tPLpgSQL_var *new;\n\t\t\t\t\t\tPLpgSQL_expr *curname_def;\n\t\t\t\t\t\tchar\t\tbuf[1024];\n\t\t\t\t\t\tchar\t\t*cp1;\n\t\t\t\t\t\tchar\t\t*cp2;\n\n\t\t\t\t\t\tplpgsql_ns_pop();\n\n\t\t\t\t\t\tnew = malloc(sizeof(PLpgSQL_var));\n\t\t\t\t\t\tmemset(new, 0, sizeof(PLpgSQL_var));\n\n\t\t\t\t\t\tcurname_def = malloc(sizeof(PLpgSQL_var));\n\t\t\t\t\t\tmemset(curname_def, 0, sizeof(PLpgSQL_var));\n",
"msg_date": "11 Jul 2001 17:38:28 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": true,
"msg_subject": "Possible bug in plpgsql/src/gram.y"
},
{
"msg_contents": "Confirmed. I found a second problem in the file too, very similar. \nPatch applied.\n\n> In this bit of code in src/pl/plpgsql/src/gram.y in the current CVS\n> sources, curname_def is defined as PLpgSQL_expr * but it is is\n> allocated the space required for a PLpgSQL_var. This looks like a\n> bug.\n> \n> Ian\n> \n> \t\t\t\t| decl_varname K_CURSOR decl_cursor_args decl_is_from K_SELECT decl_cursor_query\n> \t\t\t\t\t{\n> \t\t\t\t\t\tPLpgSQL_var *new;\n> \t\t\t\t\t\tPLpgSQL_expr *curname_def;\n> \t\t\t\t\t\tchar\t\tbuf[1024];\n> \t\t\t\t\t\tchar\t\t*cp1;\n> \t\t\t\t\t\tchar\t\t*cp2;\n> \n> \t\t\t\t\t\tplpgsql_ns_pop();\n> \n> \t\t\t\t\t\tnew = malloc(sizeof(PLpgSQL_var));\n> \t\t\t\t\t\tmemset(new, 0, sizeof(PLpgSQL_var));\n> \n> \t\t\t\t\t\tcurname_def = malloc(sizeof(PLpgSQL_var));\n> \t\t\t\t\t\tmemset(curname_def, 0, sizeof(PLpgSQL_var));\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/pl/plpgsql/src/gram.y\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/gram.y,v\nretrieving revision 1.22\ndiff -c -r1.22 gram.y\n*** src/pl/plpgsql/src/gram.y\t2001/07/11 18:54:18\t1.22\n--- src/pl/plpgsql/src/gram.y\t2001/07/12 01:15:05\n***************\n*** 332,338 ****\n \t\t\t\t\t{\n \t\t\t\t\t\tPLpgSQL_rec\t\t*new;\n \n! \t\t\t\t\t\tnew = malloc(sizeof(PLpgSQL_var));\n \n \t\t\t\t\t\tnew->dtype\t\t= PLPGSQL_DTYPE_REC;\n \t\t\t\t\t\tnew->refname\t= $1.name;\n--- 332,338 ----\n \t\t\t\t\t{\n \t\t\t\t\t\tPLpgSQL_rec\t\t*new;\n \n! 
\t\t\t\t\t\tnew = malloc(sizeof(PLpgSQL_rec));\n \n \t\t\t\t\t\tnew->dtype\t\t= PLPGSQL_DTYPE_REC;\n \t\t\t\t\t\tnew->refname\t= $1.name;\n***************\n*** 374,381 ****\n \t\t\t\t\t\tnew = malloc(sizeof(PLpgSQL_var));\n \t\t\t\t\t\tmemset(new, 0, sizeof(PLpgSQL_var));\n \n! \t\t\t\t\t\tcurname_def = malloc(sizeof(PLpgSQL_var));\n! \t\t\t\t\t\tmemset(curname_def, 0, sizeof(PLpgSQL_var));\n \n \t\t\t\t\t\tnew->dtype\t\t= PLPGSQL_DTYPE_VAR;\n \t\t\t\t\t\tnew->refname\t= $1.name;\n--- 374,381 ----\n \t\t\t\t\t\tnew = malloc(sizeof(PLpgSQL_var));\n \t\t\t\t\t\tmemset(new, 0, sizeof(PLpgSQL_var));\n \n! \t\t\t\t\t\tcurname_def = malloc(sizeof(PLpgSQL_expr));\n! \t\t\t\t\t\tmemset(curname_def, 0, sizeof(PLpgSQL_expr));\n \n \t\t\t\t\t\tnew->dtype\t\t= PLPGSQL_DTYPE_VAR;\n \t\t\t\t\t\tnew->refname\t= $1.name;",
"msg_date": "Wed, 11 Jul 2001 21:18:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in plpgsql/src/gram.y"
},
{
"msg_contents": "\nAlso, can someone tell my why we use malloc in plpgsql?\n\n\n> In this bit of code in src/pl/plpgsql/src/gram.y in the current CVS\n> sources, curname_def is defined as PLpgSQL_expr * but it is is\n> allocated the space required for a PLpgSQL_var. This looks like a\n> bug.\n> \n> Ian\n> \n> \t\t\t\t| decl_varname K_CURSOR decl_cursor_args decl_is_from K_SELECT decl_cursor_query\n> \t\t\t\t\t{\n> \t\t\t\t\t\tPLpgSQL_var *new;\n> \t\t\t\t\t\tPLpgSQL_expr *curname_def;\n> \t\t\t\t\t\tchar\t\tbuf[1024];\n> \t\t\t\t\t\tchar\t\t*cp1;\n> \t\t\t\t\t\tchar\t\t*cp2;\n> \n> \t\t\t\t\t\tplpgsql_ns_pop();\n> \n> \t\t\t\t\t\tnew = malloc(sizeof(PLpgSQL_var));\n> \t\t\t\t\t\tmemset(new, 0, sizeof(PLpgSQL_var));\n> \n> \t\t\t\t\t\tcurname_def = malloc(sizeof(PLpgSQL_var));\n> \t\t\t\t\t\tmemset(curname_def, 0, sizeof(PLpgSQL_var));\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 11 Jul 2001 21:19:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in plpgsql/src/gram.y"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Also, can someone tell my why we use malloc in plpgsql?\n\nPlain palloc() won't do because the compiled tree for the function needs\nto outlive the current query. However, malloc() is not cool. Really,\nthese structures ought to be built in a memory context created specially\nfor each function --- then it'd be possible to reclaim the memory if the\nfunction is deleted or we realize we need to invalidate its compiled\ntree.\n\nI've had this in mind to do for awhile, but haven't gotten to it.\nDo you want to put it on TODO?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Jul 2001 22:40:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in plpgsql/src/gram.y "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Also, can someone tell my why we use malloc in plpgsql?\n> \n> Plain palloc() won't do because the compiled tree for the function needs\n> to outlive the current query. However, malloc() is not cool. Really,\n> these structures ought to be built in a memory context created specially\n> for each function --- then it'd be possible to reclaim the memory if the\n> function is deleted or we realize we need to invalidate its compiled\n> tree.\n> \n> I've had this in mind to do for awhile, but haven't gotten to it.\n> Do you want to put it on TODO?\n\nDone:\n\n\t* Change PL/PgSQL to use palloc() instead of malloc()\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 11 Jul 2001 23:50:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in plpgsql/src/gram.y"
},
{
"msg_contents": "Bruce Momjian wrote:\n>\n> Confirmed. I found a second problem in the file too, very similar.\n> Patch applied.\n\n Cut'n paste error. Thanks to both of you, good catch.\n\n\nJan\n\n>\n> > In this bit of code in src/pl/plpgsql/src/gram.y in the current CVS\n> > sources, curname_def is defined as PLpgSQL_expr * but it is is\n> > allocated the space required for a PLpgSQL_var. This looks like a\n> > bug.\n> >\n> > Ian\n> >\n> > | decl_varname K_CURSOR decl_cursor_args decl_is_from K_SELECT decl_cursor_query\n> > {\n> > PLpgSQL_var *new;\n> > PLpgSQL_expr *curname_def;\n> > char buf[1024];\n> > char *cp1;\n> > char *cp2;\n> >\n> > plpgsql_ns_pop();\n> >\n> > new = malloc(sizeof(PLpgSQL_var));\n> > memset(new, 0, sizeof(PLpgSQL_var));\n> >\n> > curname_def = malloc(sizeof(PLpgSQL_var));\n> > memset(curname_def, 0, sizeof(PLpgSQL_var));\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n> Index: src/pl/plpgsql/src/gram.y\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/pl/plpgsql/src/gram.y,v\n> retrieving revision 1.22\n> diff -c -r1.22 gram.y\n> *** src/pl/plpgsql/src/gram.y 2001/07/11 18:54:18 1.22\n> --- src/pl/plpgsql/src/gram.y 2001/07/12 01:15:05\n> ***************\n> *** 332,338 ****\n> {\n> PLpgSQL_rec *new;\n>\n> ! new = malloc(sizeof(PLpgSQL_var));\n>\n> new->dtype = PLPGSQL_DTYPE_REC;\n> new->refname = $1.name;\n> --- 332,338 ----\n> {\n> PLpgSQL_rec *new;\n>\n> ! 
new = malloc(sizeof(PLpgSQL_rec));\n>\n> new->dtype = PLPGSQL_DTYPE_REC;\n> new->refname = $1.name;\n> ***************\n> *** 374,381 ****\n> new = malloc(sizeof(PLpgSQL_var));\n> memset(new, 0, sizeof(PLpgSQL_var));\n>\n> ! curname_def = malloc(sizeof(PLpgSQL_var));\n> ! memset(curname_def, 0, sizeof(PLpgSQL_var));\n>\n> new->dtype = PLPGSQL_DTYPE_VAR;\n> new->refname = $1.name;\n> --- 374,381 ----\n> new = malloc(sizeof(PLpgSQL_var));\n> memset(new, 0, sizeof(PLpgSQL_var));\n>\n> ! curname_def = malloc(sizeof(PLpgSQL_expr));\n> ! memset(curname_def, 0, sizeof(PLpgSQL_expr));\n>\n> new->dtype = PLPGSQL_DTYPE_VAR;\n> new->refname = $1.name;\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 08:04:48 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in plpgsql/src/gram.y"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Also, can someone tell my why we use malloc in plpgsql?\n>\n> Plain palloc() won't do because the compiled tree for the function needs\n> to outlive the current query. However, malloc() is not cool. Really,\n> these structures ought to be built in a memory context created specially\n> for each function --- then it'd be possible to reclaim the memory if the\n> function is deleted or we realize we need to invalidate its compiled\n> tree.\n>\n> I've had this in mind to do for awhile, but haven't gotten to it.\n> Do you want to put it on TODO?\n\n Planned that myself, but dropped the plan again because I\n think it'd be better to start more or less from scratch with\n a complete new PL that supports modules, global variables and\n the like. After 2-3 years we could simply remove the old\n style PL/pgSQL then.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 08:07:53 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible bug in plpgsql/src/gram.y"
}
]
[
{
"msg_contents": "> I am trying to understand why GetSnapshotData() needs to acquire the\n> SInval spinlock before it calls ReadNewTransactionId, rather than after.\n> I see that you made it do so --- in the commit at\n>\nhttp://www.ca.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/storage/ipc/sh\nmem.c.diff?r1=1.41&r2=1.42\n> but I don't understand why the loss of concurrency is \"necessary\".\n> Since we are going to treat all xids >= xmax as in-progress anyway,\n> what's wrong with reading xmax before we acquire the SInval lock?\n\nAFAIR, I made so to prevent following:\n\n1. Tx Old is running.\n2. Tx S reads new transaction ID in GetSnapshotData() and swapped away\n before SInval acquired.\n3. Tx New gets new transaction ID, makes changes and commits.\n4. Tx Old changes some row R changed by Tx New and commits.\n5. Tx S gets snapshot data and now sees R changed by *both* Tx Old and\n Tx New *but* does not see *other* changes made by Tx New =>\n Tx S reads unconsistent data.\n\n---------\n\nAs for issue below - I don't remember why I decided that\nit's not important and will need in some time to remember.\n\n> Also, it seems to me that in GetNewTransactionId(), it's important\n> for MyProc->xid to be set before releasing XidGenLockId, not after.\n> Otherwise there is a narrow window for failure:\n> \n> 1. Process A calls GetNewTransactionId. It allocates an xid of, say,\n> 1234, and increments nextXid to 1235. Just after releasing the\n> XidGenLock spinlock, but before it can set its MyProc->xid, control\n> swaps away from it.\n> \n> 2. Process B gets to run. It runs GetSnapshotData. It sees nextXid =\n> 1235, and it does not see xid = 1234 in any backend's proc->xid.\n> Therefore, B will assume xid 1234 has already terminated, when it\n> hasn't.\n> \n> Isn't this broken? 
The problem would be avoided if \n> GetNewTransactionId\n> sets MyProc->xid before releasing the spinlock, since then after\n> GetSnapshotData has called ReadNewTransactionId, we know that \n> all older\n> XIDs that are still active are recorded in proc structures.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Wed, 11 Jul 2001 19:00:27 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Strangeness in xid allocation / snapshot setup"
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> Since we are going to treat all xids >= xmax as in-progress anyway,\n>> what's wrong with reading xmax before we acquire the SInval lock?\n\n> AFAIR, I made so to prevent following:\n\n> 1. Tx Old is running.\n> 2. Tx S reads new transaction ID in GetSnapshotData() and swapped away\n> before SInval acquired.\n> 3. Tx New gets new transaction ID, makes changes and commits.\n> 4. Tx Old changes some row R changed by Tx New and commits.\n> 5. Tx S gets snapshot data and now sees R changed by *both* Tx Old and\n> Tx New *but* does not see *other* changes made by Tx New =>\n> Tx S reads unconsistent data.\n\nHmm, but that doesn't seem to have anything to do with the way that\nGetSnapshotData operates. If Tx New has an XID >= xmax read by Tx S'\nGetSnapshotData, then Tx New will be considered uncommitted by S no\nmatter which order we get the locks in; it hardly matters whether Tx New\nmanages to physically commit before we finish building the snapshot for\nS. On the other side of the coin, if Tx New's XID < xmax for S, then\n*with the GetNewTransactionId change that I want* we can be sure that\nTx New will be seen running by S when it does get the SInval lock\n(unless New has managed to finish before S gets the lock, in which case\nit's perfectly reasonable for S to treat it as committed or aborted).\n\nAnyway, it seems to me that the possibility of inconsistent data is\ninherent in the way we handle updated rows in Read Committed mode ---\nyou can always get to see a row that was emitted by a transaction you\ndon't see the other effects of.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Jul 2001 22:32:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strangeness in xid allocation / snapshot setup "
},
{
"msg_contents": "> > 1. Tx Old is running.\n> > 2. Tx S reads new transaction ID in GetSnapshotData() and swapped away\n> > before SInval acquired.\n> > 3. Tx New gets new transaction ID, makes changes and commits.\n> > 4. Tx Old changes some row R changed by Tx New and commits.\n> > 5. Tx S gets snapshot data and now sees R changed by *both* Tx Old and\n> > Tx New *but* does not see *other* changes made by Tx New =>\n> > Tx S reads unconsistent data.\n>\n> Hmm, but that doesn't seem to have anything to do with the way that\n> GetSnapshotData operates. If Tx New has an XID >= xmax read by Tx S'\n> GetSnapshotData, then Tx New will be considered uncommitted by S no\n> matter which order we get the locks in; it hardly matters whether Tx New\n\nYou forget about Tx Old! The point is that changes made by Tx Old *over*\nTx New' changes effectively make those Tx New' changes *visible* to\nTx S!\n\nAnd this is not good: Tx New inserts PK and corresponding FK and commits;\nTx Old changes some field in row with that FK and commits - now Tx S will\nsee\nFK row *but not PK one* (and what if Tx S was serializable Tx run by\npd_dump...)\n\nSInval lock prevents Tx Old from commit (xact.c:CommitTransaction()) in\npoints 2. - 4. above and so Tx Old' changes will not be visible to Tx S.\n\n> manages to physically commit before we finish building the snapshot for\n> S. 
On the other side of the coin, if Tx New's XID < xmax for S, then\n> *with the GetNewTransactionId change that I want* we can be sure that\n> Tx New will be seen running by S when it does get the SInval lock\n> (unless New has managed to finish before S gets the lock, in which case\n> it's perfectly reasonable for S to treat it as committed or aborted).\n\nAnd this is how it worked (MyProc->xid was updated while holding\nXXXGenLockId) in varsup.c from version 1.21 (Jun 1999) till\nversion 1.36 (Mar 2001) when you occasionally moved it outside\nof locked code part:\n\nhttp://www.ca.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/access/transam\n/varsup.c.diff?r1=1.35&r2=1.36\n\n> Anyway, it seems to me that the possibility of inconsistent data is\n> inherent in the way we handle updated rows in Read Committed mode ---\n> you can always get to see a row that was emitted by a transaction you\n> don't see the other effects of.\n\nIf I correctly understand meaning of \"emitted\" then sentence above is not\ncorrect:\nset of rows to be updated can only be shortened by concurrent transactions.\nYes, changes can be made over changes from concurrent transactions but only\nfor rows from original set defined by query snapshot and only if\nconcurrently\nupdated rows (from that set) satisfy query qual => a row must satisfy\nsnapshot\n*and* query qual = double satisfaction guaranteed -:))\nAnd let's remember that this behaviour is required for current RI\nconstraints\nimplementation.\n\nVadim\n\n\n",
"msg_date": "Thu, 12 Jul 2001 01:36:31 -0700",
"msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>",
"msg_from_op": false,
"msg_subject": "Re: Strangeness in xid allocation / snapshot setup "
},
{
"msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> You forget about Tx Old! The point is that changes made by Tx Old *over*\n> Tx New' changes effectively make those Tx New' changes *visible* to\n> Tx S!\n\nYes, but what's that got to do with the order of operations in\nGetSnapshotData? The scenario you describe can occur anyway.\nOnly if Tx Old is running in Read Committed mode, of course.\nBut if it is, then it's capable of deciding to update a row updated\nby Tx New. Whether Tx S's xmax value is before or after Tx New's ID\nis not going to change the behavior of Tx Old.\n\n> And this is how it worked (MyProc->xid was updated while holding\n> XXXGenLockId) in varsup.c from version 1.21 (Jun 1999) till\n> version 1.36 (Mar 2001) when you occasionally moved it outside\n> of locked code part:\n\nOkay, so that part was my error. I've changed it back.\n\nI'd still like to change GetSnapshotData to read the nextXid before\nit acquires SInvalLock, though. If we did that, it'd be safe to make\nGetNewTransactionId be\n\n\tSpinAcquire(XidGenLockId);\n\txid = nextXid++;\n\tSpinAcquire(SInvalLockId);\n\tMyProc->xid = xid;\n\tSpinRelease(SInvalLockId);\n\tSpinRelease(XidGenLockId);\n\nwhich is really necessary if you want to avoid assuming that\nTransactionIds can be fetched and stored atomically.\n\nTwo other changes I think are needed in this area:\n\n* In AbortTransaction, the clearing of MyProc->xid and MyProc->xmin\nshould be moved down to after RecordTransactionAbort and surrounded\nby acquire/release SInvalLock (to avoid atomic fetch/store assumption).\n\n* In HeapTupleSatisfiesVacuum (new tqual.c routine I just made\nyesterday, by extracting the tuple time qual checks from vacuum.c),\nthe order of checking for process status should be\n\t\tTransactionIdIsInProgress\n\t\tTransactionIdDidCommit\n\t\tTransactionIdDidAbort\nnot the present\n\t\tTransactionIdDidAbort\n\t\tTransactionIdDidCommit\n\t\tTransactionIdIsInProgress\n\nThe current way is wrong 
because if the other process is just in process\nof committing, we can get\n\nVACUUM\t\t\t\t\t\tother\n\nTransactionIdDidAbort - no\nTransactionIdDidCommit - no\n\n\t\t\t\t\t\tRecordTransactionCommit();\n\t\t\t\t\t\tMyProc->xid = 0;\n\nTransactionIdIsInProgress - no\n\nwhereupon vacuum decides that the other process crashed --- oops. If\nwe test TransactionIdIsInProgress *first* in tqual, and xact.c records\ncommit or abort *before* clearing MyProc->xid, then we cannot have this\nrace condition where the xact is no longer considered in progress but\nnot seen to be committed/aborted either.\n\n(Note: this bug is not a problem for existing VACUUM, since it can\nnever see any tuples from open transactions anyway. But it will be\nfatal for concurrent VACUUM.)\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 10:47:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strangeness in xid allocation / snapshot setup "
}
]
[
{
"msg_contents": "\n> The question is really whether you ever want a client to get a\n> \"rejected\" result from an open attempt, or whether you'd rather they \n> got a report from the back end telling them they can't log in. The \n> second is more polite but a lot more expensive. That expense might \n> really matter if you have MaxBackends already running.\n\nOne of us has probably misunderstood the listen parameter.\nIt only limits the number of clients that can connect concurrently.\nIt has nothing to do with the number of clients that are already connected.\nIt sort of resembles a maximum queue size for the accept loop.\nIncoming connections fill the queue, accept frees the queue by taking the \nconnection to a newly forked backend.\n\nAndreas\n",
"msg_date": "Thu, 12 Jul 2001 10:14:44 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "On Thu, Jul 12, 2001 at 10:14:44AM +0200, Zeugswetter Andreas SB wrote:\n> \n> > The question is really whether you ever want a client to get a\n> > \"rejected\" result from an open attempt, or whether you'd rather they \n> > got a report from the back end telling them they can't log in. The \n> > second is more polite but a lot more expensive. That expense might \n> > really matter if you have MaxBackends already running.\n> \n> One of us has probably misunderstood the listen parameter.\n\nI don't think so.\n\n> It only limits the number of clients that can connect concurrently.\n> It has nothing to do with the number of clients that are already \n> connected. It sort of resembles a maximum queue size for the accept \n> loop. Incoming connections fill the queue, accept frees the queue by\n> taking the connection to a newly forked backend.\n\nThe MaxBackends constant and the listen() parameter have no effect \nuntil the number of clients already connected or trying to connect\nand not yet noticed by the postmaster (respectively) exceed some \nthreshold. We would like to choose such thresholds so that we don't \npromise service we can't deliver.\n\nWe can assume the administrator has tuned MaxBackends so that a\nsystem with that many back ends running really _is_ heavily loaded. \n(We have talked about providing a better measure of load than the\ngross number of back ends; is that on the Todo list?)\n\nWhen the system is too heavily loaded (however measured), any further \nlogin attempts will fail. What I suggested is, instead of the \npostmaster accept()ing the connection, why not leave the connection \nattempt in the queue until we can afford a back end to handle it? \nThen, the argument to listen() will determine how many attempts can \nbe in the queue before the network stack itself rejects them without \nthe postmaster involved.\n\nAs it is, the listen() queue limit is not useful. 
It could be made\nuseful with a slight change in postmaster behavior.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 12 Jul 2001 12:14:05 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "Nathan Myers writes:\n\n> When the system is too heavily loaded (however measured), any further\n> login attempts will fail. What I suggested is, instead of the\n> postmaster accept()ing the connection, why not leave the connection\n> attempt in the queue until we can afford a back end to handle it?\n\nBecause the new connection might be a cancel request.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 12 Jul 2001 23:08:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "On Thu, Jul 12, 2001 at 11:08:34PM +0200, Peter Eisentraut wrote:\n> Nathan Myers writes:\n> \n> > When the system is too heavily loaded (however measured), any further\n> > login attempts will fail. What I suggested is, instead of the\n> > postmaster accept()ing the connection, why not leave the connection\n> > attempt in the queue until we can afford a back end to handle it?\n> \n> Because the new connection might be a cancel request.\n\nSupporting cancel requests seems like a poor reason to ignore what\nload-shedding support operating systems provide. \n\nTo support cancel requests, it would suffice for PG to listen at \nanother socket dedicated to administrative requests. (It might \neven ignore MaxBackends for connections on that socket.)\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 13 Jul 2001 13:03:22 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code)"
}
]
[
{
"msg_contents": "lindo=# vacuum analyze;\nNOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS \nNOT THE SAME AS HEAP' (4).\n\tRecreate the index.\nNOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS \nNOT THE SAME AS HEAP' (4).\n\tRecreate the index.\nNOTICE: Child itemid in update-chain marked as unused - can't continue \nrepair_frag\nVACUUM\n\n\nWhat must I do here?\n\nthanks,\nvalter\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Thu, 12 Jul 2001 11:41:14 +0200",
"msg_from": "\"V. M.\" <txian@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Child itemid in update-chain marked as unused - can't continue\n\trepair_frag"
},
{
"msg_contents": "> lindo=# vacuum analyze;\n> NOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS \n> NOT THE SAME AS HEAP' (4).\n> \tRecreate the index.\n> NOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS \n> NOT THE SAME AS HEAP' (4).\n> \tRecreate the index.\n> NOTICE: Child itemid in update-chain marked as unused - can't continue \n> repair_frag\n> VACUUM\n\nI would drop and recreate the index.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 12 Jul 2001 10:27:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Child itemid in update-chain marked as unused - can't\n\tcontinue repair_frag"
},
{
"msg_contents": "\"V. M.\" <txian@hotmail.com> writes:\n> lindo=# vacuum analyze;\n> NOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS \n> NOT THE SAME AS HEAP' (4).\n> \tRecreate the index.\n> NOTICE: Index probably_good_banner_myidx1: NUMBER OF INDEX' TUPLES (1) IS \n> NOT THE SAME AS HEAP' (4).\n> \tRecreate the index.\n> NOTICE: Child itemid in update-chain marked as unused - can't continue \n> repair_frag\n> VACUUM\n\nInteresting --- can you show us the sequence that got you into this\nstate? A reproducible case that causes these messages would be very\nuseful for debugging.\n\n> what i must do here?\n\nDumping and reloading the table should fix it, if nothing else does.\n\nBTW, what Postgres version is this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 11:14:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Child itemid in update-chain marked as unused - can't continue\n\trepair_frag"
}
]
[
{
"msg_contents": "Most, or at least half, of the error messages that libpq itself generates\nlook like \"PQwhatever(): this and that went wrong\", where PQwhatever is\nusually the function that generates the error message.\n\nI consider this practice ugly. If PQwhatever is an exported API function,\nthen the user knows perfectly well what function the message came from.\nIn fact, a common idiom is\n\n    if (PQsomething() != OK)\n        fprintf(stderr, \"PQsomething: %s\", PQerrorMessage(conn));\n\nwhich is obviously going to look funky.\n\nIf PQwhatever is an internal function, then this practice is just plain\nconfusing to the user. In some cases the code tries to be smart and pass\nin the name of the \"front line\" API function, but this doesn't really end up\nhelping anybody.\n\nlibpq is not so large and complex that it would be tedious for a\ndeveloper to locate any given error message or derive the location in case\nof a rare duplicate. (I understand that in the backend this premise does\nnot necessarily hold, but I'm only talking about libpq.)\n\nSo would anyone object if I get rid of this while doing the i18n pass over\nlibpq?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 12 Jul 2001 18:27:21 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Prefixing libpq error message with function names"
},
{
"msg_contents": "> Most, or at least half, of the error messages that libpq itself generates\n> look like \"PQwhatever(): this and that went wrong\", where PQwhatever is\n> usually the function that generates the error message.\n> \n> I consider this practice ugly. If PQwhatever is an exported API function,\n> then the users knows perfectly well what function the message came from.\n> In fact, a common idiom is\n> \n> if (PQsomething() != OK)\n> fprintf(stderr, \"PQsomething: %s\", PQerrorMessage(conn));\n> \n> which is obviously going to look funky.\n> \n> If PQwhatever is an internal function, then this practice is just plain\n> confusing to the user. In some cases the code tries to be smart and pass\n> in the name of \"front line\" API function, but this doesn't really end up\n> helping anybody.\n> \n> libpq is not complex and large enough that it would be tedious for a\n> developer to locate any given error message or derive the location in case\n> of a rare duplicate. (I understand that in the backend this premise does\n> not necessarily hold, but I'm only talking about libpq.)\n> \n> So would anyone object if I get rid of this while doing the i18n pass over\n> libpq?\n\nI vote it should be removed too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 12 Jul 2001 13:09:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prefixing libpq error message with function names"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> So would anyone object if I get rid of this while doing the i18n pass over\n> libpq?\n\nDon't forget to fix the numerous places where examples of these messages\nappear in the documentation ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 13:28:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prefixing libpq error message with function names "
}
] |
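The double-prefix effect Peter describes in the thread above is easy to reproduce outside libpq. This is an illustrative sketch only, not actual libpq code: `lib_set_error_prefixed` and `caller_report` are made-up names standing in for a library routine that bakes the function name into its stored message, and for the common caller idiom quoted in the thread.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for a connection's stored error message (hypothetical,
 * not the real libpq internals). */
static char errbuf[256];

/* The library bakes "PQwhatever: " into the message it stores. */
static void lib_set_error_prefixed(const char *fn, const char *msg)
{
    snprintf(errbuf, sizeof errbuf, "%s: %s", fn, msg);
}

static const char *lib_error_message(void)
{
    return errbuf;
}

/* The common caller idiom: prefix the function name again. */
static void caller_report(const char *fn, char *out, size_t outlen)
{
    snprintf(out, outlen, "%s: %s", fn, lib_error_message());
}
```

With both prefixes in place the user ends up seeing something like "PQconnectdb: PQconnectdb: could not connect" - exactly the funky duplication the proposal wants to eliminate.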
[
{
"msg_contents": "> > You forget about Tx Old! The point is that changes made by\n> > Tx Old *over* Tx New' changes effectively make those Tx New'\n> > changes *visible* to Tx S!\n> \n> Yes, but what's that got to do with the order of operations in\n> GetSnapshotData? The scenario you describe can occur anyway.\n\nTry to describe it step by step.\n\n> Only if Tx Old is running in Read Committed mode, of course.\n> But if it is, then it's capable of deciding to update a row updated\n> by Tx New. Whether Tx S's xmax value is before or after Tx New's ID\n> is not going to change the behavior of Tx Old.\n\n1. I consider particular case when Tx S' xmax is before Tx New' ID.\n1.1 For this case acquiring SInval lock before ReadNewTransactionId()\n changes behavior of Tx Old: it postpones change of Tx Old'\n (and Tx New') MyProc->xid in xact.c:CommitTransaction(), so Tx S\n will see Tx Old as running, ie Tx Old' changes will be invisible\n to Tx S on the base of analyzing MyProc.xid-s, just like Tx New'\n changes will be invisible on the base of analyzing next Tx ID.\n2. If you can find examples when current code is not able to provide\n consistent snapshot of running (out of interest) transactions\n let's think how to fix code. Untill then my example shows why\n we cannot move SInval lock request after ReadNewTransactionId().\n\n> I'd still like to change GetSnapshotData to read the nextXid before\n> it acquires SInvalLock, though. If we did that, it'd be safe to make\n> GetNewTransactionId be\n> \n> \tSpinAcquire(XidGenLockId);\n> \txid = nextXid++;\n> \tSpinAcquire(SInvalLockId);\n> \tMyProc->xid = xid;\n> \tSpinRelease(SInvalLockId);\n> \tSpinRelease(XidGenLockId);\n> \n> which is really necessary if you want to avoid assuming that\n> TransactionIds can be fetched and stored atomically.\n\nTo avoid that assumption one should add per MyProc spinlock.\n\nVadim\n",
"msg_date": "Thu, 12 Jul 2001 10:11:02 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: Strangeness in xid allocation / snapshot setup "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> 1.1 For this case acquiring SInval lock before ReadNewTransactionId()\n> changes behavior of Tx Old: it postpones change of Tx Old'\n> (and Tx New') MyProc->xid in xact.c:CommitTransaction(), so Tx S\n> will see Tx Old as running, ie Tx Old' changes will be invisible\n> to Tx S on the base of analyzing MyProc.xid-s, just like Tx New'\n> changes will be invisible on the base of analyzing next Tx ID.\n\nOh, now I get it: the point is to prevent Tx Old from exiting the set\nof \"still running\" xacts as seen by Tx S. Okay, it makes sense.\nI'll try to add some documentation to explain it.\n\nGiven this, I'm wondering why we bother with having a separate\nXidGenLock spinlock at all. Why not eliminate it and use SInval\nspinlock to lock GetNewTransactionId and ReadNewTransactionId?\n\nWhat did you think about reordering the vacuum qual tests and\nAbortTransaction sequence?\n\nBTW, I'm starting to think that it would be really nice if we could\nreplace our spinlocks with not just a semaphore, but something that has\na notion of \"shared\" and \"exclusive\" lock requests. For example,\nif GetSnapshotData could use a shared lock on SInvalLock, it'd\nimprove concurrency.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 13:24:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Strangeness in xid allocation / snapshot setup "
}
] |
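The window Tom describes in the thread above can be replayed deterministically by splitting xid allocation into its two steps. Everything below is a toy model, not the real backend structures: `nextXid`, `procXid`, and `snapshot_sees_running` only mirror the rule discussed (an xid is treated as in-progress if it is >= xmax or advertised in some proc slot).

```c
#include <assert.h>

typedef unsigned int Xid;

static Xid nextXid = 1234;     /* next xid to hand out */
static Xid procXid[2];         /* per-process published xid; 0 = none */

/* Step 1: done while holding XidGenLock. */
static Xid xid_alloc(void)
{
    return nextXid++;
}

/* Step 2: in the broken ordering this happens after the lock is
 * released, so a context switch can sneak in between the two steps. */
static void xid_publish(int proc, Xid xid)
{
    procXid[proc] = xid;
}

/* GetSnapshotData's rule: an xid is in-progress if it is >= xmax
 * or recorded in some proc's slot; otherwise assume it terminated. */
static int snapshot_sees_running(Xid xid, Xid xmax)
{
    if (xid >= xmax)
        return 1;
    for (int i = 0; i < 2; i++)
        if (procXid[i] == xid)
            return 1;
    return 0;
}
```

Allocating 1234 without publishing it and then taking a snapshot shows xid 1234 wrongly classified as terminated; publishing before the lock is released closes the window, which is exactly the ordering fix proposed above.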
[
{
"msg_contents": "Hi,\n\n I'd like to add another column to pg_rewrite, holding the\n string representation of the rewrite rule. A new utility\n command will then allow to recreate the rules (internally\n DROP/CREATE, but that doesn't matter).\n\n This would be a big help in case anything used in a view or\n other rules get's dropped and recreated (like underlying\n tables). There is of course a difference between the original\n CREATE RULE/VIEW statement and the string stored here. This\n is because we cannot rely on the actual query buffer but have\n to parseback the parsetree like done by the utility functions\n used for pg_rules. Thus, changing a column name of a base\n table will break the view either way.\n\n Anyway, what's the preferred syntax for triggering the rule\n recompilation? I thought about\n\n ALTER RULE {rulename|ALL} RECOMPILE;\n\n Where ALL triggers only those rules where the user actually\n has RULE access right on a relation.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 13:28:59 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Rule recompilation"
},
{
"msg_contents": "I remember awhile ago, someone floated the idea of a dependency view which\nwould list all objects and what OIDs they have in their plan. (i.e. what\ndo they depend on). \n\nI'm definitely no expert in this, but to me, one possible implementation\nwould be to enhance outfuncs to provide for creation tracking of all\nOIDs used in plan, and allow caller to receive this list and do something\nwith it. This would actually be very simple, as only _outOidList will need\nto be modified...(but then again, I'm known for oversimplifying things :)\n\nThen, we can add ev_depends/oidvector to pg_rewrite and store the\ndependency there, and for stored procedures, add a prodepends/oidvector to\npg_proc.\n\nThen, create union of pg_rewrite and pg_proc to list dependencies.\n\nThen, we would be able to provide warning when an object is dropped:\n'The following objects depend on this blah blah', and possibly an action\n\"alter database fixdepends oid\" which would recompile everything that\ndepends on that oid.\n\nHow's this sound?\n\nOn Thu, 12 Jul 2001, Jan Wieck wrote:\n\n> Hi,\n> \n> I'd like to add another column to pg_rewrite, holding the\n> string representation of the rewrite rule. A new utility\n> command will then allow to recreate the rules (internally\n> DROP/CREATE, but that doesn't matter).\n> \n> This would be a big help in case anything used in a view or\n> other rules get's dropped and recreated (like underlying\n> tables). There is of course a difference between the original\n> CREATE RULE/VIEW statement and the string stored here. This\n> is because we cannot rely on the actual query buffer but have\n> to parseback the parsetree like done by the utility functions\n> used for pg_rules. Thus, changing a column name of a base\n> table will break the view either way.\n> \n> Anyway, what's the preferred syntax for triggering the rule\n> recompilation? 
I thought about\n> \n> ALTER RULE {rulename|ALL} RECOMPILE;\n> \n> Where ALL triggers only those rules where the user actually\n> has RULE access right on a relation.\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== JanWieck@Yahoo.com #\n> \n> \n> \n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n\n\n\n",
"msg_date": "Thu, 12 Jul 2001 14:23:21 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Alex Pilosov wrote:\n> I remember awhile ago, someone floated the idea of a dependency view which\n> would list all objects and what OIDs they have in their plan. (i.e. what\n> do they depend on).\n>\n> I'm definitely no expert in this, but to me, one possible implementation\n> would be to enhance outfuncs to provide for creation tracking of all\n> OIDs used in plan, and allow caller to receive this list and do something\n> with it. This would actually be very simple, as only _outOidList will need\n> to be modified...(but then again, I'm known for oversimplifying things :)\n>\n> Then, we can add ev_depends/oidvector to pg_rewrite and store the\n> dependency there, and for stored procedures, add a prodepends/oidvector to\n> pg_proc.\n>\n> Then, create union of pg_rewrite and pg_proc to list dependencies.\n>\n> Then, we would be able to provide warning when an object is dropped:\n> 'The following objects depend on this blah blah', and possibly an action\n> \"alter database fixdepends oid\" which would recompile everything that\n> depends on that oid.\n>\n> How's this sound?\n\n Er - oversimplified :-)\n\n I remember it well, because Bruce is mentioning it every so\n often and constantly tries to convince me to start a project\n about a dependency table. I just think it's better not to do\n it for 7.2 (didn't we wanted to have that released THIS\n year?).\n\n Anyway, there's alot more to look at. Functions can be\n referenced in views, indexes, operators, aggregates and maybe\n more places. Views/rules can reference allmost any object.\n And this only builds the permanent cross reference.\n\n We have to take a look at runtime information, telling which\n prepared/saved SPI plan uses a particular object and trigger\n automatic re-prepare for the plan in case.\n\n For most objects, there is no such \"recompile\" possible - at\n least not without storing alot more information than now.\n Create a function and based on that an operator. 
Then you\n drop the function and create another one. Hmmm, pg_operator\n doesn't have the function name and argument types, it only\n knows the old functions oid. How do you find the new function\n from here? So basically we'd need some sort of pg_dump\n snippet associated with every object and issue an internal\n DROP/CREATE using that string to recompile it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 14:39:21 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> There is of course a difference between the original\n> CREATE RULE/VIEW statement and the string stored here. This\n> is because we cannot rely on the actual query buffer but have\n> to parseback the parsetree like done by the utility functions\n> used for pg_rules.\n\nDid you see my comments about extending the parser to make it possible\nto extract the appropriate part of the query buffer? This would allow\nus to get rid of the reverse-lister (ruleutils.c) entirely, not to\nmention readfuncs.c (but we'd still want outfuncs.c for debugging, I\nsuppose).\n\n> Anyway, what's the preferred syntax for triggering the rule\n> recompilation? I thought about\n> ALTER RULE {rulename|ALL} RECOMPILE;\n> Where ALL triggers only those rules where the user actually\n> has RULE access right on a relation.\n\nThe proposed definition of ALL seems completely off-base. If I have\nchanged my table foo, which is referenced by a rule attached to\nJoe's table bar, I would like to be able to force recompilation of\nJoe's rule. If I can't do that, a RECOMPILE command is useless.\nI might as well just restart my backend.\n\nBTW, a RECOMPILE command that affects only the current backend is pretty\nuseless anyway. How are you going to propagate the recompile request to\nother backends?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 14:50:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > There is of course a difference between the original\n> > CREATE RULE/VIEW statement and the string stored here. This\n> > is because we cannot rely on the actual query buffer but have\n> > to parseback the parsetree like done by the utility functions\n> > used for pg_rules.\n>\n> Did you see my comments about extending the parser to make it possible\n> to extract the appropriate part of the query buffer? This would allow\n> us to get rid of the reverse-lister (ruleutils.c) entirely, not to\n> mention readfuncs.c (but we'd still want outfuncs.c for debugging, I\n> suppose).\n\n Missed that, but sounds good!\n\n>\n> > Anyway, what's the preferred syntax for triggering the rule\n> > recompilation? I thought about\n> > ALTER RULE {rulename|ALL} RECOMPILE;\n> > Where ALL triggers only those rules where the user actually\n> > has RULE access right on a relation.\n>\n> The proposed definition of ALL seems completely off-base. If I have\n> changed my table foo, which is referenced by a rule attached to\n> Joe's table bar, I would like to be able to force recompilation of\n> Joe's rule. If I can't do that, a RECOMPILE command is useless.\n> I might as well just restart my backend.\n>\n> BTW, a RECOMPILE command that affects only the current backend is pretty\n> useless anyway. How are you going to propagate the recompile request to\n> other backends?\n\n Create a user table (for testing) and save the\n pg_get_ruledef() output of all rules into there. Then write a\n little PL/pgSQL function that loops over that table and for\n each row does\n\n EXECUTE ''drop rule '' || ...\n EXECUTE row.ruledef;\n\n Break a view by dropping and recreating an underlying table.\n Then see what happens when executing the stored proc ...\n including what happens in the relcache and other backends.\n\n This isn't local recompilation in current backend. 
It's\n recreation of the pg_rewrite entry for a relation, including\n propagation.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 15:10:19 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> This isn't local recompilation in current backend. It's\n> recreation of the pg_rewrite entry for a relation, including\n> propagation.\n\nWhere I'd like to go (see my previous mail) is that pg_rewrite,\npg_attrdef, and friends store *only* the source text of rules,\ndefault expressions, etc. No compiled trees at all in the database.\nSo there's no need to update the database entries, but there is a\nneed for something like a shared-cache-invalidation procedure to cause\nbackends to recompile things that depend on updated relations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 15:11:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@yahoo.com> writes:\n> > This isn't local recompilation in current backend. It's\n> > recreation of the pg_rewrite entry for a relation, including\n> > propagation.\n>\n> Where I'd like to go (see my previous mail) is that pg_rewrite,\n> pg_attrdef, and friends store *only* the source text of rules,\n> default expressions, etc. No compiled trees at all in the database.\n> So there's no need to update the database entries, but there is a\n> need for something like a shared-cache-invalidation procedure to cause\n> backends to recompile things that depend on updated relations.\n\nHmmm,\n\n are you sure that this doesn't have a severe performance\n impact?\n\n When and how often are these parsetrees read? IIRC these\n parsetree strings are interpreted somehow during heap_open().\n Now you want to run a flex/bison plus tons of syscache\n lookups for operator and function candidates and possible\n casting in this place?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 15:25:20 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> are you sure that this doesn't have a severe performance\n> impact?\n\nIt's not provable, of course, until we try it ... but I think the\nperformance impact would be small. Has anyone complained about the\nfact that plpgsql functions are stored as source not precompiled\ntrees? Seems like the same tradeoff.\n\n> When and how often are these parsetrees read? IIRC these\n> parsetree strings are interpreted somehow during heap_open().\n\nCurrently we load them during relcache load, but that's only because\nlittle work need be expended to make it happen. My vision of how\nthis should work is that the relcache would load the source text\nright away, but computation of the derived state would only happen\nwhen someone demands it, and then the relcache would cache the result.\nTake a look at how the list of indexes for each relation is handled\nin current sources --- same principle, we don't scan pg_index until\nand unless we have to.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 15:39:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> For most objects, there is no such \"recompile\" possible - at\n> least not without storing alot more information than now.\n> Create a function and based on that an operator. Then you\n> drop the function and create another one. Hmmm, pg_operator\n> doesn't have the function name and argument types, it only\n> knows the old functions oid. How do you find the new function\n> from here?\n\nWhat new function? The correct system behavior (as yet unimplemented)\nwould be to *drop* the operator the instant someone drops the underlying\nfunction.\n\nWhat is more interesting here is an (also unimplemented, but should\nexist) ALTER FUNCTION command that can replace the definition text\nof an existing function object. The link from the operator to the\nfunction then does not change --- but we'd like to cause cached plans,\netc, to be rebuilt if they depend on the old function definition via\nthe operator.\n\nI think it's wrong to see the problem as relinking primary definitions\nto point at new objects. The primary definition of an object does not\nneed to change, what we need is to be able to update derived data.\npg_rewrite is currently broken in the sense that it's not storing a\nprimary definition (ie, rule source text).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 15:46:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
},
{
"msg_contents": "On Thu, 12 Jul 2001, Jan Wieck wrote:\n\n> Alex Pilosov wrote:\n> > I remember awhile ago, someone floated the idea of a dependency view which\n> > would list all objects and what OIDs they have in their plan. (i.e. what\n> > do they depend on).\n> >\n> > I'm definitely no expert in this, but to me, one possible implementation\n> > would be to enhance outfuncs to provide for creation tracking of all\n> > OIDs used in plan, and allow caller to receive this list and do something\n> > with it. This would actually be very simple, as only _outOidList will need\n> > to be modified...(but then again, I'm known for oversimplifying things :)\n> >\n> > Then, we can add ev_depends/oidvector to pg_rewrite and store the\n> > dependency there, and for stored procedures, add a prodepends/oidvector to\n> > pg_proc.\n> >\n> > Then, create union of pg_rewrite and pg_proc to list dependencies.\n> >\n> > Then, we would be able to provide warning when an object is dropped:\n> > 'The following objects depend on this blah blah', and possibly an action\n> > \"alter database fixdepends oid\" which would recompile everything that\n> > depends on that oid.\n> >\n> > How's this sound?\n> \n> Er - oversimplified :-)\nYeah, most of my ideas end up like that, however see below ;)\n> \n> I remember it well, because Bruce is mentioning it every so\n> often and constantly tries to convince me to start a project\n> about a dependency table. I just think it's better not to do\n> it for 7.2 (didn't we wanted to have that released THIS\n> year?).\n>\n> Anyway, there's alot more to look at. Functions can be\n> referenced in views, indexes, operators, aggregates and maybe\n> more places. 
Views/rules can reference allmost any object.\n> And this only builds the permanent cross reference.\n\nFor views, the necessary information (what does a view depend on) is in\npg_rewrite anyway, which we can track with my proposal.\n\nFor indices/operators/aggregates, pg_depends view may simply union the\nnecessary information from the existing tables, no additional tracking is\nnecessary. (example, if index depends on a proc, we already have that proc\noid as indproc).\n\nIf you are talking that tracking nested dependencies is hard, I don't\ndisagree there, its a pain to do recursive queries in SQL, but the\nsolution is to have (non-sql) function list_deep_depend(oid) which would\nrecurse down the pg_depend and find what depends on an object...\n\n> We have to take a look at runtime information, telling which\n> prepared/saved SPI plan uses a particular object and trigger\n> automatic re-prepare for the plan in case.\nThis doesn't bother me that much. Restart of postmaster is an acceptable\nthing to clear [really strange] things up.\n\nI'm actually not looking for 100% recompilation when an underlying object\nis changed, I'm looking for 100% reliable dependency information and a\nwarning listing all objects that will break if I delete an object.\n\nYour proposal (automatic recompilation for rules) is orthogonal (but\nrelated) to what I'm suggesting. Having an ability to recompile a rule is\ngreat. Having an ability to see what rules depend on a given object is\nalso great. Having an ability to recompile all rules that depend on a\ngiven object is even better ;) \n\nHaving an ability to recompile _everything_ that depends on a given object\nis priceless, but we can take that one step at a time, first tackling\nrules...\n\n> For most objects, there is no such \"recompile\" possible - at\n> least not without storing alot more information than now.\n> Create a function and based on that an operator. Then you\n> drop the function and create another one. 
Hmmm, pg_operator\n> doesn't have the function name and argument types, it only\n> knows the old functions oid. How do you find the new function\n> from here? So basically we'd need some sort of pg_dump\n> snippet associated with every object and issue an internal\n> DROP/CREATE using that string to recompile it.\n\nWhich may not be all that hard now, as most things that pg_dump does now\nare integrated in the backend, and all pg_dump does is call an appropriate\nfunction (ala pg_get_viewdef/pg_get_ruledef). But I am content leaving it\nfor the next time, tackling rules for now.\n\n\n",
"msg_date": "Thu, 12 Jul 2001 15:46:33 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Jan Wieck writes:\n\n> For most objects, there is no such \"recompile\" possible - at\n> least not without storing alot more information than now.\n> Create a function and based on that an operator. Then you\n> drop the function and create another one. Hmmm, pg_operator\n> doesn't have the function name and argument types, it only\n> knows the old functions oid. How do you find the new function\n> from here?\n\nIn these cases it'd be a lot simpler (and SQL-comforming) to implement the\nDROP THING ... { RESTRICT | CASCADE } options. This would probably catch\nmost honest user errors more cleanly than trying to automatically\nrecompile things that perhaps aren't even meant to fit together any\nlonger.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Thu, 12 Jul 2001 21:51:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "On Thu, 12 Jul 2001, Peter Eisentraut wrote:\n\n> Jan Wieck writes:\n> \n> > For most objects, there is no such \"recompile\" possible - at\n> > least not without storing alot more information than now.\n> > Create a function and based on that an operator. Then you\n> > drop the function and create another one. Hmmm, pg_operator\n> > doesn't have the function name and argument types, it only\n> > knows the old functions oid. How do you find the new function\n> > from here?\n> \n> In these cases it'd be a lot simpler (and SQL-comforming) to implement the\n> DROP THING ... { RESTRICT | CASCADE } options. This would probably catch\n> most honest user errors more cleanly than trying to automatically\n> recompile things that perhaps aren't even meant to fit together any\n> longer.\nYes, I absolutely agree, and that's the aim of what I'm suggesting...\n\n-alex\n\n",
"msg_date": "Thu, 12 Jul 2001 16:05:05 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
}
] |
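Alex's hypothetical `list_deep_depend(oid)` helper amounts to a transitive closure over an (object, depends-on) edge table. Below is a minimal sketch under that assumption; the edge list, the oids, and all names are invented stand-ins for a real dependency catalog, not anything PostgreSQL actually provides.

```c
#include <assert.h>

typedef unsigned int Oid;

/* Hypothetical pg_depend-style edge list: obj depends on ref.
 * E.g. rule 200 uses table 100, view 300 uses rule 200. */
static const struct { Oid obj; Oid ref; } edges[] = {
    {200, 100},   /* rule 200 uses table 100 */
    {300, 200},   /* view 300 uses rule 200 */
    {400, 100},   /* index 400 uses table 100 */
};
enum { NEDGES = sizeof edges / sizeof edges[0] };

/* Collect every object that directly or transitively depends on
 * 'target'; returns the number of oids written to 'out'. */
static int list_deep_depend(Oid target, Oid *out, int max)
{
    int n = 0;
    /* repeated passes over the edge list until a fixpoint is reached */
    for (int pass = 0; pass < NEDGES; pass++)
        for (int e = 0; e < NEDGES; e++) {
            Oid ref = edges[e].ref;
            int hit = (ref == target);
            for (int i = 0; i < n && !hit; i++)
                hit = (out[i] == ref);      /* depends on a dependent */
            if (!hit)
                continue;
            int seen = 0;
            for (int i = 0; i < n && !seen; i++)
                seen = (out[i] == edges[e].obj);
            if (!seen && n < max)
                out[n++] = edges[e].obj;
        }
    return n;
}
```

Dropping table 100 could then warn about rule 200, view 300, and index 400 before anything breaks - the "objects that depend on this" report discussed above.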
[
{
"msg_contents": "> Oh, now I get it: the point is to prevent Tx Old from exiting the set\n> of \"still running\" xacts as seen by Tx S. Okay, it makes sense.\n> I'll try to add some documentation to explain it.\n\nTIA! I had no time from '99 -:)\n\n> Given this, I'm wondering why we bother with having a separate\n> XidGenLock spinlock at all. Why not eliminate it and use SInval\n> spinlock to lock GetNewTransactionId and ReadNewTransactionId?\n\nReading all MyProc in GetSnashot may take long time - why disallow\nnew Tx to begin.\n\n> What did you think about reordering the vacuum qual tests and\n> AbortTransaction sequence?\n\nSorry, no time at the moment.\n\n> BTW, I'm starting to think that it would be really nice if we could\n> replace our spinlocks with not just a semaphore, but something that has\n> a notion of \"shared\" and \"exclusive\" lock requests. For example,\n> if GetSnapshotData could use a shared lock on SInvalLock, it'd\n> improve concurrency.\n\nYes, we already told about light lock manager (no deadlock detection etc).\n\nVadim\n",
"msg_date": "Thu, 12 Jul 2001 10:36:30 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: Strangeness in xid allocation / snapshot setup "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> Given this, I'm wondering why we bother with having a separate\n>> XidGenLock spinlock at all. Why not eliminate it and use SInval\n>> spinlock to lock GetNewTransactionId and ReadNewTransactionId?\n\n> Reading all MyProc in GetSnashot may take long time - why disallow\n> new Tx to begin.\n\nBecause we need to synchronize? It bothers me that we're assuming\nthat fetching/storing XIDs is atomic. There's no possibility at all\nof going to 8-byte XIDs as long as the code is like this.\n\nI doubt that a spinlock per PROC structure would be a better answer,\neither; the overhead of getting and releasing each lock would be\nnontrivial, considering the small number of instructions spent at\neach PROC in these routines.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 14:37:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Strangeness in xid allocation / snapshot setup "
}
] |
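The "shared and exclusive lock requests" idea that Tom and Vadim converge on above reduces to a simple admission rule. The sketch below models only that bookkeeping; the names are invented, and a real light lock manager would queue and sleep (e.g. on a semaphore) instead of failing a request - none of this is actual PostgreSQL code.

```c
#include <assert.h>

/* Toy light lock: any number of shared holders, or one exclusive
 * holder, never both at once. */
typedef struct {
    int shared_holders;
    int exclusive_held;
} LightLock;

static int lock_try_shared(LightLock *l)
{
    if (l->exclusive_held)
        return 0;               /* a writer is in; reader must wait */
    l->shared_holders++;
    return 1;
}

static int lock_try_exclusive(LightLock *l)
{
    if (l->exclusive_held || l->shared_holders > 0)
        return 0;               /* anyone else in; writer must wait */
    l->exclusive_held = 1;
    return 1;
}

static void lock_release_shared(LightLock *l)
{
    l->shared_holders--;
}

static void lock_release_exclusive(LightLock *l)
{
    l->exclusive_held = 0;
}
```

Under this rule two concurrent GetSnapshotData calls could both hold SInvalLock in shared mode while a committer's exclusive request waits for them - the concurrency gain Tom is after.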
[
{
"msg_contents": "> Anyway, what's the preferred syntax for triggering the rule\n> recompilation? I thought about\n> \n> ALTER RULE {rulename|ALL} RECOMPILE;\n> \n> Where ALL triggers only those rules where the user actually\n> has RULE access right on a relation.\n\nIn good world rules (PL functions etc) should be automatically\nmarked as dirty (ie recompilation required) whenever referenced\nobjects are changed.\n\nVadim\n",
"msg_date": "Thu, 12 Jul 2001 10:41:00 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Rule recompilation"
},
{
"msg_contents": "Mikheev, Vadim wrote:\n> > Anyway, what's the preferred syntax for triggering the rule\n> > recompilation? I thought about\n> >\n> > ALTER RULE {rulename|ALL} RECOMPILE;\n> >\n> > Where ALL triggers only those rules where the user actually\n> > has RULE access right on a relation.\n>\n> In good world rules (PL functions etc) should be automatically\n> marked as dirty (ie recompilation required) whenever referenced\n> objects are changed.\n\n Yepp, and it'd be possible for rules (just not right now).\n But we're not in a really good world, so it'll not be\n possible for PL's.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 13:57:02 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
}
] |
[
{
"msg_contents": "One day i found these in my logs, and the vacuum process hung, effectively\nlocking everybody out of some tables...\nVersion 7.1.2\n\nVACUUM ANALYZE\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\n\nIt was sleeping in semop().\n\nAny ideas, or fixes is welcome...\n\ncheers\n\nMagnus\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n",
"msg_date": "Thu, 12 Jul 2001 19:52:01 +0200",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>",
"msg_from_op": true,
"msg_subject": "Vacuum errors"
}
] |
[
{
"msg_contents": "> > In good world rules (PL functions etc) should be automatically\n> > marked as dirty (ie recompilation required) whenever referenced\n> > objects are changed.\n> \n> Yepp, and it'd be possible for rules (just not right now).\n> But we're not in a really good world, so it'll not be\n> possible for PL's.\n\nWhy is it possible in Oracle' world? -:)\n\nVadim\n",
"msg_date": "Thu, 12 Jul 2001 10:55:30 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Rule recompilation"
},
{
"msg_contents": "Mikheev, Vadim wrote:\n> > > In good world rules (PL functions etc) should be automatically\n> > > marked as dirty (ie recompilation required) whenever referenced\n> > > objects are changed.\n> >\n> > Yepp, and it'd be possible for rules (just not right now).\n> > But we're not in a really good world, so it'll not be\n> > possible for PL's.\n>\n> Why is it possible in Oracle' world? -:)\n\n Because of there limited features?\n\n Think about a language like PL/Tcl. At the time you call a\n script for execution, you cannot even be sure that the Tcl\n bytecode compiler parsed anything, so how will you ever know\n the complete set of objects referenced from this function?\n\n And PL/pgSQL? We don't prepare all the statements into SPI\n plans at compile time. We wait until the separate branches\n are needed, so how do you know offhand here?\n\n In the PL/pgSQL case it *might* be possible. But is it worth\n it?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 14:12:14 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "On Thu, 12 Jul 2001, Jan Wieck wrote:\n\n> Mikheev, Vadim wrote:\n> > > > In good world rules (PL functions etc) should be automatically\n> > > > marked as dirty (ie recompilation required) whenever referenced\n> > > > objects are changed.\n> > >\n> > > Yepp, and it'd be possible for rules (just not right now).\n> > > But we're not in a really good world, so it'll not be\n> > > possible for PL's.\n> >\n> > Why is it possible in Oracle' world? -:)\n> \n> Because of there limited features?\n> \n> Think about a language like PL/Tcl. At the time you call a\n> script for execution, you cannot even be sure that the Tcl\n> bytecode compiler parsed anything, so how will you ever know\n> the complete set of objects referenced from this function?\n> And PL/pgSQL? We don't prepare all the statements into SPI\n> plans at compile time. We wait until the separate branches\n> are needed, so how do you know offhand here?\nIf plan hasn't been made (oid has not been referenced), does it really\ndepend on an object?\n\n> In the PL/pgSQL case it *might* be possible. But is it worth\n> it?\nIt'd be possible in general, as long as pl compilers properly keep track\nwhat their objects depend on in pg_proc. (as in my above email).\n\n-alex \n\n",
"msg_date": "Thu, 12 Jul 2001 14:30:28 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> And PL/pgSQL? We don't prepare all the statements into SPI\n> plans at compile time. We wait until the separate branches\n> are needed, so how do you know offhand here?\n\nIf we haven't prepared a statement yet, then we don't need to reprepare\nit, hmm? So it'd be sufficient to keep track of a list of all objects\nreferenced *so far* by each plpgsql function.\n\nYour complaints about pltcl and plperl are irrelevant because they don't\nsave prepared plans. For the languages that do save prepared plans, it\nseems possible to keep track of a list of all objects that each plan\ndepends on. So I think that we should try to do it right, rather than\nassuming from the start that we can't.\n\n> In the PL/pgSQL case it *might* be possible. But is it worth\n> it?\n\nYes. If we're not going to do it right, I think we needn't bother to do\nit at all. \"Restart your backend\" is just as good an answer, probably\nbetter, than \"issue a RECOMPILE against everything affected by whatever\nyou changed\". If the system can't keep track of that, how likely is it\nthat the user can?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 14:55:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n>\n> > In the PL/pgSQL case it *might* be possible. But is it worth\n> > it?\n>\n> Yes. If we're not going to do it right, I think we needn't bother to do\n> it at all. \"Restart your backend\" is just as good an answer, probably\n> better, than \"issue a RECOMPILE against everything affected by whatever\n> you changed\". If the system can't keep track of that, how likely is it\n> that the user can?\n\n Stop!\n\n We're talking about two different things here.\n\n Recompilation (or better fixing Oid references in system\n catalog entries) is required to correct a system catalog that\n got inconsistent due to dropping and recreating a particular\n object.\n\n Regeneration of runtime things like saved SPI plans might be\n related to that, but it's not exactly the same. That surely\n is corrected by restarting the backend. But you cannot\n correct a broken view with a backend restart, can you?\n\n And pardon, but PL/Tcl can save SPI plans. At least it had\n that capability when I wrote the language handler, so if it\n cannot any more WHO DID THAT?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 15:18:15 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Jan Wieck <JanWieck@yahoo.com> writes:\n> Stop!\n> We're talking about two different things here.\n\nYou're right: fixing obsoleted querytrees stored in pg_rewrite and\nsimilar catalogs is not the same thing as invalidating cached\nquery plans in plpgsql, SPI, etc.\n\nHowever, we could turn them into the same problem if we rearrange the\ncatalogs to store only source text. Then there's no need to update any\npermanent state, only a need to cause invalidation of derived state\ninside various backends.\n\nEach piece of derived state could (and should IMHO) be tagged with a\nlist of all the objects it depends on; then an invalidation message for\nany of those objects would cause that piece of state to be thrown away\nand rebuilt at next use. Just like the catalog caches ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 15:33:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
},
{
"msg_contents": "IMHO we are trying to have a compiled language behave like an interpreted \nlanguage.\nThis is a bottom to top approach with no real future. Here is a proposal of \na top to bottom approach.\n\nWhat we do in pgAdmin is that we store objects (functions, views and \ntriggers) in separate tables called Development tables.\nThe production objects (which you are talking about) are running safe \n*without* modification. At any moment, it is possible to recompile the\ndevelopment objects (functions, triggers and views modified by the user) \nfrom development tables.\n\npgAdmin then checks dependencies a goes through a whole compilation process.\nBUT ONLY AT USER REQUEST.\n\nWho would honestly work on a production server? This is too dangerous in a \nprofessional environment.\nIn a near future, we will offer the ability to store PostgreSQL objects on \nseparate servers (called code repository).\n\nYou will then be able to move objects from the development server to the \nproduction servers. Think of replication.\nAlso, pgAdmin will include advanced team work features and code serialization.\n\npgAdmin is already an *old* product as we are working on exciting new things:\nhttp://www.greatbridge.org/project/pgadmin/cvs/cvs.php/pgadmin/help/todo.html\n\nBefore downloading pgAdmin from CVS, read this:\nhttp://www.greatbridge.org/project/pgadmin/cvs/cvs.php/binaries/readme.html\n\nWe are looking for feedback and help from the community.\nGreetings from Jean-Michel POURE, Paris, France\n\n\n",
"msg_date": "Thu, 12 Jul 2001 23:07:24 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
}
] |
[
{
"msg_contents": "> > Why is it possible in Oracle' world? -:)\n> \n> Because of there limited features?\n\nAnd now we limit our additional advanced features -:)\n\n> Think about a language like PL/Tcl. At the time you call a\n> script for execution, you cannot even be sure that the Tcl\n> bytecode compiler parsed anything, so how will you ever know\n> the complete set of objects referenced from this function?\n> \n> And PL/pgSQL? We don't prepare all the statements into SPI\n> plans at compile time. We wait until the separate branches\n> are needed, so how do you know offhand here?\n\nAt the time of creation function body could be parsed and referenced\nobjects stored in system table (or function could be marked as dirty\nand referenced objects would stored at first compilation and after\neach subsequent successful after-dirtied-compilation).\nIsn't it possible for PL/_ANY_L_ too?\n\n> In the PL/pgSQL case it *might* be possible. But is it worth\n> it?\n\nSure.\n\nVadim\n",
"msg_date": "Thu, 12 Jul 2001 11:23:56 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Rule recompilation"
},
{
"msg_contents": "Mikheev, Vadim wrote:\n> > > Why is it possible in Oracle' world? -:)\n> >\n> > Because of there limited features?\n>\n> And now we limit our additional advanced features -:)\n>\n> > Think about a language like PL/Tcl. At the time you call a\n> > script for execution, you cannot even be sure that the Tcl\n> > bytecode compiler parsed anything, so how will you ever know\n> > the complete set of objects referenced from this function?\n> >\n> > And PL/pgSQL? We don't prepare all the statements into SPI\n> > plans at compile time. We wait until the separate branches\n> > are needed, so how do you know offhand here?\n>\n> At the time of creation function body could be parsed and referenced\n> objects stored in system table (or function could be marked as dirty\n> and referenced objects would stored at first compilation and after\n> each subsequent successful after-dirtied-compilation).\n> Isn't it possible for PL/_ANY_L_ too?\n\n Nonononono!\n\n PL/Tcl is a very good example for that. To load a function,\n basically a \"proc\" command is executed in a Tcl interpreter.\n But execution of Tcl's \"proc\" command doesn't cause the\n bytecode compiler to kick in and actually parse the\n procedures body. So until the first actual call of the\n function, the Tcl interpreter just holds a string for the\n body. Now a procedure body in Tcl is basically a list of\n commands with possible sublists. On call, only the topmost\n level of this list hierarchy is parsed and compiled, command\n per command. Plus recursively those sublists, needed for this\n invocation.\n\n You cannot control Tcl's bytecode compiler from the outside.\n There's no API for that. And Tcl is a dynamic language. 
A\n function might execute dynamic code found in some user table?\n\n Since we don't save bytecode for PL objects, these all are\n IMHO runtime dependencies and most of them could be solved if\n we fix SPI to deal with it correctly.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 12 Jul 2001 15:00:44 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> You cannot control Tcl's bytecode compiler from the outside.\n\nAn excellent example. You don't *need* to control Tcl's bytecode\ncompiler from the outside, because *Tcl gets it right without help*.\nIt takes care of the function-text-to-derived-form dependency\ninternally: when you redefine the function, the internal form is\ndiscarded and rebuilt. You don't have to worry about it.\n\nWhat everyone else is telling you is that we should strive to do the\nsame, not punt and make the user tell us when to recompile.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 15:50:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> \n> What everyone else is telling you is that we should strive to do the\n> same, not punt and make the user tell us when to recompile.\n> \n\nIn Oracle, objects like views, functions and triggers are\njust marked INVALID when an object to which they make\nreference is changed. INVALID objects are recompiled when\nthey are needed. in particular, if a table was dropped and\na table is created with the same name then the objects which\nmake reference (directly/indirectly) to the table would\nrevive.\nWe would have to reconsider *alter table .. rename ..* ..\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 13 Jul 2001 09:42:19 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> We would have to reconsider *alter table .. rename ..* ..\n\nYeah, that's one thing that would act differently if we adopt my idea of\nconsidering the source text of the rule to be the primary definition.\nIt's not clear if this is good or bad, however. Consider:\n\n\tcreate table foo (f1 int, f2 text);\n\n\tcreate view v1 as select f1 from foo;\n\n\talter table foo rename column f1 to fx;\n\n\talter table foo rename column f2 to f1;\n\nAt this point, what would you expect v1 to return, and why? How\nwould you justify it in terms of \"what the user would expect\",\nas opposed to \"what we can conveniently implement\"?\n\nAnother interesting case is:\n\n\tcreate table foo (f1 int, f2 text);\n\n\tcreate view v1 as select * from foo;\n\n\talter table foo add column f3 float;\n\nShould v1 now have three columns? If not, how do you justify it?\nIf so, how do you implement it (v1 has already got its pg_attribute\nrows)?\n\nMessy any way you look at it, I fear. But clearly my idea needs\nmore thought ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 21:37:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > We would have to reconsider *alter table .. rename ..* ..\n> \n> Yeah, that's one thing that would act differently if we adopt my idea of\n> considering the source text of the rule to be the primary definition.\n> It's not clear if this is good or bad, however. Consider:\n> \n> create table foo (f1 int, f2 text);\n> \n> create view v1 as select f1 from foo;\n> \n> alter table foo rename column f1 to fx;\n> \n> alter table foo rename column f2 to f1;\n> \n> At this point, what would you expect v1 to return, and why? How\n> would you justify it in terms of \"what the user would expect\",\n> as opposed to \"what we can conveniently implement\"?\n> \n\nThe view v1 is INVALIDated by the first ALTER command.\nIt is still INVALID after the second ALTER command.\nWhen *select * from v1* is called, the re-compilation \nwould translate it into *select f1(originally f2) from foo*.\nThe behavior is different from that the current.\nThe current *reference by id* approach is suitable\nfor the current *rename* behavior but *reference by\nname* approach isn't. *rename* isn't that easy from\nthe first IMHO.\n\n> Another interesting case is:\n> \n> create table foo (f1 int, f2 text);\n> \n> create view v1 as select * from foo;\n> \n> alter table foo add column f3 float;\n> \n> Should v1 now have three columns? \n\nYes. We could create the view v1 as *select f1, f2 \nfrom foo* from the first if we hate the side effect. \n\n> If not, how do you justify it?\n> If so, how do you implement it (v1 has already got its pg_attribute\n> rows)?\n> \n\nIsn't the creation of pg_attribute tuples a part of\n(re-)compilation ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 13 Jul 2001 11:16:55 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Rule recompilation"
}
] |
[
{
"msg_contents": "What's the fastest way to select the number of rows in a table? If I\nuse count(*) with no whereclause, it uses a seq_scan and takes 4 secs\n(122k rows). With a where clause, it uses an index and returns in < 1\nsec. Selecting count(requestnumber), which is an indexed column, with\nno where clause again takes 4 secs. This latter version, I thought,\nwould use the index. The values of requestnumber are very distributed.\n\nThanks\n\n",
"msg_date": "Thu, 12 Jul 2001 14:45:09 -0400",
"msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>",
"msg_from_op": true,
"msg_subject": "select count..."
},
{
"msg_contents": "\"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n\n> What's the fastest way to select the number of rows in a table? If I\n> use count(*) with no whereclause, it uses a seq_scan and takes 4 secs\n> (122k rows). With a where clause, it uses an index and returns in < 1\n> sec. Selecting count(requestnumber), which is an indexed column, with\n> no where clause again takes 4 secs. This latter version, I thought,\n> would use the index. The values of requestnumber are very distributed.\n\nExactly how would you expect to get a count of all the rows in the\ntable (no WHERE clause) without a sequential scan? I don't see any\nproblem with the above results.\n\nThe only case in which COUNT(requestnumber) might use the index would\nbe if there were a significant number of NULLs in that column, but you \ndon't give any information on that.\n\n-Doug\n-- \nThe rain man gave me two cures; he said jump right in,\nThe first was Texas medicine--the second was just railroad gin,\nAnd like a fool I mixed them, and it strangled up my mind,\nNow people just get uglier, and I got no sense of time... --Dylan\n",
"msg_date": "13 Jul 2001 09:16:35 -0400",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: select count..."
},
{
"msg_contents": "I think 4 seconds is way too long to return the results. And NULLs in a\ncolumn should not change the answer. It seems logical that even a sequential\nscan of an index would be much faster than a scan of the table (in this case\nthe record size is fairly large).\n\nI'm trying to optimize queries that are being ported from another DBMS, where\nthe same query above returns in 10s of milliseconds. 4 secs is simply too\nlong. So I'm looking for a way to do it faster.\n\nMS SQL Server docs have optimization hints for such a query and using the\n'count(requestnumber)' syntax, where requestnumber is an indexed field, was\nsuggested.\n\nIt's me and Postgres against another developer and MS SQL Server to see who\ngets the port done fastest, with the best performance after the port. I don't\nwant to lose!\n\nD\n\nDoug McNaught wrote:\n\n> \"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n>\n> > What's the fastest way to select the number of rows in a table? If I\n> > use count(*) with no whereclause, it uses a seq_scan and takes 4 secs\n> > (122k rows). With a where clause, it uses an index and returns in < 1\n> > sec. Selecting count(requestnumber), which is an indexed column, with\n> > no where clause again takes 4 secs. This latter version, I thought,\n> > would use the index. The values of requestnumber are very distributed.\n>\n> Exactly how would you expect to get a count of all the rows in the\n> table (no WHERE clause) without a sequential scan? I don't see any\n> problem with the above results.\n>\n> The only case in which COUNT(requestnumber) might use the index would\n> be if there were a significant number of NULLs in that column, but you\n> don't give any information on that.\n>\n> -Doug\n> --\n> The rain man gave me two cures; he said jump right in,\n> The first was Texas medicine--the second was just railroad gin,\n> And like a fool I mixed them, and it strangled up my mind,\n> Now people just get uglier, and I got no sense of time... 
--Dylan\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Fri, 13 Jul 2001 09:51:55 -0400",
"msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>",
"msg_from_op": true,
"msg_subject": "Re: select count..."
},
{
"msg_contents": "\"P. Dwayne Miller\" <dmiller@espgroup.net> writes:\n> I think 4 seconds is way too long to return the results. And NULLs in a\n> column should not change the answer.\n\nIf you're doing count(foo) then NULLs in column foo definitely *should*\nchange the answer. count(foo) does not count nulls.\n\nIt seemed to me that your original question was comparing apples and\noranges. count(*) with no where clause will count all the rows in\nthe table, sure enough, but if you add a where clause then it's not\ncounting all the rows anymore, so why shouldn't that take less time?\n\nBut possibly the answer you need is just that Postgres does not maintain\nan accurate count of the rows in a table, so it has to scan the table\nto compute count(*). Some other DBMSes do maintain such a count and so\nthey can return count(*) essentially instantaneously. But they pay for\nthat speed with a distributed slowdown in all updates of the table. If\nyou have a database application that's designed around the assumption\nthat count(*) is free, you'll probably need to rethink that assumption\nto get good performance with Postgres.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Jul 2001 11:44:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: select count... "
},
{
"msg_contents": "\"P. Dwayne Miller\" wrote:\n> \n> I think 4 seconds is way too long to return the results. And NULLs in a\n> column should not change the answer. It seems logical that even a sequential\n> scan of an index would be much faster than a scan of the table (in this case\n> the record size is fairly large).\n> \n> I'm trying to optimize queries that are being ported from another DBMS, where\n> the same query above returns in 10s of milliseconds. 4 secs is simply too\n> long. So I'm looking for a way to do it faster.\n> \n> MS SQL Server docs have optimization hints for such a query and using the\n> 'count(requestnumber)' syntax, where requestnumber is an indexed field, was\n> suggested.\n\nCould you possibly mean \"select(distinct requestnumber)\" ?\n\nIf the performance of count(xxx) is critical for your app, I suggest\nkeeping the \ncounts in a separate table with a trigger. Postgres can not optimise to\nuse \nindexes _only_ , as indexes don't keep commit information - it must be\nchecked \nfrom data heap.\n\n---------------\nHannu\n",
"msg_date": "Fri, 13 Jul 2001 18:09:47 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Re: select count..."
}
] |
[
{
"msg_contents": "> >> Given this, I'm wondering why we bother with having a separate\n> >> XidGenLock spinlock at all. Why not eliminate it and use SInval\n> >> spinlock to lock GetNewTransactionId and ReadNewTransactionId?\n> \n> > Reading all MyProc in GetSnashot may take long time - why disallow\n> > new Tx to begin.\n> \n> Because we need to synchronize? It bothers me that we're assuming\n> that fetching/storing XIDs is atomic. There's no possibility at all\n> of going to 8-byte XIDs as long as the code is like this.\n> \n> I doubt that a spinlock per PROC structure would be a better answer,\n> either; the overhead of getting and releasing each lock would be\n> nontrivial, considering the small number of instructions spent at\n> each PROC in these routines.\n\nIsn't spinlock just a few ASM instructions?... on most platforms...\n\nVadim\n",
"msg_date": "Thu, 12 Jul 2001 11:48:27 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: Strangeness in xid allocation / snapshot setup "
},
{
"msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> Isn't spinlock just a few ASM instructions?... on most platforms...\n\nIf we change over to something that supports read vs write locking,\nit's probably going to be rather more than that ... right now, I'm\npretty dissatisfied with the performance of our spinlocks under load.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Jul 2001 15:04:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Strangeness in xid allocation / snapshot setup "
}
] |
[
{
"msg_contents": "> > Isn't spinlock just a few ASM instructions?... on most platforms...\n> \n> If we change over to something that supports read vs write locking,\n> it's probably going to be rather more than that ... right now, I'm\n> pretty dissatisfied with the performance of our spinlocks under load.\n\nWe shouldn't use light locks everywhere. Updating/reading MyProc.xid\nis very good place to use simple spinlocks... or even better mutexes.\n\nVadim\n",
"msg_date": "Thu, 12 Jul 2001 12:11:58 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: Re: Strangeness in xid allocation / snapshot setup "
}
] |
[
{
"msg_contents": "Hello all,\n\n>At the time of creation function body could be parsed and referenced\n>objects stored in system table (or function could be marked as dirty\n>and referenced objects would stored at first compilation and after\n>each subsequent successful after-dirtied-compilation).\n>Isn't it possible for PL/_ANY_L_ too?\n\nThis is what the latest CVS version of pgAdmin does in a limited way:\nhttp://www.greatbridge.org/project/pgadmin/cvs/cvs.php/binaries/readme.html\n\nWhen a function is modified with DROP/CREATE, it is marked dirty.\npgAdmin checks dependencies between functions, triggers and views\nand goes through a complete rebuilding process.\n\nMy database has more than 150 PL/pgSQL functions along with views and triggers.\nA normal human cannot keep track of dependencies by his own means.\n\nDave Page and I added this feature to pgAdmin because we were normal humans\nand could not wait too long. When will dependency tracking be available \nserver-side?\n\nWe are now working on more advanced features. See:\nhttp://www.greatbridge.org/project/pgadmin/cvs/cvs.php/pgadmin/help/todo.html\n\nBest regards,\nJean-Michel POURE\npgAdmin Development Team\n",
"msg_date": "Thu, 12 Jul 2001 21:26:49 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Dependency tracking"
}
] |
[
{
"msg_contents": "\n> I also think we have to leave VACUUM alone and come up with a new name\n> for our light VACUUM. That way, people who do VACUUM at night when no\n> one is on the system can keep doing that, and just add something to run\n> light vacuum periodically during the day.\n\nIf I understood what VACUUM light does, I do not think that people\nwill need to actually do the conventional VACUUM as often anymore.\nI understood, that VACUUM light makes outdated tuple heap space available\nfor reuse, and removes the corresponding index entries.\nIt does not make space available to other tables or the OS,\nbut most other DB's do not do that eighter.\nThe conventional VACUUM would then be something you do as part of a DB \nreorganization (maybe once every month or so).\n\nAndreas\n",
"msg_date": "Fri, 13 Jul 2001 10:19:59 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: [GENERAL] Vacuum and Transactions"
},
{
"msg_contents": "> \n> > I also think we have to leave VACUUM alone and come up with a new name\n> > for our light VACUUM. That way, people who do VACUUM at night when no\n> > one is on the system can keep doing that, and just add something to run\n> > light vacuum periodically during the day.\n> \n> If I understood what VACUUM light does, I do not think that people\n> will need to actually do the conventional VACUUM as often anymore.\n> I understood, that VACUUM light makes outdated tuple heap space available\n> for reuse, and removes the corresponding index entries.\n> It does not make space available to other tables or the OS,\n> but most other DB's do not do that eighter.\n> The conventional VACUUM would then be something you do as part of a DB \n> reorganization (maybe once every month or so).\n\nYes, but in other DB's if you UPDATE all rows in the table, you don't\ndouble the disk space. They also reuse DELETEd space automatically.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Jul 2001 10:26:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: [GENERAL] Vacuum and Transactions"
}
] |
[
{
"msg_contents": "\n> When the system is too heavily loaded (however measured), any further \n> login attempts will fail. What I suggested is, instead of the \n> postmaster accept()ing the connection, why not leave the connection \n> attempt in the queue until we can afford a back end to handle it? \n\nBecause the clients would time out ?\n\n> Then, the argument to listen() will determine how many attempts can \n> be in the queue before the network stack itself rejects them without \n> the postmaster involved.\n\nYou cannot change the argument to listen() at runtime, or are you suggesting\nto close and reopen the socket when maxbackends is reached ? I think \nthat would be nonsense.\n\nI liked the idea of min(MaxBackends, PG_SOMAXCONN), since there is no use in \naccepting more than your total allowed connections concurrently.\n\nAndreas\n",
"msg_date": "Fri, 13 Jul 2001 10:36:13 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > When the system is too heavily loaded (however measured), any further\n> > login attempts will fail. What I suggested is, instead of the\n> > postmaster accept()ing the connection, why not leave the connection\n> > attempt in the queue until we can afford a back end to handle it?\n> \n> Because the clients would time out ?\n> \n> > Then, the argument to listen() will determine how many attempts can\n> > be in the queue before the network stack itself rejects them without\n> > the postmaster involved.\n> \n> You cannot change the argument to listen() at runtime, or are you suggesting\n> to close and reopen the socket when maxbackends is reached ? I think\n> that would be nonsense.\n> \n> I liked the idea of min(MaxBackends, PG_SOMAXCONN), since there is no use in\n> accepting more than your total allowed connections concurrently.\n> \n> Andreas\n\nI have been following this thread and I am confused why the queue argument to\nlisten() has anything to do with Max backends. All the parameter to listen does\nis specify how long a list of sockets open and waiting for connection can be.\nIt has nothing to do with the number of back end sockets which are open.\n\nIf you have a limit of 128 back end connections, and you have 127 of them open.\nA listen with queue size of 128 will still allow 128 sockets to wait for\nconnection before turning others away. \n\nIt should be a parameter based on the time out of a socket connection vs the\nability to answer connection requests within that period of time. \n\nThere are two was to think about this. Either you make this parameter tunable\nto give a proper estimate of the usability of the system, i.e. 
tailor the\nlisten queue parameter to reject sockets when some number of sockets are\nwaiting, or you say no one should ever be denied, accept everyone and let them\ntime out if we are not fast enough.\n\nThis debate could go on, why not make it a parameter in the config file that\ndefaults to some system variable, i.e. SOMAXCONN.\n\nBTW: on linux, the backlog queue parameter is silently truncated to 128 anyway.\n",
"msg_date": "Fri, 13 Jul 2001 07:53:02 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "On Fri, Jul 13, 2001 at 10:36:13AM +0200, Zeugswetter Andreas SB wrote:\n> \n> > When the system is too heavily loaded (however measured), any further \n> > login attempts will fail. What I suggested is, instead of the \n> > postmaster accept()ing the connection, why not leave the connection \n> > attempt in the queue until we can afford a back end to handle it? \n> \n> Because the clients would time out ?\n\nIt takes a long time for half-open connections to time out, by default.\nProbably most clients would time out, themselves, first, if PG took too\nlong to get to them. That would be a Good Thing.\n\nOnce the SOMAXCONN threshold is reached (which would only happen when \nthe system is very heavily loaded, because when it's not then nothing \nstays in the queue for long), new connection attempts would fail \nimmediately, another Good Thing. When the system is very heavily \nloaded, we don't want to spare attention for clients we can't serve.\n\n> > Then, the argument to listen() will determine how many attempts can \n> > be in the queue before the network stack itself rejects them without \n> > the postmaster involved.\n> \n> You cannot change the argument to listen() at runtime, or are you suggesting\n> to close and reopen the socket when maxbackends is reached ? I think \n> that would be nonsense.\n\nOf course that would not work, and indeed nobody suggested it.\n\nIf postmaster behaved a little differently, not accept()ing when\nthe system is too heavily loaded, then it would be reasonable to\ncall listen() (once!) with PG_SOMAXCONN set to (e.g.) N=20. \n\nWhere the system is not too heavily-loaded, the postmaster accept()s\nthe connection attempts from the queue very quickly, and the number\nof half-open connections never builds up to N. 
(This is how PG has\nbeen running already, under light load -- except that on Solaris with \nUnix sockets N has been too small.)\n\nWhen the system *is* heavily loaded, the first N attempts would be \nqueued, and then the OS would automatically reject the rest. This \nis better than accept()ing any number of attempts and then refusing \nto authenticate. The N half-open connections in the queue would be \npicked up by postmaster as existing back ends drop off, or time out \nand give up if that happens too slowly. \n\n> I liked the idea of min(MaxBackends, PG_SOMAXCONN), since there is no\n> use in accepting more than your total allowed connections concurrently.\n\nThat might not have the effect you imagine, where many short-lived\nconnections are being made. In some cases it would mean that clients \nare rejected that could have been served after a very short delay.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 13 Jul 2001 07:40:13 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: SOMAXCONN"
},
{
"msg_contents": "On Fri, Jul 13, 2001 at 07:53:02AM -0400, mlw wrote:\n> Zeugswetter Andreas SB wrote:\n> > I liked the idea of min(MaxBackends, PG_SOMAXCONN), since there is no use in\n> > accepting more than your total allowed connections concurrently.\n> \n> I have been following this thread and I am confused why the queue\n> argument to listen() has anything to do with Max backends. All the\n> parameter to listen does is specify how long a list of sockets open\n> and waiting for connection can be. It has nothing to do with the\n> number of back end sockets which are open.\n\nCorrect.\n\n> If you have a limit of 128 back end connections, and you have 127\n> of them open, a listen with queue size of 128 will still allow 128\n> sockets to wait for connection before turning others away.\n\nCorrect.\n\n> It should be a parameter based on the time out of a socket connection\n> vs the ability to answer connection requests within that period of\n> time.\n\nIt's not really meaningful at all, at present.\n\n> There are two was to think about this. Either you make this parameter\n> tunable to give a proper estimate of the usability of the system, i.e.\n> tailor the listen queue parameter to reject sockets when some number\n> of sockets are waiting, or you say no one should ever be denied,\n> accept everyone and let them time out if we are not fast enough.\n>\n> This debate could go on, why not make it a parameter in the config\n> file that defaults to some system variable, i.e. SOMAXCONN.\n\nWith postmaster's current behavior there is no benefit in setting\nthe listen() argument to anything less than 1000. 
With a small\nchange in postmaster behavior, a tunable system variable becomes\nuseful.\n\nBut using SOMAXCONN blindly is always wrong; that is often 5, which\nis demonstrably too small.\n\n> BTW: on linux, the backlog queue parameter is silently truncated to\n> 128 anyway.\n\nThe 128 limit is common, applied on BSD and Solaris as well.\nIt will probably increase in future releases.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 13 Jul 2001 07:53:29 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "Nathan Myers wrote:\n> > There are two was to think about this. Either you make this parameter\n> > tunable to give a proper estimate of the usability of the system, i.e.\n> > tailor the listen queue parameter to reject sockets when some number\n> > of sockets are waiting, or you say no one should ever be denied,\n> > accept everyone and let them time out if we are not fast enough.\n> >\n> > This debate could go on, why not make it a parameter in the config\n> > file that defaults to some system variable, i.e. SOMAXCONN.\n> \n> With postmaster's current behavior there is no benefit in setting\n> the listen() argument to anything less than 1000. With a small\n> change in postmaster behavior, a tunable system variable becomes\n> useful.\n> \n> But using SOMAXCONN blindly is always wrong; that is often 5, which\n> is demonstrably too small.\n\nIt is rumored that many BSD version are limited to 5.\n> \n> > BTW: on linux, the backlog queue parameter is silently truncated to\n> > 128 anyway.\n> \n> The 128 limit is common, applied on BSD and Solaris as well.\n> It will probably increase in future releases.\n\nThis point I am trying to make is that the parameter passed to listen() is OS\ndependent, on both what it means and its defaults. Trying to tie this to\nmaxbackends is not the right thought process. It has nothing to do, at all,\nwith maxbackends.\n\nPassing listen(5) would probably be sufficient for Postgres. Will there ever be\n5 sockets in the listen() queue prior to \"accept()?\" probably not. SOMAXCONN\nis a system limit, setting a listen() value greater than this, is probably\nsilently adjusted down to the defined SOMAXCONN.\n\nBy making it a parameter, and defaulting to SOMAXCONN, this allows the maximum\nnumber of connections a system can handle, while still allowing the DBA to fine\ntune connection behavior on high load systems.\n",
"msg_date": "Fri, 13 Jul 2001 23:24:39 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Nathan Myers wrote:\n>> But using SOMAXCONN blindly is always wrong; that is often 5, which\n>> is demonstrably too small.\n\n> It is rumored that many BSD version are limited to 5.\n\nBSD systems tend to claim SOMAXCONN = 5 in the header files, but *not*\nto have such a small limit in the kernel. The real step forward that\nwe have made in this discussion is to realize that we cannot trust\n<sys/socket.h> to tell us what the kernel limit actually is.\n\n> Passing listen(5) would probably be sufficient for Postgres.\n\nIt demonstrably is not sufficient. Set it that way in pqcomm.c\nand run the parallel regression tests. Watch them fail.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Jul 2001 00:41:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Nathan Myers wrote:\n> >> But using SOMAXCONN blindly is always wrong; that is often 5, which\n> >> is demonstrably too small.\n> \n> > It is rumored that many BSD version are limited to 5.\n> \n> BSD systems tend to claim SOMAXCONN = 5 in the header files, but *not*\n> to have such a small limit in the kernel. The real step forward that\n> we have made in this discussion is to realize that we cannot trust\n> <sys/socket.h> to tell us what the kernel limit actually is.\n> \n> > Passing listen(5) would probably be sufficient for Postgres.\n> \n> It demonstrably is not sufficient. Set it that way in pqcomm.c\n> and run the parallel regression tests. Watch them fail.\n>\n\nThat's interesting, I would not have guessed that. I have written a number of\nserver applications which can handle, litterally, over a thousand\nconnection/operations a second, which only has a listen(5). (I do have it as a\nconfiguration parameter, but have never seen a time when I have had to change\nit.)\n\nI figured the closest one could come to an expert in all things socket related\nwould have to be the Apache web server source. They have a different take on\nthe listen() parameter:\n\n>>>>> from httpd.h >>>>>>>>>>>\n 402 /* The maximum length of the queue of pending connections, as defined\n 403 * by listen(2). 
Under some systems, it should be increased if you\n 404 * are experiencing a heavy TCP SYN flood attack.\n 405 *\n 406 * It defaults to 511 instead of 512 because some systems store it\n 407 * as an 8-bit datatype; 512 truncated to 8-bits is 0, while 511 is\n 408 * 255 when truncated.\n 409 */\n 410\n 411 #ifndef DEFAULT_LISTENBACKLOG\n 412 #define DEFAULT_LISTENBACKLOG 511\n 413 #endif\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n\nI have not found any other location in which DEFAULT_LISTENBACKLOG is defined,\nbut it is a configuration parameter, and here is what the Apache docs claim:\n\n>>>>>>>>>>>> http://httpd.apache.org/docs/mod/core.html >>>>>>>>>>>>\n\nListenBacklog directive\n\nSyntax: ListenBacklog backlog\nDefault: ListenBacklog 511\nContext: server config\nStatus: Core\nCompatibility: ListenBacklog is only available in Apache versions after 1.2.0. \n\nThe maximum length of the queue of pending connections. Generally no tuning is\nneeded or desired, however on some systems it is desirable to increase this\nwhen under a TCP SYN flood attack. See the backlog parameter to the listen(2)\nsystem call. \n\nThis will often be limited to a smaller number by the operating system. This\nvaries from OS to OS. Also note that many OSes do not use exactly what is\nspecified as the backlog, but use a number based on (but normally larger than)\nwhat is set.\n \n<<<<<<<<<<<<<<<<<<<<<<<\n\nAnyway, why not just do what apache does, set it to some extreme default\nsetting, which even when truncated, is still pretty big, and allow the end user\nto change this value in postgresql.conf.\n",
"msg_date": "Sat, 14 Jul 2001 07:48:26 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Tom Lane wrote:\n>>> Passing listen(5) would probably be sufficient for Postgres.\n>> \n>> It demonstrably is not sufficient. Set it that way in pqcomm.c\n>> and run the parallel regression tests. Watch them fail.\n\n> That's interesting, I would not have guessed that. I have written a number of\n> server applications which can handle, litterally, over a thousand\n> connection/operations a second, which only has a listen(5).\n\nThe problem should be considerably reduced in latest sources, since\nas of a week or three ago, the top postmaster process' outer loop is\nbasically just accept() and fork() --- client authentication is now\nhandled after the fork, instead of before. Still, we now know that\n(a) SOMAXCONN is a lie on many systems, and (b) values as small as 5\nare pushing our luck, even though it might not fail so easily anymore.\n\nThe state of affairs in current sources is that the listen queue\nparameter is MIN(MaxBackends * 2, PG_SOMAXCONN), where PG_SOMAXCONN\nis a constant defined in config.h --- it's 10000, hence a non-factor,\nby default, but could be reduced if you have a kernel that doesn't\ncope well with large listen-queue requests. We probably won't know\nif there are any such systems until we get some field experience with\nthe new code, but we could have \"configure\" select a platform-dependent\nvalue if we find such problems.\n\nI believe that this is fine and doesn't need any further tweaking,\npending field experience. What's still open for discussion is Nathan's\nthought that the postmaster ought to stop issuing accept() calls once\nit has so many children that it will refuse to fork any more. I was\ninitially against that, but on further reflection I think it might be\na good idea after all, because of another recent change related to the\nauthenticate-after-fork change. 
Since the top postmaster doesn't really\nknow which children have become working backends and which are still\nengaged in authentication dialogs, it cannot enforce the MaxBackends\nlimit directly. Instead, MaxBackends is checked when the child process\nis done with authentication and is trying to join the PROC pool in\nshared memory. The postmaster will spawn up to 2 * MaxBackends child\nprocesses before refusing to spawn more --- this allows there to be\nup to MaxBackends children engaged in auth dialog but not yet working\nbackends. (It's reasonable to allow extra children since some may fail\nthe auth dialog, or an extant backend may have quit by the time they\nfinish auth dialog. Whether 2*MaxBackends is the best choice is\ndebatable, but that's what we're using at the moment.)\n\nFurthermore, we intend to install a pretty tight timeout on the overall\ntime spent in auth phase (a few seconds I imagine, although we haven't\nyet discussed that number either).\n\nGiven this setup, if the postmaster has reached its max-children limit\nthen it can be quite certain that at least some of those children will\nquit within approximately the auth timeout interval. Therefore, not\naccept()ing is a state that will probably *not* persist for long enough\nto cause the new clients to timeout. By not accept()ing at a time when\nwe wouldn't fork, we can convert the behavior clients see at peak load\nfrom quick rejection into a short delay before authentication dialog.\n\nOf course, if you are at MaxBackends working backends, then the new\nclient is still going to get a \"too many clients\" error; all we have\naccomplished with the change is to expend a fork() and an authentication\ncycle before issuing the error. 
So if the intent is to reduce overall\nsystem load then this isn't necessarily an improvement.\n\nIIRC, the rationale for using 2*MaxBackends as the maximum child count\nwas to make it unlikely that the postmaster would refuse to fork; given\na short auth timeout it's unlikely that as many as MaxBackends clients\nwill be engaged in auth dialog at any instant. So unless we tighten\nthat max child count considerably, holding off accept() at max child\ncount is unlikely to change the behavior under any but worst-case\nscenarios anyway. And in a worst-case scenario, shedding load by\nrejecting connections quickly is probably just what you want to do.\n\nSo, having thought that through, I'm still of the opinion that holding\noff accept is of little or no benefit to us. But it's not as simple\nas it looks at first glance. Anyone have a different take on what the\nbehavior is likely to be?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Jul 2001 11:38:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SOMAXCONN (was Re: Solaris source code) "
},
{
"msg_contents": "On Sat, Jul 14, 2001 at 11:38:51AM -0400, Tom Lane wrote:\n> \n> The state of affairs in current sources is that the listen queue\n> parameter is MIN(MaxBackends * 2, PG_SOMAXCONN), where PG_SOMAXCONN\n> is a constant defined in config.h --- it's 10000, hence a non-factor,\n> by default, but could be reduced if you have a kernel that doesn't\n> cope well with large listen-queue requests. We probably won't know\n> if there are any such systems until we get some field experience with\n> the new code, but we could have \"configure\" select a platform-dependent\n> value if we find such problems.\n\nConsidering the Apache comment about some systems truncating instead\nof limiting... 10000&0xff is 16. Maybe 10239 would be a better choice, \nor 16383. \n\n> So, having thought that through, I'm still of the opinion that holding\n> off accept is of little or no benefit to us. But it's not as simple\n> as it looks at first glance. Anyone have a different take on what the\n> behavior is likely to be?\n\nAfter doing some more reading, I find that most OSes do not reject\nconnect requests that would exceed the specified backlog; instead,\nthey ignore the connection request and assume the client will retry \nlater. Therefore, it appears cannot use a small backlog to shed load \nunless we assume that clients will time out quickly by themselves.\n\nOTOH, maybe it's reasonable to assume that clients will time out,\nand that in the normal case authentication happens quickly.\n\nThen we can use a small listen() backlog, and never accept() if we\nhave more than MaxBackend back ends. The OS will keep a small queue\ncorresponding to our small backlog, and the clients will do our load \nshedding for us.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Mon, 16 Jul 2001 14:04:32 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code)"
},
{
"msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> Considering the Apache comment about some systems truncating instead\n> of limiting... 10000&0xff is 16. Maybe 10239 would be a better choice, \n> or 16383. \n\nHmm. If the Apache comment is real, then that would not help on those\nsystems. Remember that the actual listen request is going to be\n2*MaxBackends in practically all cases. The only thing that would save\nyou from getting an unexpectedly small backlog parameter in such a case\nis to set PG_SOMAXCONN to 255.\n\nPerhaps we should just do that and not worry about whether the Apache\ninfo is accurate or not. But I'd kind of like to see chapter and verse,\nie, at least one specific system that demonstrably fails to perform the\nclamp-to-255 for itself, before we lobotomize the code that way. ISTM a\nconformant implementation of listen() would limit the given value to 255\nbefore storing it into an 8-bit field, not just lose high order bits.\n\n\n> After doing some more reading, I find that most OSes do not reject\n> connect requests that would exceed the specified backlog; instead,\n> they ignore the connection request and assume the client will retry \n> later. Therefore, it appears cannot use a small backlog to shed load \n> unless we assume that clients will time out quickly by themselves.\n\nHm. newgate is a machine on my local net that's not currently up.\n\n$ time psql -h newgate postgres\npsql: could not connect to server: Connection timed out\n Is the server running on host newgate and accepting\n TCP/IP connections on port 5432?\n\nreal 1m13.33s\nuser 0m0.02s\nsys 0m0.01s\n$\n\nThat's on HPUX 10.20. On an old Linux distro, the same timeout\nseems to be about 21 seconds, which is still pretty long by some\nstandards. Do the TCP specs recommend anything particular about\nno-response-to-SYN timeouts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2001 22:12:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: SOMAXCONN (was Re: Solaris source code) "
}
] |
[
{
"msg_contents": "Hi,\n\nAs promised, here is a patch to remove all support for the (non-functional\nanyway) EXTEND INDEX statement. See earlier emails for more explanations.\n\nNote, this patch makes it as if it never existed. So, if you think some of\nthe code may be useful, now is the time to speak up! :)\n\nSince this patch conflicts with my currently pending patch for partial\nindices, I put it here for review. I will submit a new patch to -patches\nonce the other is in.\n\nhttp://svana.org/kleptog/pgsql/remove-extend.patch\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> It would be nice if someone came up with a certification system that\n> actually separated those who can barely regurgitate what they crammed over\n> the last few weeks from those who command secret ninja networking powers.\n",
"msg_date": "Fri, 13 Jul 2001 23:51:40 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "> Note, this patch makes it as if it never existed. So, if you think some of\n> the code may be useful, now is the time to speak up! :)\n\nShouldn't this conversation be happening on the -hackers list? TIA\n\n - Thomas\n",
"msg_date": "Fri, 13 Jul 2001 15:06:09 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "> > Note, this patch makes it as if it never existed. So, if you think some of\n> > the code may be useful, now is the time to speak up! :)\n> \n> Shouldn't this conversation be happening on the -hackers list? TIA\n\nActually, because it had a patch attached, it should go to patches,\nright?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Jul 2001 12:07:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "> > > Note, this patch makes it as if it never existed. So, if you think some of\n> > > the code may be useful, now is the time to speak up! :)\n> > Shouldn't this conversation be happening on the -hackers list? TIA\n> Actually, because it had a patch attached, it should go to patches,\n> right?\n\nimho, no. Because there needs to be a discussion about whether to remove\ncode from the tree, and whether that code may be useful for something\nelse.\n\n-patches is designed to take actual patch files (to reduce bandwidth on\n-hackers) but not to host planning discussions. If this had been a\nsimple patch without feature changes or other larger ramifications, then\nit is more clearly a patch-only topic.\n\n - Thomas\n",
"msg_date": "Fri, 13 Jul 2001 20:56:12 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "> > > > Note, this patch makes it as if it never existed. So, if you think some of\n> > > > the code may be useful, now is the time to speak up! :)\n> > > Shouldn't this conversation be happening on the -hackers list? TIA\n> > Actually, because it had a patch attached, it should go to patches,\n> > right?\n> \n> imho, no. Because there needs to be a discussion about whether to remove\n> code from the tree, and whether that code may be useful for something\n> else.\n> \n> -patches is designed to take actual patch files (to reduce bandwidth on\n> -hackers) but not to host planning discussions. If this had been a\n> simple patch without feature changes or other larger ramifications, then\n> it is more clearly a patch-only topic.\n\nWhat some people do is split the patch text with the actual patch. I\ndon't like it sometimes, but I hesistate to post big patches to hachers,\neven if they require discussion.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Jul 2001 17:00:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "> > > > Note, this patch makes it as if it never existed. So, if you think some of\n> > > > the code may be useful, now is the time to speak up! :)\n> > > Shouldn't this conversation be happening on the -hackers list? TIA\n> > Actually, because it had a patch attached, it should go to patches,\n> > right?\n> \n> imho, no. Because there needs to be a discussion about whether to remove\n> code from the tree, and whether that code may be useful for something\n> else.\n> \n> -patches is designed to take actual patch files (to reduce bandwidth on\n> -hackers) but not to host planning discussions. If this had been a\n> simple patch without feature changes or other larger ramifications, then\n> it is more clearly a patch-only topic.\n\nSorry. I thought the posting had an attached patch. I now see it is a\nURL. I was wrong.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Jul 2001 17:13:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "Let's drop the meta-discussions and cut to the chase: given that we are\nabout to re-enable partial indexes, should we try to make EXTEND INDEX\nwork too, or just remove it?\n\nThe idea of EXTEND INDEX is to allow replacement of a partial index's\npredicate. However, the implementation only supports weakening the\npredicate, ie, it can only add tuples to the index not remove them.\nThe index's predicate is actually turned into (old predicate OR new\npredicate), which seems counterintuitive to me.\n\nI am not sure that EXTEND INDEX is actually broken. I originally\nthought that the new predicate would replace the old, which would be\nwrong --- but I now see the OR-ing behavior in UpdateIndexPredicate, so\nit's not necessarily busted. The question is whether the feature has\nenough usefulness to be worth supporting and documenting forevermore.\nYou can accomplish the same things, and more, by dropping the index and\nbuilding a new one; what's more, at least in the btree case building a\nnew one is likely to be much faster (the EXTEND code has to do retail\ninsertion, not a SORT-based build).\n\nSo, is it worth expending any effort on EXTEND INDEX? It seems to me\nthat it's a fair amount of code bulk and complexity for very very\nmarginal return. I'd like to simplify the index AM API by getting\nrid of the concept.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Jul 2001 17:49:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCH] To remove EXTEND INDEX "
},
{
"msg_contents": "> Let's drop the meta-discussions and cut to the chase: given that we are\n> about to re-enable partial indexes, should we try to make EXTEND INDEX\n> work too, or just remove it?\n> \n> The idea of EXTEND INDEX is to allow replacement of a partial index's\n> predicate. However, the implementation only supports weakening the\n> predicate, ie, it can only add tuples to the index not remove them.\n> The index's predicate is actually turned into (old predicate OR new\n> predicate), which seems counterintuitive to me.\n> \n> I am not sure that EXTEND INDEX is actually broken. I originally\n> thought that the new predicate would replace the old, which would be\n> wrong --- but I now see the OR-ing behavior in UpdateIndexPredicate, so\n> it's not necessarily busted. The question is whether the feature has\n> enough usefulness to be worth supporting and documenting forevermore.\n> You can accomplish the same things, and more, by dropping the index and\n> building a new one; what's more, at least in the btree case building a\n> new one is likely to be much faster (the EXTEND code has to do retail\n> insertion, not a SORT-based build).\n> \n> So, is it worth expending any effort on EXTEND INDEX? It seems to me\n> that it's a fair amount of code bulk and complexity for very very\n> marginal return. I'd like to simplify the index AM API by getting\n> rid of the concept.\n\nWe don't let people add columns to an existing index so I don't see why\nwe should have EXTEND INDEX unless index twiddling is more common with\npartial indexes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Jul 2001 18:34:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "On Fri, Jul 13, 2001 at 05:49:56PM -0400, Tom Lane wrote:\n> Let's drop the meta-discussions and cut to the chase: given that we are\n> about to re-enable partial indexes, should we try to make EXTEND INDEX\n> work too, or just remove it?\n\nJust a few clarifications:\n\n* The reason it didn't go to -hackers was because I wasn't subscribed to it\nand hence couldn't post to it. The only reason I can now is because I\nsubscribed (nopost) about 2 minutes ago.\n\n* I discussed this with Tom Lane on -general a few days ago. I'm not sure\nhow many people saw that though. Are most of the people on -hackers\nsubscribed to -general as well?\n\n* I agree with Tom's assertion that it's an awful lot of complexity for such\na marginal gain. Look at the size of the patch and the fact that it has all\nbeen useless for the last few years.\n\n* I didn't send it to -patches because it's not ready yet.\n\n* Only posted a URL, not the patch itself. Sorry for the confusion.\n\nTom actually suggested doing this at the same time as re-enabling partial\nindices but I favoured a separate patch considering the large number of\nscattered changes.\n\nAnyway, is there a concensus, or shall I forget the whole thing?\n\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> It would be nice if someone came up with a certification system that\n> actually separated those who can barely regurgitate what they crammed over\n> the last few weeks from those who command secret ninja networking powers.\n",
"msg_date": "Sat, 14 Jul 2001 10:34:06 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "On Fri, Jul 13, 2001 at 06:34:22PM -0400, Bruce Momjian wrote:\n> > Let's drop the meta-discussions and cut to the chase: given that we are\n> > about to re-enable partial indexes, should we try to make EXTEND INDEX\n> > work too, or just remove it?\n> \n> We don't let people add columns to an existing index so I don't see why\n> we should have EXTEND INDEX unless index twiddling is more common with\n> partial indexes.\n\nWe don't allow people currently to fiddle with indices at all. I don't\nunderstand the origin of EXTEND INDEX since I can't think of a situation\nwhere it would actually be useful.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> It would be nice if someone came up with a certification system that\n> actually separated those who can barely regurgitate what they crammed over\n> the last few weeks from those who command secret ninja networking powers.\n",
"msg_date": "Sat, 14 Jul 2001 10:36:28 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "> * I agree with Tom's assertion that it's an awful lot of complexity for such\n> a marginal gain. Look at the size of the patch and the fact that it has all\n> been useless for the last few years.\n> \n> * I didn't send it to -patches because it's not ready yet.\n> \n> * Only posted a URL, not the patch itself. Sorry for the confusion.\n> \n> Tom actually suggested doing this at the same time as re-enabling partial\n> indices but I favoured a separate patch considering the large number of\n> scattered changes.\n> \n> Anyway, is there a concensus, or shall I forget the whole thing?\n\nI vote for removing the feature. Removing stuff of doubtful usefulness\nis a big gain.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Jul 2001 20:57:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "\n> We don't let people add columns to an existing index so I don't see why\n> we should have EXTEND INDEX unless index twiddling is more common with\n> partial indexes.\n> \n\nNothing is common with partial indexes at the moment -- the feature is\nnot currently implemented, and I don't think other databases adopted the\nidea.\n\n From memory (*), Stonebraker's original intention for partial indexes\nwas that they would be used with really large tables in a situation\nwhere you might want to process the table incrementally, a chunk at a\ntime, an example might be archiving historical data based on date. You\nonly want to archive information older than a certain date, so you use a\npartial index predicated on t < t_0. You then do your archive processing\non those tuples, delete the tuples from the table, and extend the\npredicate forward by an interval in anticipation of the next archiving\ncycle. \n\nThe example is not perfect, but I think that it indicates what the\noriginal authors were thinking. You also have to ask yourself when \nwould this approach be better than just indexing the whole table, and\nusing the predicate in the query qualification. \n\nBernie\n\n(*) The partial indexes are mentioned briefly in one of the Stonebraker\npapers. Sorry, I don't have an exact reference, but it is probably in\none of the Stonebraker publications referenced by\n\n http://techdocs.postgresql.org/oresources.php\n",
"msg_date": "Sat, 14 Jul 2001 09:45:50 -0400",
"msg_from": "Bernard Frankpitt <frankpit@erols.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCH] To remove EXTEND INDEX"
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> Hi,\n> \n> As promised, here is a patch to remove all support for the (non-functional\n> anyway) EXTEND INDEX statement. See earlier emails for more explanations.\n> \n> Note, this patch makes it as if it never existed. So, if you think some of\n> the code may be useful, now is the time to speak up! :)\n> \n> Since this patch conflicts with my currently pending patch for partial\n> indices, I put it here for review. I will submit a new patch to -patches\n> once the other is in.\n> \n> http://svana.org/kleptog/pgsql/remove-extend.patch\n> -- \n> Martijn van Oosterhout <kleptog@svana.org>\n> http://svana.org/kleptog/\n> > It would be nice if someone came up with a certification system that\n> > actually separated those who can barely regurgitate what they crammed over\n> > the last few weeks from those who command secret ninja networking powers.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 15 Jul 2001 10:35:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] To remove EXTEND INDEX"
}
] |
[
{
"msg_contents": "> > The conventional VACUUM would then be something you do as part of a DB \n> > reorganization (maybe once every month or so).\n> \n> Yes, but in other DB's if you UPDATE all rows in the table, you don't\n> double the disk space.\n\nSure, but what is wrong with keeping the space allocated for the next \n\"UPDATE all rows\", if that is something the application needs to do frequently ?\nPostgreSQL needs more space on disc, but we knew that already :-)\n\nAndreas\n",
"msg_date": "Fri, 13 Jul 2001 17:46:43 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: AW: Re: [GENERAL] Vacuum and Transactions"
},
{
"msg_contents": "> > > The conventional VACUUM would then be something you do as part of a DB\n> > > reorganization (maybe once every month or so).\n> >\n> > Yes, but in other DB's if you UPDATE all rows in the table, you don't\n> > double the disk space.\n> \n> Sure, but what is wrong with keeping the space allocated for\n> the next \"UPDATE all rows\", if that is something the application\n> needs to do frequently ? PostgreSQL needs more space on disc,\n> but we knew that already :-)\n\nIn many cases, a VACUUM will not have been run before more space is\nneeded in the table so you will get ever-increasing sizes until a full\nVACUUM. Only in an optimal light VACUUM state would a table that gets\ncontinually updated _not_ continue to grow.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 13 Jul 2001 12:10:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: Re: [GENERAL] Vacuum and Transactions"
}
] |
[
{
"msg_contents": "Hello,\n\nI have a problem with rules in postgres, it may be a bug, or maybe I'm doing \nsomething wrong. I'm running version 7.1.2 on a freebsd 4.3 box.\n\nHere is my table:\n\nCREATE TABLE customer (\n cono integer not null,\n Name varchar,\n ssn varchar(10),\n PRIMARY KEY (cono)\n);\n\nHere is the rule:\n\nCREATE RULE constraint_customer_ssn_insert\nAS ON INSERT\nTO customer\nWHERE NOT new.ssn IS NULL\nDO INSTEAD\nINSERT INTO customer (cono,name) VALUES (new.cono,new.name);\n\nWhen I execute \"insert into customer values (1,'bogus',null);\" the result \n\"ERROR: query rewritten 10 times, may contain cycles\" appears.\n\nIs this supposed to trigger my rule? The condition is not fulfilled, the ssn \nvalue is null in the insert query. To me it seems like the where clause is \nskipped somehow...\n\nCan anybody help me find out why?\n\nThanks,\nTobias Hermansson,\nMSc Student,\nUniversity of Skövde, Sweden\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n",
"msg_date": "Fri, 13 Jul 2001 18:52:14 -0000",
"msg_from": "\"Tobias Hermansson\" <tobhe_nospm@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Problem with rules and conditions"
},
{
"msg_contents": "Tobias Hermansson wrote:\n> Hello,\n>\n> I have a problem with rules in postgres, it may be a bug, or maybe I'm doing\n> something wrong. I'm running version 7.1.2 on a freebsd 4.3 box.\n>\n> Here is my table:\n>\n> CREATE TABLE customer (\n> cono integer not null,\n> Name varchar,\n> ssn varchar(10),\n> PRIMARY KEY (cono)\n> );\n>\n> Here is the rule:\n>\n> CREATE RULE constraint_customer_ssn_insert\n> AS ON INSERT\n> TO customer\n> WHERE NOT new.ssn IS NULL\n> DO INSTEAD\n> INSERT INTO customer (cono,name) VALUES (new.cono,new.name);\n>\n> When I execute \"insert into customer values (1,'bogus',null);\" the result is\n> \"ERROR: query rewritten 10 times, may contain cycles\" is appeared.\n>\n> Is this suppose to trigger my rule? The condition is not fullfilled, the ssn\n> value is null in the insert query. To me it seems like the where clause is\n> skipped somehow...\n>\n> Can anybody help me find out why?\n\n So you always want to set customer.ssn to NULL on insert,\n right?\n\n You cannot have a rule action that does the same operation\n (INSERT) on the same table (customer). This triggers the same\n rule to get fired again, and that's an endless *rewrite*\n loop. Note that the rewriting doesn't look at the values, it\n always splits the parsetree in your above rule and has to\n apply the same rule on the new query again.\n\n Use a trigger instead:\n\n CREATE FUNCTION cust_ssn_ins () RETURNS opaque AS '\n BEGIN\n NEW.ssn := NULL;\n RETURN NEW;\n END;'\n LANGUAGE 'plpgsql';\n\n CREATE TRIGGER cust_ssn_ins BEFORE INSERT ON customer\n FOR EACH ROW EXECUTE PROCEDURE cust_ssn_ins();\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Sun, 15 Jul 2001 08:08:01 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with rules and conditions"
}
] |
[
{
"msg_contents": "Since I am about to add a \"bulk delete\" routine to the index access\nmethod APIs for concurrent VACUUM, I need to add a column to pg_am\nto define the associated procedure for each index AM. This seems like\na fine time to clean up some of the other outstanding TODO items for\npg_am:\n\n1. Add boolean columns that indicate the following for each AM:\n\t* Does it support UNIQUE indexes?\n\t* Does it support multicolumn indexes?\n\t* Does it handle its own locking (as opposed to expecting\n\t the executor to obtain an index-wide lock)?\nThis will eliminate ugly hardcoded tests on index AM oid's in various\nplaces.\n\n2. Remove the \"deprecated\" columns, which aren't doing anything except\nwasting space.\n\n3. Alter the index_build code so that we don't have duplicate code in\neach index AM for scanning the parent relation. I'm envisioning that\nindex.c would provide a routine IndexBuildHeapScan() that does the basic\nheap scan, testing of partial-index predicate, etc, and then calls back\nan index-AM-specific routine (which it's handed as a function pointer)\nfor each tuple that should be added to the index. A void pointer would\nalso be passed through to let the callback routine have access to\nworking state of the AM-specific index_build procedure.\n(IndexBuildHeapScan would replace the currently-unused DefaultBuild\nroutine in index.c, which is mostly the same code it needs anyway.)\nThe index AM's index_build procedure would do initial setup, call\nIndexBuildHeapScan, and then do any finishing-up processing needed.\n\n\nNote that this doesn't address Oleg's concerns about haskeytype,\nlossiness, etc. AFAICS those issues are not related to the contents\nof pg_am. Later on, I am going to have some proposals for altering\npg_opclass and related tables to deal with those issues...\n\nComments? Any other festering problems in this immediate area?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Jul 2001 18:36:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Planned changes to pg_am catalog"
},
{
"msg_contents": "Is there any way to backtrack from an OID to tell what table included that \nrow (like some secret incantation from the system tables)?\n--\nNaomi Walker\nChief Information Officer\nEldorado Computing, Inc.\n602-604-3100 ext 242 \n\n",
"msg_date": "Fri, 13 Jul 2001 15:59:11 -0700",
"msg_from": "Naomi Walker <nwalker@eldocomp.com>",
"msg_from_op": false,
"msg_subject": "OID question "
},
{
"msg_contents": "Naomi Walker <nwalker@eldocomp.com> writes:\n> Is there any way to backtrack from an OID to tell what table included that \n> row (like some secret incantation from the system tables)?\n\nNope, sorry. There's very little magic about OIDs at all; they're just\nvalues from a sequence.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Jul 2001 19:19:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID question "
},
{
"msg_contents": "On Fri, 13 Jul 2001, Tom Lane wrote:\n\n> Since I am about to add a \"bulk delete\" routine to the index access\n> method APIs for concurrent VACUUM, I need to add a column to pg_am\n> to define the associated procedure for each index AM. This seems like\n> a fine time to clean up some of the other outstanding TODO items for\n> pg_am:\n>\n> Note that this doesn't address Oleg's concerns about haskeytype,\n> lossiness, etc. AFAICS those issues are not related to the contents\n> of pg_am. Later on, I am going to have some proposals for altering\n> pg_opclass and related tables to deal with those issues...\n\nAny chance you'd untie a knot for our development in 7.2 development\ncycle ? Our code for multikey GiST, Btree is more or less complete\nand work with ugly workaround, and the only thing we need is a\nsolution of the problem with index_formtuple.\n\n>\n> Comments? Any other festering problems in this immediate area?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Sat, 14 Jul 2001 16:39:06 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Planned changes to pg_am catalog"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Any chance you'd untie a knot for our development in 7.2 development\n> cycle ?\n\nI am trying to focus on getting concurrent VACUUM done, because I think\nthat's a \"must do\" for 7.2. I hope to have some time during August to\ndeal with your GIST issues, but they are definitely lower down on the\npriority list for me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Jul 2001 11:41:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Planned changes to pg_am catalog "
},
{
"msg_contents": "... however, if you want to do some of the legwork yourself, here are\nthe ideas I had about what to do:\n\npg_opclass should have, not just one row for each distinct opclass name,\nbut one row for each supported combination of index AM and opclass name.\nDoing it this way would allow us to put additional info in pg_opclass\nrows --- right now, they're not really able to carry much information.\nThe particular bit of info I want to add is a \"keytype\" column. If this\nis not InvalidOid then it gives the OID of the index column datatype to\nbe used when this opclass is selected. For keytype to be different from\ndata type, the amproc entries associated with the opclass would need to\ninclude a conversion routine to produce the index value given the input\ndata columns --- ie, what the GIST code calls a compression routine.\n(In essence, this would be a form of functional index, no?) Possibly\npg_opclass should also include the amprocnum of the conversion routine;\nnot sure how that ought to be handled.\n\nNote that this change would have a number of implications for the\nindexing of not only pg_opclass, but pg_amop and pg_amproc as well.\nIn particular, pg_amop could lose its amopid column, and pg_amproc\nits amid column, since the opclass OID would be sufficient to indicate\nwhich index AM is meant for any row in these tables. I have not worked\nout all the details, but I believe that these tables would become a lot\nmore understandable this way.\n\nAs for lossiness, I'm inclined to remove that column from pg_index\naltogether. Instead, it should be a column in pg_amop, indicating that\nan index must be treated as lossy *for a particular operator in a\nparticular opclass*. Per previous discussion, this is the right level\nfor the concept. AFAIR, we could drop the WITH clause from CREATE INDEX\naltogether if we did this, which I think is the right thing --- the user\nshould not be responsible for telling the system the properties of an\nindex type and opclass.\n\nIf you have time to start working out the details, that'd be great.\nI won't have time for it before mid-August probably.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Jul 2001 13:01:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Planned changes to pg_am catalog "
},
{
"msg_contents": "> Note that this doesn't address Oleg's concerns about haskeytype,\n> lossiness, etc. AFAICS those issues are not related to the contents\n> of pg_am. Later on, I am going to have some proposals for altering\n> pg_opclass and related tables to deal with those issues...\n>\n> Comments? Any other festering problems in this immediate area?\n\nAs part of my DROP CONSTRAINT stuff I've been fiddling with, I've found it\nnecessary to write an 'IsIndex' function. At the moment, all it does is\nreturn true if the named index exists on the named relation and is unique\n(or primary, or neither, or any).\n\nI think it would be very nice to have an all-purpose function with a\ndefinition something like this:\n\nbool IsIndex(Relation rel, const char *indname, int type, List attrs);\n\nWhere type could be:\n\n0 - any\n1 - normal\n2 - unique\n3 - primary\n\nAnd attrs, if not null, indicates that true should only be returned if the\nindex is over the given list of attributes (in the given order).\n\nI guess the function would assume that the necessary lock is acquired on the\nrelation from outside the function.\n\nI think there's _lots_ of places in the code where index existence checks\nare performed and this could prevent vast code-duplication...\n\nChris\n\n",
"msg_date": "Mon, 16 Jul 2001 09:55:07 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Planned changes to pg_am catalog"
},
{
"msg_contents": "On Sat, 14 Jul 2001, Tom Lane wrote:\n\n> ... however, if you want to do some of the legwork yourself, here are\n> the ideas I had about what to do:\n\nOK. We'll dig into problem in august. At least we'll try.\nHow many possible problems would arise after changing of pg_opclass ?\nDoes existing code will handle this change somewhat automagically\nor we have to find and modify relevant code ?\n\n>\n> pg_opclass should have, not just one row for each distinct opclass name,\n> but one row for each supported combination of index AM and opclass name.\n> Doing it this way would allow us to put additional info in pg_opclass\n> rows --- right now, they're not really able to carry much information.\n> The particular bit of info I want to add is a \"keytype\" column. If this\n> is not InvalidOid then it gives the OID of the index column datatype to\n> be used when this opclass is selected. For keytype to be different from\n> data type, the amproc entries associated with the opclass would need to\n> include a conversion routine to produce the index value given the input\n> data columns --- ie, what the GIST code calls a compression routine.\n> (In essence, this would be a form of functional index, no?) Possibly\n> pg_opclass should also include the amprocnum of the conversion routine;\n> not sure how that ought to be handled.\n\ncompress/decompress isn't a type conversion. for example,\ngist__int*_ops. indexed values and keytype are both int4 one dimensional\narrays and compress/decompress in this case do some real work.\n\n\n>\n> Note that this change would have a number of implications for the\n> indexing of not only pg_opclass, but pg_amop and pg_amproc as well.\n> In particular, pg_amop could lose its amopid column, and pg_amproc\n> its amid column, since the opclass OID would be sufficient to indicate\n> which index AM is meant for any row in these tables. I have not worked\n> out all the details, but I believe that these tables would become a lot\n> more understandable this way.\n>\n> As for lossiness, I'm inclined to remove that column from pg_index\n> altogether. Instead, it should be a column in pg_amop, indicating that\n> an index must be treated as lossy *for a particular operator in a\n> particular opclass*. Per previous discussion, this is the right level\n> for the concept. AFAIR, we could drop the WITH clause from CREATE INDEX\n> altogether if we did this, which I think is the right thing --- the user\n> should not be responsible for telling the system the properties of an\n> index type and opclass.\n>\n> If you have time to start working out the details, that'd be great.\n> I won't have time for it before mid-August probably.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 16 Jul 2001 17:46:49 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Planned changes to pg_am catalog "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> How many possible problems would arise after changing of pg_opclass ?\n> Does existing code will handle this change somewhat automagically\n> or we have to find and modify relevant code ?\n\nThere's a fair amount of code that would need to be touched. One thing\nI realized just last night is that some routines use the tables to ask\nquestions like \"is this operator OID a member of any btree opclass, and\nif so which opclass and strategy number?\" This is a relatively simple\nsequential scan over the pg_amop table at the moment. But if the amid\ncolumn were removed, it'd require a join with pg_opclass, which might be\ngood from the point of view of normalization theory but is a bit of a\npain in the neck to program in low-level code. It might also be nice if\nwe could use an index instead of a seq scan (although pg_amop is not so\nlarge that this is essential). So all the places that touch these\ntables need to be identified, and a design invented that doesn't make\nany of them unreasonably complex.\n\nPossibly we should leave the amid column in pg_amop, ie, deliberately\nkeep the tables unnormalized, to make some of these lookups easier.\n\n> compress/decompress isn't a type conversion. for example,\n> gist__int*_ops. indexed values and keytype are both int4 one dimensional\n> arrays and compress/decompress in this case do some real work.\n\nOkay, so the presence of a non-null keytype field should indicate that a\nconversion routine is to be invoked, even if it's the same type as the\nunderlying datatype.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2001 11:09:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Planned changes to pg_am catalog "
},
{
"msg_contents": "\nIs this all addressed?\n\n> On Sat, 14 Jul 2001, Tom Lane wrote:\n> \n> > ... however, if you want to do some of the legwork yourself, here are\n> > the ideas I had about what to do:\n> \n> OK. We'll dig into problem in august. At least we'll try.\n> How many possible problems would arise after changing of pg_opclass ?\n> Does existing code will handle this change somewhat automagically\n> or we have to find and modify relevant code ?\n> \n> >\n> > pg_opclass should have, not just one row for each distinct opclass name,\n> > but one row for each supported combination of index AM and opclass name.\n> > Doing it this way would allow us to put additional info in pg_opclass\n> > rows --- right now, they're not really able to carry much information.\n> > The particular bit of info I want to add is a \"keytype\" column. If this\n> > is not InvalidOid then it gives the OID of the index column datatype to\n> > be used when this opclass is selected. For keytype to be different from\n> > data type, the amproc entries associated with the opclass would need to\n> > include a conversion routine to produce the index value given the input\n> > data columns --- ie, what the GIST code calls a compression routine.\n> > (In essence, this would be a form of functional index, no?) Possibly\n> > pg_opclass should also include the amprocnum of the conversion routine;\n> > not sure how that ought to be handled.\n> \n> compress/decompress isn't a type conversion. for example,\n> gist__int*_ops. indexed values and keytype are both int4 one dimensional\n> arrays and compress/decompress in this case do some real work.\n> \n> \n> >\n> > Note that this change would have a number of implications for the\n> > indexing of not only pg_opclass, but pg_amop and pg_amproc as well.\n> > In particular, pg_amop could lose its amopid column, and pg_amproc\n> > its amid column, since the opclass OID would be sufficient to indicate\n> > which index AM is meant for any row in these tables. I have not worked\n> > out all the details, but I believe that these tables would become a lot\n> > more understandable this way.\n> >\n> > As for lossiness, I'm inclined to remove that column from pg_index\n> > altogether. Instead, it should be a column in pg_amop, indicating that\n> > an index must be treated as lossy *for a particular operator in a\n> > particular opclass*. Per previous discussion, this is the right level\n> > for the concept. AFAIR, we could drop the WITH clause from CREATE INDEX\n> > altogether if we did this, which I think is the right thing --- the user\n> > should not be responsible for telling the system the properties of an\n> > index type and opclass.\n> >\n> > If you have time to start working out the details, that'd be great.\n> > I won't have time for it before mid-August probably.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Sep 2001 00:55:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Planned changes to pg_am catalog"
}
] |
[
{
"msg_contents": "I notice that the query executor currently has a lot of switch statements on\nthe type of node it is descending to. This means you get a call tree\nlike:\n\nExecProcNode\n ExecNestLoop\n ExecProcNode\n ExecMergeJoin\n ...\n\nWouldn't it be nicer if the Plan had access to function pointers that\nalready referred to the right function? So instead of:\n\nresult = ExecProcNode( a, b )\n\nyou get:\n\na->procs.exec( b );\n\nIt compresses the call tree down a bit. However, I'm not sure if it has many\nbenefits other than maintainability.\n\nOTOH, you could keep ExecProcNode and just replace the switch with a\nfunction call.\n\nAny thoughts?\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> It would be nice if someone came up with a certification system that\n> actually separated those who can barely regurgitate what they crammed over\n> the last few weeks from those who command secret ninja networking powers.\n",
"msg_date": "Sat, 14 Jul 2001 12:06:52 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": true,
"msg_subject": "Radical suggestion for plan executor?"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@svana.org> writes:\n> [ replace switch statements with function pointers ]\n\nI've built systems both ways, and I can't say that I find any real\ngain in transparency either way. I'm not excited about modifying\nPostgres this way. Function pointers have some definite downsides:\ndebuggers can't always step through them, source code analysis tools\ntend not to understand them too well either, etc etc.\n\nIf we were using C++ then the tradeoffs would be different, but\nthis system is just plain C...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Jul 2001 00:53:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Radical suggestion for plan executor? "
}
] |
[
{
"msg_contents": "Where do I find out how to compile and run PostgreSQL under WindowsNT?\nThought I saw something about pre-built binaries available somewhere.\n\nThanks\n\n",
"msg_date": "Sat, 14 Jul 2001 22:11:38 -0400",
"msg_from": "\"P. Dwayne Miller\" <dmiller@espgroup.net>",
"msg_from_op": true,
"msg_subject": "Building PostgreSQL on WindowsNT"
}
] |
[
{
"msg_contents": "I have committed changes for an item in TODO:\n\n* Make n of CHAR(n)/VARCHAR(n) the number of letters, not bytes\n\nPlease let me know if there is any problem.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 15 Jul 2001 20:21:15 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "multibyte enhancement"
}
] |
[
{
"msg_contents": "Those of you who wanted to help translating the messages of PostgreSQL\nprograms and libraries, you can get started now. I've put up a page\nexplaining things a bit, with links to pages that explain things a bit\nmore, at\n\nhttp://www.ca.postgresql.org/~petere/nls.html\n\nPlease arrange yourselves with other volunteering speakers of your\nlanguage. Results should be sent to the pgsql-patches list.\n\nYou have a few days to ask me questions about this, then I'll be off on\nvacation and looking forward to a lot of progress when I get back. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 16 Jul 2001 00:13:44 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Translators wanted"
},
{
"msg_contents": "Hi all,\n\n----- Original Message ----- \nFrom: Peter Eisentraut <peter_e@gmx.net>\nTo: <pgsql-general@postgresql.org>\nSent: Sunday, July 15, 2001 6:13 PM\n\n\n> Those of you who wanted to help translating the messages of PostgreSQL\n> programs and libraries, you can get started now. I've put up a page\n> explaining things a bit, with links to pages that explain things a bit\n> more, at\n> \n> http://www.ca.postgresql.org/~petere/nls.html\n> \n> Please arrange yourselves with other volunteering speakers of your\n> language. Results should be sent to the pgsql-patches list.\n\nAre there people working on the translation into the Russian language?\nIf yes, then what messages are you working on and what encoding are you using?\nI can start translating the messages, just want to make sure so that we \ndon't duplicate the effort.\n\nS.\n\n\n",
"msg_date": "Sun, 15 Jul 2001 18:55:57 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Translators wanted"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> You have a few days to ask me questions about this, then I'll be off on\n> vacation and looking forward to a lot of progress when I get back. ;-)\n\nWhat's the procedure for updating the NLS files when messages change in\nthe source text? I think I've already modified several dozen backend\nerror messages from where they were as of 3 June (the mod date of\nsrc/backend/po/de.po).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Jul 2001 19:09:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Translators wanted "
},
{
"msg_contents": "On Sun, 15 Jul 2001, Serguei Mokhov wrote:\n\n> Hi all,\n>\n>\n> Are there people working on the translation into the Russian language?\n> If yes, then what messages are you working on and what encoding are you using?\n> I can start translating the messages, just want to make sure so that we\n> don't duplicate the effort.\n\nGo ahead. I have no time.\n\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 16 Jul 2001 09:27:32 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Translators wanted"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> Please arrange yourselves with other volunteering speakers of your\n> language. Results should be sent to the pgsql-patches list.\n\nAny other Italian-speaking willing to do the job? I fear I won't have a\nlot of time to allocate even if I'm very interested.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Mon, 16 Jul 2001 12:32:36 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Translators wanted"
},
{
"msg_contents": "On Monday, 16 July 2001 at 00:13, Peter Eisentraut wrote:\n> Those of you who wanted to help translating the messages of PostgreSQL\n> programs and libraries, you can get started now. I've put up a page\n> explaining things a bit, with links to pages that explain things a bit\n> more, at\n>\n> http://www.ca.postgresql.org/~petere/nls.html\n>\n\n\tHi Peter... I can help you translate into Spanish; in the next few days I'll try to get some time to do so...\n\n\tI'll contact you then...\n",
"msg_date": "Mon, 16 Jul 2001 20:10:43 +0200",
"msg_from": "Víctor Romero <romero@kde.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Translators wanted"
},
{
"msg_contents": "Tom Lane writes:\n\n> What's the procedure for updating the NLS files when messages change in\n> the source text?\n\ngmake update-po\n\nDevelopers are not expected to do that. Translators should do it at their\nconvenience.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 16 Jul 2001 21:54:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Translators wanted "
},
{
"msg_contents": "Serguei Mokhov writes:\n\n> Are there people working on the translation into the Russian language?\n> If yes, then what messages are you working on and what encoding are you using?\n> I can start translating the messages, just want to make sure so that we\n> don't duplicate the effort.\n\nUse the koi8-r encoding unless you have strong reasons against it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 16 Jul 2001 22:03:50 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Translators wanted"
},
{
"msg_contents": "I'll try to do the Chinese (simplified) part, but may be a little bit \nslow; any other guys interested?\n\n regards laser\n----- Original Message ----- \nFrom: \"Peter Eisentraut\" <peter_e@gmx.net>\nTo: <pgsql-general@postgresql.org>\nSent: Monday, July 16, 2001 6:13 AM\nSubject: [HACKERS] Translators wanted\n\n\n> Those of you who wanted to help translating the messages of PostgreSQL\n> programs and libraries, you can get started now. I've put up a page\n> explaining things a bit, with links to pages that explain things a bit\n> more, at\n> \n> http://www.ca.postgresql.org/~petere/nls.html\n> \n> Please arrange yourselves with other volunteering speakers of your\n> language. Results should be sent to the pgsql-patches list.\n> \n> You have a few days to ask me questions about this, then I'll be off on\n> vacation and looking forward to a lot of progress when I get back. ;-)\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n",
"msg_date": "Tue, 17 Jul 2001 23:34:40 +0800",
"msg_from": "\"Weiping He\" <laser@zhengmai.com.cn>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Translators wanted"
},
{
"msg_contents": "On Tue, Jul 17, 2001 at 11:34:40PM +0800, Weiping He wrote:\n> I'll try to do the Chinese (simplified) part, but may be a little bit \n> slow; any other guys interested?\n\n And I'll try to do the Czech part. And of course --> any other guys \ninterested?\n\n\t\t\t\tKarel\n\n\nPS. ... great work Peter!\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 17 Jul 2001 17:52:05 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Translators wanted"
}
] |
[
{
"msg_contents": "Hi everybody,\n\nI got this problem while trying to work with WebLogic and PostgreSQL, storing images as byte arrays.\n\nFirst, I used JBOSS-2.2.2 as an application server and Postgresql-7.0.3 as a database server to run one of my Java enterprise applications. There I used the \"OID\" data type to store images, and it worked fine with the above combination. I used jdbc7.0-1.2.jar as the PostgreSQL JDBC driver. \n\nPlease look at the sample code given below.\n\nInputStream banner;\nString bannerID = \"some id\";\nPreparedStatement pstmt =\n dbConnection.prepareStatement(\"Insert into tablename (BannerID, Banner) values(?,?)\");\nint NoOfBytes = banner.available();\nbyte[] bytebuffer = new byte[NoOfBytes];\nbanner.read(bytebuffer);\npstmt.setString(1, bannerID);\npstmt.setBytes(2, bytebuffer); \nint resultCount = pstmt.executeUpdate();\n\nAfter this, I successfully deployed this application in WebLogic-6.0 using the same PostgreSQL database and the JDBC driver. There, some other database-accessing parts worked fine, but the above image part did not work and gave an error message like \"FastPath call returned ERROR: lo_write: invalid large obj descriptor (0)\".\n\nI think nothing is wrong with the code... it could be a driver problem with WebLogic. Can anybody explain the above, please?\n\nThankx,\n\nPeiris\n",
"msg_date": "Mon, 16 Jul 2001 09:43:23 +0530",
"msg_from": "\"Sanath Peiris\" <bspeiris@lycos.com>",
"msg_from_op": true,
"msg_subject": "PGSQL problem with weblogic and OID data type"
},
{
"msg_contents": "Sanath Peiris wrote:\n> \n> Hi everybody,\n> \n> I got this problem while trying to work with WebLogic and PostgreSQL, storing images as byte arrays.\n> \n> First, I used JBOSS-2.2.2 as an application server and Postgresql-7.0.3 as a database server to run one of my Java enterprise applications. There I used the \"OID\" data type to store images, and it worked fine with the above combination. I used jdbc7.0-1.2.jar as the PostgreSQL JDBC driver.\n> \n> Please look at the sample code given below.\n> \n> InputStream banner;\n> String bannerID = \"some id\";\n> PreparedStatement pstmt =\n> dbConnection.prepareStatement(\"Insert into tablename (BannerID, Banner) values(?,?)\");\n> int NoOfBytes = banner.available();\n> byte[] bytebuffer = new byte[NoOfBytes];\n> banner.read(bytebuffer);\n> pstmt.setString(1, bannerID);\n> pstmt.setBytes(2, bytebuffer);\n> int resultCount = pstmt.executeUpdate();\n> \n> After this, I successfully deployed this application in WebLogic-6.0 using the same PostgreSQL database and the JDBC driver. There, some other database-accessing parts worked fine, but the above image part did not work and gave an error message like \"FastPath call returned ERROR: lo_write: invalid large obj descriptor (0)\".\n> \n> I think nothing is wrong with the code... it could be a driver problem with WebLogic. Can anybody explain the above, please?\n\n\nSanath\n\nDon't know if anybody replied to your question yet. But you should set\nyour AutoCommit to false, e.g. conn.setAutoCommit(false); Do this before\nyou execute your PreparedStatement.\nThat should fix your problem.\n\nRegards\nPhillip\n",
"msg_date": "Tue, 17 Jul 2001 08:51:47 +0200",
"msg_from": "Phillip F Jansen <pfj@ucs.co.za>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL problem with weblogic and OID data type"
}
] |
[
{
"msg_contents": "I have found that many TODO items would benefit from a pg_depend table\nthat tracks object dependencies. TODO updated.\n\n---------------------------------------------------------------------------\n\nDEPENDENCY CHECKING / pg_depend\n\n* Auto-destroy sequence on DROP of table with SERIAL, perhaps with a\n separate SERIAL type\n* Prevent column dropping if column is used by foreign key\n* Propagate column or table renaming to foreign key constraints\n* Automatically drop constraints/functions when object is dropped\n* Make constraints clearer in dump file\n* Make foreign keys easier to identify \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Jul 2001 01:00:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pg_depend"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I have found that many TODO items would benefit from a pg_depend table\n> that tracks object dependencies. TODO updated.\n\nI'm not so convinced on that idea. Assume you're dropping object foo.\nYou look at pg_depend and see that objects 145928, 264792, and 1893723\ndepend on it. Great, what do you do now?\n\nEvery system catalog (except the really badly designed ones) already\ncontains dependency information. What might help is that we make the\ninternal API for altering and dropping any kind of object more consistent\nand general so that they can call each other in the dependency case.\n(E.g., make sure none of them require whereToSendOutput or parser state as\nan argument.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 16 Jul 2001 21:42:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > I have found that many TODO items would benefit from a pg_depend table\n> > that tracks object dependencies. TODO updated.\n> \n> I'm not so convinced on that idea. Assume you're dropping object foo.\n> You look at pg_depend and see that objects 145928, 264792, and 1893723\n> depend on it. Great, what do you do now?\n> \n> Every system catalog (except the really badly designed ones) already\n> contains dependency information. What might help is that we make the\n> internal API for altering and dropping any kind of object more consistent\n> and general so that they can call each other in the dependency case.\n> (E.g., make sure none of them require whereToSendOutput or parser state as\n> an argument.)\n\nYes, it is not simple. The table is just one part of it. Code has to\ndo lookups and have cascade/failure options based on what it finds. \n\nThings can get quite complicated, especially circular dependencies. It\nneeds a general overhaul and has to hit every area. We need a central\nlocation to keep all this info.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Jul 2001 16:11:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Mon, 16 Jul 2001, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n> \n> > I have found that many TODO items would benefit from a pg_depend table\n> > that tracks object dependencies. TODO updated.\n> \n> I'm not so convinced on that idea. Assume you're dropping object foo.\n> You look at pg_depend and see that objects 145928, 264792, and 1893723\n> depend on it. Great, what do you do now?\nI believe someone else previously suggested this:\n\ndrop <type> object [RESTRICT | CASCADE]\n\nto make use of dependency info.\n\n> Every system catalog (except the really badly designed ones) already\n> contains dependency information. What might help is that we make the\n> internal API for altering and dropping any kind of object more consistent\n> and general so that they can call each other in the dependency case.\n> (E.g., make sure none of them require whereToSendOutput or parser state as\n> an argument.)\nYes, that's definitely requirement to implement the above...\n\n",
"msg_date": "Mon, 16 Jul 2001 18:16:26 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Alex Pilosov writes:\n\n> > I'm not so convinced on that idea. Assume you're dropping object foo.\n> > You look at pg_depend and see that objects 145928, 264792, and 1893723\n> > depend on it. Great, what do you do now?\n> I believe someone else previously suggested this:\n>\n> drop <type> object [RESTRICT | CASCADE]\n>\n> to make use of dependency info.\n\nThat was me. The point, however, was, given object id 145928, how the\nheck to you know what table this comes from?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Jul 2001 00:23:07 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Tue, 17 Jul 2001, Peter Eisentraut wrote:\n\n> Alex Pilosov writes:\n> \n> > > I'm not so convinced on that idea. Assume you're dropping object foo.\n> > > You look at pg_depend and see that objects 145928, 264792, and 1893723\n> > > depend on it. Great, what do you do now?\n> > I believe someone else previously suggested this:\n> >\n> > drop <type> object [RESTRICT | CASCADE]\n> >\n> > to make use of dependency info.\n> \n> That was me. The point, however, was, given object id 145928, how the\n> heck to you know what table this comes from?\n\nhave a view pg_objecttype which is a UNION across all the [relevant]\nsystem tables sounds fine to me, but maybe I'm missing something?\n\n\n\n",
"msg_date": "Mon, 16 Jul 2001 18:39:32 -0400 (EDT)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> Alex Pilosov writes:\n> \n> > > I'm not so convinced on that idea. Assume you're dropping object foo.\n> > > You look at pg_depend and see that objects 145928, 264792, and 1893723\n> > > depend on it. Great, what do you do now?\n> > I believe someone else previously suggested this:\n> >\n> > drop <type> object [RESTRICT | CASCADE]\n> >\n> > to make use of dependency info.\n> \n> That was me. The point, however, was, given object id 145928, how the\n> heck to you know what table this comes from?\n\nI think we will need the relid of the system table. I imagine four\ncolumns:\n\n\tobject relid\n\tobject oid\n\treference relid\n\treferences oid\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Jul 2001 19:13:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> You look at pg_depend and see that objects 145928, 264792, and 1893723\n> depend on it. Great, what do you do now?\n\n>> I believe someone else previously suggested this:\n>> drop <type> object [RESTRICT | CASCADE]\n>> to make use of dependency info.\n\n> That was me. The point, however, was, given object id 145928, how the\n> heck to you know what table this comes from?\n\nEven more to the point, what guarantee can we have that that OID even\ndefines a unique object at all? We have unique indexes that ensure\nthere are not two tables with the same OID, or two functions with the\nsame OID, etc --- but none that ensure uniqueness across system\ncatalogs.\n\nThe objects would need to be identified by two-part IDs, one part\nspecifying the object type and one giving its OID (which is known unique\nwithin that type). Possibly object type would be best handled by giving\nthe OID of the system catalog containing the object's definition row.\nIn any case, looking at the type part would let users of the pg_depend\ncatalog figure out what they needed to do.\n\nBTW, pg_description is broken because it assumes that OID alone is a\nsufficient identifier ... but since it's such a noncritical function,\nI haven't gotten too excited about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2001 19:23:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "> The objects would need to be identified by two-part IDs, one part\n> specifying the object type and one giving its OID (which is known unique\n> within that type). Possibly object type would be best handled by giving\n> the OID of the system catalog containing the object's definition row.\n> In any case, looking at the type part would let users of the pg_depend\n> catalog figure out what they needed to do.\n\nYes, exactly. Also, I can see code that will handles dependencies\ndifferently if it is a pg_class or pg_type row that is mentioned in\npg_depend.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Jul 2001 19:25:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Tue, 17 Jul 2001, Peter Eisentraut wrote:\n\n> Alex Pilosov writes:\n> \n> > > I'm not so convinced on that idea. Assume you're dropping object foo.\n> > > You look at pg_depend and see that objects 145928, 264792, and 1893723\n> > > depend on it. Great, what do you do now?\n> > I believe someone else previously suggested this:\n> >\n> > drop <type> object [RESTRICT | CASCADE]\n> >\n> > to make use of dependency info.\n> \n> That was me. The point, however, was, given object id 145928, how the\n> heck to you know what table this comes from?\n\nYou have three columns, depender, dependee, and the third the oid of the\nentry of pg_class describing what the depender is. Oh, actually you'd\nprobably need four columns, depender, dependee, depender in pg_class, and\ndependee in pg_class.\n\nTake care,\n\nBill\n\n",
"msg_date": "Mon, 16 Jul 2001 16:29:58 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Alex Pilosov writes:\n> \n> > > I'm not so convinced on that idea. Assume you're dropping object foo.\n> > > You look at pg_depend and see that objects 145928, 264792, and 1893723\n> > > depend on it. Great, what do you do now?\n> > I believe someone else previously suggested this:\n> >\n> > drop <type> object [RESTRICT | CASCADE]\n> >\n> > to make use of dependency info.\n> \n> That was me. The point, however, was, given object id 145928, how the\n> heck to you know what table this comes from?\n> \n\nIs it really determined that *DROP OBJECT* drops the objects\nwhich are dependent on it ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 17 Jul 2001 09:55:42 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Tue, 17 Jul 2001, Hiroshi Inoue wrote:\n\n> Peter Eisentraut wrote:\n> > \n> > Alex Pilosov writes:\n> > \n> > > drop <type> object [RESTRICT | CASCADE]\n> > >\n> > > to make use of dependency info.\n> > \n> > That was me. The point, however, was, given object id 145928, how the\n> > heck to you know what table this comes from?\n> > \n> \n> Is it really determined that *DROP OBJECT* drops the objects\n> which are dependent on it ?\n\nIf you used DROP OBJECT CASCADE, yes. That's what CASCADE is saying.\n\nI think the idea is that you can say what happens - delete dependents, or\ndo something else.\n\nTake care,\n\nBill\n\n",
"msg_date": "Mon, 16 Jul 2001 18:31:21 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Is it really determined that *DROP OBJECT* drops the objects\n> which are dependent on it ?\n\nDROP object CASCADE should work that way, because that's what the spec\nsays.\n\nWhether the default DROP behavior should be CASCADE, RESTRICT, or the\ncurrent laissez-faire behavior remains to be debated ;-). The spec\nis no help since it has no default: DROP *requires* a CASCADE or\nRESTRICT option in SQL92. But I doubt our users will let us get away\nwith changing the syntax that way. So, once we have the CASCADE and\nRESTRICT options implemented, we'll need to decide what an unadorned\nDROP should do. Opinions anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2001 21:51:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Is it really determined that *DROP OBJECT* drops the objects\n> > which are dependent on it ?\n> \n> DROP object CASCADE should work that way, because that's what the spec\n> says.\n> \n> Whether the default DROP behavior should be CASCADE, RESTRICT, or the\n> current laissez-faire behavior remains to be debated ;-). The spec\n> is no help since it has no default: DROP *requires* a CASCADE or\n> RESTRICT option in SQL92. But I doubt our users will let us get away\n> with changing the syntax that way. So, once we have the CASCADE and\n> RESTRICT options implemented, we'll need to decide what an unadorned\n> DROP should do. Opinions anyone?\n\nDon't forget RENAME.\n\nAnd what do we do if two items depend on the same object.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Jul 2001 21:57:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> Whether the default DROP behavior should be CASCADE, RESTRICT, or the\n> current laissez-faire behavior remains to be debated ;-). The spec\n> is no help since it has no default: DROP *requires* a CASCADE or\n> RESTRICT option in SQL92. But I doubt our users will let us get away\n> with changing the syntax that way. So, once we have the CASCADE and\n> RESTRICT options implemented, we'll need to decide what an unadorned\n> DROP should do. Opinions anyone?\n\nHmmm... an unadorned drop could remove the object without RESTRICTing it or\nCASCADEing it. Hence, if there are objects that depend on it, the object\nwill be removed anyway, and dependent objects will not be touched. It's one\nof those things that gives the DBA power, but might let them munge their\ndatabase. (Although it's exactly the same as the current way things happen.)\n\nChris\n\n",
"msg_date": "Tue, 17 Jul 2001 10:16:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: pg_depend "
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > Whether the default DROP behavior should be CASCADE, RESTRICT, or the\n> > current laissez-faire behavior remains to be debated ;-). The spec\n> > is no help since it has no default: DROP *requires* a CASCADE or\n> > RESTRICT option in SQL92. But I doubt our users will let us get away\n> > with changing the syntax that way. So, once we have the CASCADE and\n> > RESTRICT options implemented, we'll need to decide what an unadorned\n> > DROP should do. Opinions anyone?\n> \n> Hmmm... an unadorned drop could remove the object without RESTRICTing it or\n> CASCADEing it. Hence, if there are objects that depend on it, the object\n> will be removed anyway, and dependent objects will not be touched. \n\nWe could mark the objects (and their dependent objects) as *INVALID*.\nThey would revive when the referenced objects revive in the world of *name*s.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 17 Jul 2001 11:58:12 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > That was me. The point, however, was, given object id 145928, how the\n> > heck to you know what table this comes from?\n>\n> I think we will need the relid of the system table. I imagine four\n> columns:\n>\n> \tobject relid\n> \tobject oid\n> \treference relid\n> \treferences oid\n\nI'm not seeing the point. You're essentially duplicating the information\nthat's already available in the system catalogs. This is bound to become\na catastrophe the minute a user steps in and does manual surgery on some\ncatalog. (And yes, manual surgery should still be possible.)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Jul 2001 15:00:50 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > > That was me. The point, however, was, given object id 145928, how the\n> > > heck to you know what table this comes from?\n> >\n> > I think we will need the relid of the system table. I imagine four\n> > columns:\n> >\n> > \tobject relid\n> > \tobject oid\n> > \treference relid\n> > \treferences oid\n> \n> I'm not seeing the point. You're essentially duplicating the information\n> that's already available in the system catalogs. This is bound to become\n> a catastrophe the minute a user steps in and does manual surgery on some\n> catalog. (And yes, manual surgery should still be possible.)\n\nBut how then do you find the system table that uses the given oid? \nWasn't that your valid complaint?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 09:55:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > I'm not seeing the point. You're essentially duplicating the information\n> > that's already available in the system catalogs. This is bound to become\n> > a catastrophe the minute a user steps in and does manual surgery on some\n> > catalog. (And yes, manual surgery should still be possible.)\n>\n> But how then do you find the system table that uses the given oid?\n\nIt's implied by the column you're looking at.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Jul 2001 17:30:03 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > > I'm not seeing the point. You're essentially duplicating the information\n> > > that's already available in the system catalogs. This is bound to become\n> > > a catastrophe the minute a user steps in and does manual surgery on some\n> > > catalog. (And yes, manual surgery should still be possible.)\n> >\n> > But how then do you find the system table that uses the given oid?\n> \n> It's implied by the column you're looking at.\n\nIs it? Are we going to record dependency both ways, e.g primary table\n-> foreign table and foreign table -> primary table, or just one of\nthem. And when we see we depend on something, do we know always what it\ncould be. If I drop a table and I depend on oid XXX, do I know if that\nis a type, function, or serial sequence?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 12:01:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Is it? Are we going to record dependency both ways, e.g primary table\n> -> foreign table and foreign table -> primary table, or just one of\n> them. And when we see we depend on something, do we know always what it\n> could be. If I drop a table and I depend on oid XXX, do I know if that\n> is a type, function, or serial sequence?\n\nWhen you drop a table, there are only so many things that could depend on\nit:\n\n* rules/views\n* triggers\n* check constraints\n* foreign key constraints\n* primary key constraints\n* unique constraints\n* subtables\n\nincluding their dependencies. There might be others I forgot but a\nfinite list can be defined.\n\nWhen a table is dropped, you scan all of these objects (their system\ncatalogs) for matches against the table and either do a cascade or\nrestrict. This is not new, we already do this for indexes and\ndescriptions, for instance.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Jul 2001 18:44:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> When a table is dropped, you scan all of these objects (their system\n> catalogs) for matches against the table and either do a cascade or\n> restrict. This is not new, we already do this for indexes and\n> descriptions, for instance.\n\nI was thinking we could centralize all that checking in pg_depend. \nHowever, we could decide just to do the areas where system tables don't\nwork, like foreign keys and sequences. But when I find an oid depends\non me, do I start scanning tables looking to see if it is a sequence or a\nforeign key?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 12:46:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n>> But how then do you find the system table that uses the given oid?\n\n> It's implied by the column you're looking at.\n\nIt is? Remember that we need to use this table to get from an object\nto the objects that depend on it. A datatype OID, for example, would\nhave table OIDs (for column datatypes), function OIDs (for argument\ndatatypes), operator OIDs (ditto), aggregate OIDs (ditto), etc etc\ndependent on it. How will you intuit which of those is represented\nby a given row in pg_depend?\n\nThe alternative to pg_depend is to do a brute force scan of all the\nsystem catalogs looking for dependent objects. In that case, you'd\nknow what you are looking at, but if we extract the dependencies as\na separate table, I don't see how you'd know without being told.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 13:33:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "> When you drop a table, there are only so many things that could depend on\n> it:\n> \n> * rules/views\n> * triggers\n> * check constraints\n> * foreign key constraints\n> * primary key constraints\n> * unique constraints\n> * subtables\n> \n> including their dependencies. There might be others I forgot but a\n> finite list can be defined.\n> \n> When a table is dropped, you scan all of these objects (their system\n> catalogs) for matches against the table and either do a cascade or\n> restrict. This is not new, we already do this for indexes and\n> descriptions, for instance.\n\nHere is how I see it. If you use the pg_depend table to track these\ndependencies, you know at the time you do the insert where they come\nfrom so why not just record it at that time? Why poke around later\nlooking at many system tables? The big issue is that you can pretty\nmuch centralize the stuff during INSERT and just use that on\nDROP/RENAME. I can even see a loop that says, \"I am OK with sequence\ndependencies, but not other pg_class dependencies\" or stuff like that. \nYou can just trigger on the sysrelid in the table and determine where to\ngo. If not you have to have all sorts of system poking code in\nDROP/RENAME, unless you want to just call a function to hit _every_\nsystem table looking for the oid, which I doubt you want to do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 15:12:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Tom Lane writes:\n\n> The alternative to pg_depend is to do a brute force scan of all the\n> system catalogs looking for dependent objects. In that case, you'd\n> know what you are looking at, but if we extract the dependencies as\n> a separate table, I don't see how you'd know without being told.\n\nThe former is what I'm advocating.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Jul 2001 21:58:45 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "> Tom Lane writes:\n> \n> > The alternative to pg_depend is to do a brute force scan of all the\n> > system catalogs looking for dependent objects. In that case, you'd\n> > know what you are looking at, but if we extract the dependencies as\n> > a separate table, I don't see how you'd know without being told.\n> \n> The former is what I'm advocating.\n\nSo you are basically saying you don't like pg_depend. Would you prefer\nto use it only in cases we can't encode the dependencies easily in the\nsystem catalogs, like functions that require certain relations?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 16:03:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> The alternative to pg_depend is to do a brute force scan of all the\n>> system catalogs looking for dependent objects. In that case, you'd\n>> know what you are looking at, but if we extract the dependencies as\n>> a separate table, I don't see how you'd know without being told.\n\n> The former is what I'm advocating.\n\nSeems like a bad idea; it'll slow down deletes quite a lot, no? Do you\nreally want to (for example) parse every SQL function in the system to\nsee if it refers to a table being dropped? Why would we want to do that\nwork over again for every such delete, rather than doing it once when\nan object is created and storing the info in a table? Also consider\nthat what you are proposing is (at least) an O(N^2) algorithm when there\nare a large number of objects.\n\nFurthermore, a separate dependency table would allow us to support\nuser-defined dependencies. It could be that the user knows function A\nshould go away if table B does, yet there is no physical dependency that\nthe system would recognize for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 16:26:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "On Tue, 17 Jul 2001, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n> \n> > The alternative to pg_depend is to do a brute force scan of all the\n> > system catalogs looking for dependent objects. In that case, you'd\n> > know what you are looking at, but if we extract the dependencies as\n> > a separate table, I don't see how you'd know without being told.\n> \n> The former is what I'm advocating.\n\nWhy? It's grossly inefficient and requires lots of effort. And scales\nhorribly to adding new things which can depend on others.\n\nFollowing that argument (admittedly to an extreme conclusion), we should\nrip out index support. After all, all of the info in the index is stored\nin the table, we don't need to duplicate it elsewhere.\n\npg_depend is a concise way to encode dependencies. We do all of the work\nat insert, where we know what depends on what. To not have pg_depend means\nthat on delete, we have to scan EVERYTHING to see what depends on what\nwe're dropping. If we find something (and are CASCADEing), we have to\ncheck and see if _it_ depends on anything (another complete scan). We have\nto keep doing complete scans until we find nothing.\n\nTake care,\n\nBill\n\n",
"msg_date": "Tue, 17 Jul 2001 15:07:01 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "On Tue, 17 Jul 2001, Tom Lane wrote:\n\n> Seems like a bad idea; it'll slow down deletes quite a lot, no? Do you\n> really want to (for example) parse every SQL function in the system to\n> see if it refers to a table being dropped? Why would we want to do that\n> work over again for every such delete, rather than doing it once when\n> an object is created and storing the info in a table? Also consider\n> that what you are proposing is (at least) an O(N^2) algorithm when there\n> are a large number of objects.\n\nI think it's actually O(N^M) where there are N system objects and a chain\nof M dependencies (A depends on B which depends on C => M = 3).\n\nTake care,\n\nBill\n\n",
"msg_date": "Tue, 17 Jul 2001 15:09:28 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "Bill Studenmund <wrstuden@zembu.com> writes:\n> I think it's actually O(N^M) where there are N system objects and a chain\n> of M dependencies (A depends on B which depends on C => M = 3).\n\nIt's probably not *that* bad. It's reasonable to assume that only a\nsmall number of objects actually depend directly on any one object you\nmight want to delete. (Performance of deleting, say, the int4 datatype\nis probably not of major interest ;-) ...) Only for those objects, not\nfor all N, would you need to descend to the next level of search.\n\nNonetheless, a properly indexed pg_depend table would allow you to find\nthese objects directly, and again to find their dependents directly,\netc. The brute force approach would require a rather expensive scan\nover all the system catalogs, plus nontrivial analysis for some types\nof system objects such as functions. Repeating that for each cascaded\ndelete is even less appetizing than doing it once.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 19:13:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "On Tue, 17 Jul 2001, Tom Lane wrote:\n\n> Bill Studenmund <wrstuden@zembu.com> writes:\n> > I think it's actually O(N^M) where there are N system objects and a chain\n> > of M dependencies (A depends on B which depends on C => M = 3).\n> \n> It's probably not *that* bad. It's reasonable to assume that only a\n> small number of objects actually depend directly on any one object you\n> might want to delete. (Performance of deleting, say, the int4 datatype\n> is probably not of major interest ;-) ...) Only for those objects, not\n> for all N, would you need to descend to the next level of search.\n\nAh yes. It'll be O(ND) where D is the number of dependers (the number of\nleaves in the dependency tree).\n\n> Nonetheless, a properly indexed pg_depend table would allow you to find\n> these objects directly, and again to find their dependents directly,\n> etc. The brute force approach would require a rather expensive scan\n> over all the system catalogs, plus nontrivial analysis for some types\n> of system objects such as functions. Repeating that for each cascaded\n> delete is even less appetizing than doing it once.\n\nIndeed.\n\nTake care,\n\nBill\n\n",
"msg_date": "Tue, 17 Jul 2001 16:24:01 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Bruce Momjian writes:\n> \n> > > That was me. The point, however, was, given object id 145928, how the\n> > > heck to you know what table this comes from?\n> >\n> > I think we will need the relid of the system table. I imagine four\n> > columns:\n> >\n> > object relid\n> > object oid\n> > reference relid\n> > references oid\n> \n\nI like \n\tobject relid\n\tobject oid\n\tobject name\n\treference relid\n\treference oid\n\nand unadorned DROP doesn't drop dependent objects.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 18 Jul 2001 09:47:51 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> I like \n> \tobject relid\n> \tobject oid\n> \tobject name\n> \treference relid\n> \treference oid\n\nCan I ask why you like the object name?\n\n> \n> and unadorned DROP doesn't drop dependent objects.\n\nOK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 21:49:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > I like\n> > object relid\n> > object oid\n> > object name\n> > reference relid\n> > reference oid\n> \n> Can I ask why you like the object name?\n> \n\nOops, I made a mistake.\nA reference name is needed, not an object name,\ni.e.\n\tobject relid\n\tobject oid\n\treference relid\n\treference oid\n\treference name\n\n create table a (...);\n create view view_a as select .. from a;\n\nThen we have a pg_depend entry, e.g.\n\n\tpg_class_relid\n\toid of the view_a\n\tpg_class_relid\n\toid of the table a\n\t'a' the name of the table\n\nand so on.\n\n drop table a; (unadorned drop).\n\nThen the above entry would be changed to\n\n\tpg_class_relid(unchanged)\n\toid of the view_a (unchanged)\n\tpg_class_relid(unchanged)\n\tInvalidOid\n\t'a' the name of the table(unchanged)\n\n create table a (...);\n\nThen the pg_depend entry would be\n\n\tpg_class_relid(unchanged)\n\toid of the view_a (unchanged)\n\tpg_class_relid(unchanged)\n\tthe oid of the new table a\n\t'a' the name of the table(unchanged)\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 18 Jul 2001 11:25:10 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> Then we have a pg_depend entry, e.g.\n> \n> \tpg_class_relid\n> \toid of the view_a\n> \tpg_class_relid\n> \toid of the table a\n> \t'a' the name of the table\n> \n> and so on.\n> \n> drop table a; (unadorned drop).\n> \n> Then the above entry would be changed to\n> \n> \tpg_class_relid(unchanged)\n> \toid of the view_a (unchanged)\n> \tpg_class_relid(unchanged)\n> \tInvalidOid\n> \t'a' the name of the table(unchanged)\n> \n> create table a (...);\n> \n> Then the pg_depend entry would be\n> \n> \tpg_class_relid(unchanged)\n> \toid of the view_a (unchanged)\n> \tpg_class_relid(unchanged)\n> \tthe oid of the new table a\n> \t'a' the name of the table(unchanged)\n\nSo you want to keep the name of the referenced object in case it is\ndropped. Makes sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 22:29:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "At 11:25 18/07/01 +0900, Hiroshi Inoue wrote:\n>\n>Oops, I made a mistake.\n>A reference name is needed, not an object name,\n>i.e.\n>\tobject relid\n>\tobject oid\n>\treference relid\n>\treference oid\n>\treference name\n>\n\nI think any design needs to cater for attr dependencies, e.g.\n\n create table a (f1 int4, f2 int8);\n create view view_a as select f2 from a;\n\nThen\n\n alter table a drop f1; -- Is OK. Should just happen\n alter table a drop f2; -- Should warn about the view, and/or cascade etc.\n alter table a alter f2 float; -- Should trigger a view recompilation.\n\n...same thing needs to happen with constraints that reference attrs\n\nI *think* tables are the only items that can have dependent subobjects.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 18 Jul 2001 13:08:15 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Tue, Jul 17, 2001 at 07:13:10PM -0400, Tom Lane wrote:\n> \n> Nonetheless, a properly indexed pg_depend table would allow you to find\n> these objects directly, and again to find their dependents directly,\n> etc. The brute force approach would require a rather expensive scan\n> over all the system catalogs, plus nontrivial analysis for some types\n> of system objects such as functions. Repeating that for each cascaded\n> delete is even less appetizing than doing it once.\n\nStated that way, the performance argument sounds very convincing. However,\nthe _real_ convincer for me is the support for user designated\ndependencies, as Tom pointed out earlier. That allows the system to do\nas much as possible automatically (even functional dependency analysis,\nif someone wants to write it) but doesn't require the automatic mechanisms\nto be perfect: the DBA has a mechanism to do the crazy, edge case things.\n\nRoss\n",
"msg_date": "Wed, 18 Jul 2001 09:48:28 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Wed, Jul 18, 2001 at 01:08:15PM +1000, Philip Warner wrote:\n> At 11:25 18/07/01 +0900, Hiroshi Inoue wrote:\n> >\n> >Oops, I made a mistake.\n> >A reference name is needed, not an object name,\n> >i.e.\n> >\tobject relid\n> >\tobject oid\n> >\treference relid\n> >\treference oid\n> >\treference name\n> >\n> \n> I think any design needs to cater for attr dependencies, e.g.\n> \n> create table a (f1 int4, f2 int8);\n> create view view_a as select f2 from a;\n> \n> Then\n> \n> alter table a drop f1; -- Is OK. Should just happen\n> alter table a drop f2; -- Should warn about the view, and/or cascade etc.\n> alter table a alter f2 float; -- Should trigger a view recompilation.\n> \n> ...same thing needs to happen with constraints that reference attrs\n> \n> I *think* tables are the only items that can have dependent subobjects.\n\nWouldn't that work simply by using the oid for the column in pg_attribute\nas the primary dependency, rather than the table itself, from pg_class? So,\nthe dependency chain would be:\n\nview -> attribute -> table\n\nSo your examples would 'just work', I think.\n\nRoss\n",
"msg_date": "Wed, 18 Jul 2001 09:52:32 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": ">\n>Wouldn't that work simply by using the oid for the column in pg_attribute\n>as the primary dependency, rather than the table itself, from pg_class? So,\n>the dependency chain would be:\n>\n>view -> attribute -> table\n>\n>So your examples would 'just work', I think.\n>\n\nTrue. We need to remember to store both sets of dependencies (used attrs as\nwell as the table dependency).\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 19 Jul 2001 00:58:26 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> >\n> >Wouldn't that work simply by using the oid for the column in pg_attribute\n> >as the primary dependency, rather than the table itself, from pg_class? So,\n> >the dependency chain would be:\n> >\n> >view -> attribute -> table\n> >\n> >So your examples would 'just work', I think.\n> >\n> \n> True. We need to remember to store both sets of dependencies (used attrs as\n> well as the table dependency).\n\nTODO update with column labels:\n\n* Add pg_depend table for dependency recording; use sysrelid, oid, \n depend_sysrelid, depend_oid, name \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 11:26:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Reference name is needed not an object name,\n\nOnly if we want to support the notion that drop-and-recreate-with-same-name\nmeans that references from other objects should now apply to the new\nobject. I do not think that that's really a good idea, at least not\nwithout a heck of a lot of compatibility checking. It'd be way too easy\nto create cases where the properties of the new object do not match\nwhat the referring object expects.\n\nThe majority of the cases I've heard about where this would be useful\nare for functions, and we could solve that a lot better with an ALTER\nFUNCTION command that allows changing the function body (but not the\nname, arguments, or result type).\n\nBTW, name alone is not a good enough referent for functions... you'd\nhave to store the argument types too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 11:31:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> I think any deisgn needs to cater for attr dependencies. eg.\n\nI don't really see a need to recognize dependencies at finer than table\nlevel. I'd just make the dependency be from view_a to a and keep things\nsimple. What's so wrong with recompiling the view for *every* change\nof the underlying table?\n\nWe could support attr-level dependencies within the proposed pg_depend\nlayout if we made pg_attribute one of the allowed object categories.\nHowever, I'd prefer not to make OID of pg_attribute rows be a primary\nkey for that table (in the long run I'd like to not assign OIDs at all\nto pg_attribute, as well as other tables that don't need OIDs). So the\nbetter way to do it would be to make the pg_depend entries include\nattribute numbers. But I really think this is unnecessary complexity.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 11:38:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "At 11:38 18/07/01 -0400, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> I think any deisgn needs to cater for attr dependencies. eg.\n>\n>I don't really see a need to recognize dependencies at finer than table\n>level. I'd just make the dependency be from view_a to a and keep things\n>simple. What's so wrong with recompiling the view for *every* change\n>of the underlying table?\n>\n\nNot a problem for views, but when you get to constraints on large tables,\nre-evaluating all the constraints unnecessarily could be a nightmare, and\nespecially frustrating when you just dropped an irrelevant attr.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 19 Jul 2001 01:43:10 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Reference name is needed not an object name,\n> \n> Only if we want to support the notion that drop-and-recreate-with-same-name\n> means that references from other objects should now apply to the new\n> object. I do not think that that's really a good idea, at least not\n> without a heck of a lot of compatibility checking. It'd be way too easy\n> to create cases where the properties of the new object do not match\n> what the referring object expects.\n> \n> The majority of the cases I've heard about where this would be useful\n> are for functions, and we could solve that a lot better with an ALTER\n> FUNCTION command that allows changing the function body (but not the\n> name, arguments, or result type).\n> \n> BTW, name alone is not a good enough referent for functions... you'd\n> have to store the argument types too.\n\nI assume the name was only for reference use so you could give the user\nan idea of what is missing. Clearly you don't use that to recreate\nanything, or I hope not.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 11:56:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "> Philip Warner <pjw@rhyme.com.au> writes:\n> > I think any deisgn needs to cater for attr dependencies. eg.\n> \n> I don't really see a need to recognize dependencies at finer than table\n> level. I'd just make the dependency be from view_a to a and keep things\n> simple. What's so wrong with recompiling the view for *every* change\n> of the underlying table?\n\nWhat about other objects. Foreign keys? Serial?\n\n> We could support attr-level dependencies within the proposed pg_depend\n> layout if we made pg_attribute one of the allowed object categories.\n> However, I'd prefer not to make OID of pg_attribute rows be a primary\n> key for that table (in the long run I'd like to not assign OIDs at all\n> to pg_attribute, as well as other tables that don't need OIDs). So the\n> better way to do it would be to make the pg_depend entries include\n> attribute numbers. But I really think this is unnecessary complexity.\n\nI liked the pg_attribute references for some uses. I agree doing that\nfor a view seems overly complex.\n\nI don't see any value in dropping oid from pg_attribute.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 11:59:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 11:38 18/07/01 -0400, Tom Lane wrote:\n>> I'd just make the dependency be from view_a to a and keep things\n>> simple. What's so wrong with recompiling the view for *every* change\n>> of the underlying table?\n\n> Not a problem for views, but when you get to constraints on large tables,\n> re-evaluating all the constraints unnecessarily could be a nightmare, and\n> especially frustrating when you just dropped an irrelevant attr.\n\nHuh? You seem to be thinking that we'd need to re-check the constraint\nat each row of the table, but I don't see why we'd need to. I was just\nenvisioning re-parsing the constraint source text.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 12:37:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Reference name is needed not an object name,\n> \n> Only if we want to support the notion that \n> drop-and-recreate-with-same-name\n> means that references from other objects should now apply to the new\n> object. I do not think that that's really a good idea, at least not\n> without a heck of a lot of compatibility checking. It'd be way too easy\n> to create cases where the properties of the new object do not match\n> what the referring object expects.\n> \n\nFor example, we would process the following steps to drop a\ncolumn.\n\nselect ....(all columns except a column) from a into b;\ndrop table a;\nalter table b rename to a;\n\nBut we would lose all relevant objects.\n\nThough we may be able to solve this problem by implementing\n*drop column* properly, we couldn't solve these kinds of problems\nat once. In fact neither *drop column* nor *cluster* is solved.\nWe could always have (at least) the second best way by\nallowing drop-and-recreate-with-same-name revival.\n\n> The majority of the cases I've heard about where this would be useful\n> are for functions, and we could solve that a lot better with an ALTER\n> FUNCTION command that allows changing the function body (but not the\n> name, arguments, or result type).\n> \n> BTW, name alone is not a good enough referent for functions... you'd\n> have to store the argument types too.\n> \n\n??? Isn't an entry\n\tpg_proc_relid\n\tthe oid of the function\n\tpg_type_relid\n\tthe oid of an argument type\n\tthe name of the argument type\nmade?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 19 Jul 2001 02:23:44 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: pg_depend "
},
{
"msg_contents": "On Wed, 18 Jul 2001, Hiroshi Inoue wrote:\n\n> Oops, I made a mistake.\n> A reference name is needed, not an object name,\n> i.e.\n> \tobject relid\n> \tobject oid\n> \treference relid\n> \treference oid\n> \treference name\n> \n> create table a (...);\n> create view view_a as select .. from a;\n> \n> Then we have a pg_depend entry, e.g.\n> \n> \tpg_class_relid\n> \toid of the view_a\n> \tpg_class_relid\n> \toid of the table a\n> \t'a' the name of the table\n> \n> and so on.\n> \n> drop table a; (unadorned drop).\n> \n> Then the above entry would be changed to\n> \n> \tpg_class_relid(unchanged)\n> \toid of the view_a (unchanged)\n> \tpg_class_relid(unchanged)\n> \tInvalidOid\n> \t'a' the name of the table(unchanged)\n> \n> create table a (...);\n> \n> Then the pg_depend entry would be\n> \n> \tpg_class_relid(unchanged)\n> \toid of the view_a (unchanged)\n> \tpg_class_relid(unchanged)\n> \tthe oid of the new table a\n> \t'a' the name of the table(unchanged)\n\nThis step I disagree with. Well, I disagree with the automated aspect of\nthe update. How does postgres know that the new table a is sufficiently\nlike the old table that it should be used? A way the DBA could say, \"yeah,\nre-establish that,\" would be fine.\n\nWhich is better, a view which is broken as the table it was based off of\nwas dropped (even though there's a table of the same name now) or a view\nwhich is broken because there is now a table whose name matches its\nold table's name, but has different columns (either names or types)?\n\nI'd say #1.\n\nTake care,\n\nBill\n\n",
"msg_date": "Wed, 18 Jul 2001 10:25:00 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n>> BTW, name alone is not a good enough referent for functions... you'd\n>> have to store the argument types too.\n\n> ??? Isn't an entry\n> \tpg_proc_relid\n> \tthe oid of the function\n> \tpg_type_relid\n> \tthe oid of an argument type\n> \tthe name of the argument type\n> made ?\n\nThat's the entry that was dropped, no? Given a pg_depend row pointing\nat a function named foo, with an OID that no longer exists, how will you\ntell which of the (possibly many) functions named foo is wanted?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 13:29:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't see any value in dropping oid from pg_attribute.\n\nConservation of OIDs. Assigning an OID to every row of pg_attribute\nchews up lots of OIDs, for a table that should never be referenced by\nOID --- its primary key is (table OID, attribute number).\n\nRight now this isn't really significant, but if/when we have an option\nto suppress OID generation for user tables, I have every intention of\napplying it to a bunch of the system tables as well. pg_attribute is\na prime candidate.\n\n(\"When\" probably means \"next month\", btw. This is on my 7.2 list...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 13:34:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't see any value in dropping oid from pg_attribute.\n> \n> Conservation of OIDs. Assigning an OID to every row of pg_attribute\n> chews up lots of OIDs, for a table that should never be referenced by\n> OID --- its primary key is (table OID, attribute number).\n> \n> Right now this isn't really significant, but if/when we have an option\n> to suppress OID generation for user tables, I have every intention of\n> applying it to a bunch of the system tables as well. pg_attribute is\n> a prime candidate.\n> \n> (\"When\" probably means \"next month\", btw. This is on my 7.2 list...)\n\nYikes, I am not sure we are ready to make oids optional. System table\noid's seem like the last place to try and preserve oids. Do we return\nunused oids back to the pool on backend exit yet? (I don't see it on\nthe TODO list.) That seems like a much more profitable place to start.\n\nWill we have cheap 64-bit oids by the time oid wraparound becomes an\nissue?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 13:41:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yikes, I am not sure we are ready to make oids optional.\n\nWe've discussed it enough, it's time to do it. I have an ulterior plan\nhere: I want 7.2 not to have any limitations that prevent it from being\nused in a true 24x7, up-forever scenario. VACUUM lockouts are fixed\nnow, or nearly. The other stumbling blocks for continuous runs are OID\nwraparound and XID wraparound. We've got unique indexes on OIDs for all\nsystem catalogs that need them (we were short a couple as of 7.1, btw),\nbut OID wrap is still likely to lead to unwanted \"duplicate key\"\nfailures. So we still need a way to reduce the system's appetite for\nOIDs. In a configuration where OIDs are used only where *necessary*,\nit'd be a long time till wrap. I also intend to do something about XID\nwrap next month...\n\n> Do we return unused oids back to the pool on backend exit yet?\n\nSince WAL, and that was never a fundamental answer anyway.\n\n> Will we have cheap 64-bit oids by the time oid wraparound becomes an\n> issue?\n\nNo, we won't, because OID wrap is an issue already for any long-uptime\ninstallation. (64-bit XIDs are not a real practical answer either,\nbtw.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 13:52:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "On Wednesday 18 July 2001 13:52, Tom Lane wrote:\n> here: I want 7.2 not to have any limitations that prevent it from being\n> used in a true 24x7, up-forever scenario. VACUUM lockouts are fixed\n> now, or nearly. The other stumbling blocks for continuous runs are OID\n\nGo for it, Tom. After the posting the other day about the 200GB data per \nweek data load, this _really_ needs to be done. It won't directly affect me, \nas my needs are a little more modest (just about anything looks modest \ncompared to _that_ data load).\n\nPetty limitations such as these two need to go away, and soon -- we're \ngetting used by big installations now. This isn't Stonebraker's research \nPostgres anymore. The 7.1 removal of previous limitations was nearly overdue \n-- and these two issues of ID wrap need to be addressed -- my gut feel is \nthat the reports of OID/XID wrap are going to skyrocket within 6 months as \nbigger and bigger installations try out PostgreSQL/RHDB (fact is that many \nare going to try it out _because_ it has been relabeled by Red Hat....).\n\nThe MySQL/NuSphere articles illustrate that -- the NuSphere guy goes as far \nas saying that the support of _Red_Hat_ is what gives PG credibilitiy -- and, \nyou have to admit, RH's adoption of PG does increase, in many circles, PG's \ncredibility.\n\nOf course, PG has credibility with me for other reasons -- it was, IMHO, just \na matter of time before Red Hat saw the PostgreSQL Light.....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Jul 2001 14:59:06 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> ... these two issues of ID wrap need to be addressed -- my gut feel is \n> that the reports of OID/XID wrap are going to skyrocket within 6 months as \n> bigger and bigger installations try out PostgreSQL/RHDB \n\nYes, my thoughts exactly. We're trying to play in the big leagues now.\nI don't believe we can put these problems off any longer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 15:04:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yikes, I am not sure we are ready to make oids optional.\n> \n> We've discussed it enough, it's time to do it. I have an ulterior plan\n> here: I want 7.2 not to have any limitations that prevent it from being\n> used in a true 24x7, up-forever scenario. VACUUM lockouts are fixed\n> now, or nearly. The other stumbling blocks for continuous runs are OID\n> wraparound and XID wraparound. We've got unique indexes on OIDs for all\n> system catalogs that need them (we were short a couple as of 7.1, btw),\n> but OID wrap is still likely to lead to unwanted \"duplicate key\"\n> failures. So we still need a way to reduce the system's appetite for\n> OIDs. In a configuration where OIDs are used only where *necessary*,\n> it'd be a long time till wrap. I also intend to do something about XID\n> wrap next month...\n\nIf you want to make oids optional on user tables, we can vote on that. \nHowever, OID's keep our system tables together. Though we don't need\nthem on every system table, it seems they should be on all system tables\njust for completeness. Are we really losing a significant amount of\noids through system tables?\n\n> > Do we return unused oids back to the pool on backend exit yet?\n> \n> Since WAL, and that was never a fundamental answer anyway.\n> \n> > Will we have cheap 64-bit oids by the time oid wraparound becomes an\n> > issue?\n> \n> No, we won't, because OID wrap is an issue already for any long-uptime\n> installation. (64-bit XIDs are not a real practical answer either,\n> btw.)\n\nHave we had a wraparound yet?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 15:12:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > ... these two issues of ID wrap need to be addressed -- my gut feel is \n> > that the reports of OID/XID wrap are going to skyrocket within 6 months as \n> > bigger and bigger installations try out PostgreSQL/RHDB \n> \n> Yes, my thoughts exactly. We're trying to play in the big leagues now.\n> I don't believe we can put these problems off any longer.\n\nIs the idea to make oid's optional, with them disabled by default on\nuser tables?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 15:32:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is the idea to make oid's optional, with them disabled by default on\n> user tables?\n\nMy thought is to make OID generation optional on a per-table basis, and\ndisable it on system tables that don't need unique OIDs. (OID would\nread as NULL on any row for which an OID wasn't generated.)\n\nIt remains to be debated exactly how users should control the choice for\nuser tables, and which choice ought to be the default. I don't have a\nstrong opinion about that either way, and am prepared to hear\nsuggestions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 16:06:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is the idea to make oid's optional, with them disabled by default on\n> > user tables?\n> \n> My thought is to make OID generation optional on a per-table basis, and\n> disable it on system tables that don't need unique OIDs. (OID would\n> read as NULL on any row for which an OID wasn't generated.)\n> \n> It remains to be debated exactly how users should control the choice for\n> user tables, and which choice ought to be the default. I don't have a\n> strong opinion about that either way, and am prepared to hear\n> suggestions.\n\nI think it should be off on user tables by default, but kept on system\ntables just for completeness. It could be added at table creation time\nor from ALTER TABLE ADD. It seems we just use them too much for system\nstuff. pg_description is just one example.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 16:08:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "On Wednesday 18 July 2001 16:06, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is the idea to make oid's optional, with them disabled by default on\n> > user tables?\n\n> It remains to be debated exactly how users should control the choice for\n> user tables, and which choice ought to be the default. I don't have a\n> strong opinion about that either way, and am prepared to hear\n> suggestions.\n\nSET OIDGEN boolean for database-wide default policy.\nCREATE TABLE WITH OIDS for individual tables? CREATE TABLE WITHOUT OIDS?\n?? Is this sort of thing addressed by any SQL standard (Thomas?)?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Jul 2001 16:10:13 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Wednesday 18 July 2001 16:06, Tom Lane wrote:\n>> It remains to be debated exactly how users should control the choice for\n>> user tables, and which choice ought to be the default. I don't have a\n>> strong opinion about that either way, and am prepared to hear\n>> suggestions.\n\n> SET OIDGEN boolean for database-wide default policy.\n> CREATE TABLE WITH OIDS for individual tables? CREATE TABLE WITHOUT OIDS?\n\nSomething along that line, probably.\n\n> ?? Is this sort of thing addressed by any SQL standard (Thomas?)?\n\nOIDs aren't standard, so the standards are hardly likely to help us\ndecide how they should work.\n\nI think the really critical choice here is how much backwards\ncompatibility we want to keep. The most backwards-compatible way,\nobviously, is OIDs on by default and things work exactly as they\ndo now. But if we were willing to bend things a little then some\ninteresting possibilities open up. One thing I've been wondering\nabout is whether an explicit WITH OIDS spec ought to cause automatic\ncreation of a unique index on OID for that table. ISTM that any\napplication that wants OIDs at all would want such an index...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 16:30:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
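[Editor's note: a hedged sketch of the per-table syntax being floated in the message above. `WITH OIDS` / `WITHOUT OIDS` are proposals under discussion in this thread, not syntax that existed at the time; the table and column names are purely illustrative.]

```sql
-- Hypothetical sketch of the proposed per-table OID control.
CREATE TABLE invoices (
    invoice_no integer,
    amount     numeric
) WITH OIDS;        -- ask for OIDs; per Tom's suggestion this might
                    -- also create a unique index on oid automatically

CREATE TABLE scratch_data (
    payload text
) WITHOUT OIDS;     -- suppress OID generation to conserve the counter
```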
{
"msg_contents": "If OIDs are dropped a mechanism for retrieving the primary key of the\nlast insert would be greatly appreciated. Heck, it would be useful\nnow (rather than returning OID).\n\nI much prefer retrieving the sequence number after the insert than\nbefore insert where the insert uses it. Especially when trigger\nmuckary is involved.\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Lamar Owen\" <lamar.owen@wgcr.org>\nCc: \"Bruce Momjian\" <pgman@candle.pha.pa.us>; \"PostgreSQL-development\"\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, July 18, 2001 4:30 PM\nSubject: Re: OID wraparound (was Re: [HACKERS] pg_depend)\n\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > On Wednesday 18 July 2001 16:06, Tom Lane wrote:\n> >> It remains to be debated exactly how users should control the\nchoice for\n> >> user tables, and which choice ought to be the default. I don't\nhave a\n> >> strong opinion about that either way, and am prepared to hear\n> >> suggestions.\n>\n> > SET OIDGEN boolean for database-wide default policy.\n> > CREATE TABLE WITH OIDS for individual tables? CREATE TABLE\nWITHOUT OIDS?\n>\n> Something along that line, probably.\n>\n> > ?? Is this sort of thing addressed by any SQL standard (Thomas?)?\n>\n> OIDs aren't standard, so the standards are hardly likely to help us\n> decide how they should work.\n>\n> I think the really critical choice here is how much backwards\n> compatibility we want to keep. The most backwards-compatible way,\n> obviously, is OIDs on by default and things work exactly as they\n> do now. But if we were willing to bend things a little then some\n> interesting possibilities open up. One thing I've been wondering\n> about is whether an explicit WITH OIDS spec ought to cause automatic\n> creation of a unique index on OID for that table. ISTM that any\n> application that wants OIDs at all would want such an index...\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n>\n\n",
"msg_date": "Wed, 18 Jul 2001 16:46:30 -0400",
"msg_from": "\"Rod Taylor\" <rbt@barchord.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "> If OIDs are dropped a mechanism for retrieving the primary key of the\n> last insert would be greatly appreciated. Heck, it would be useful\n> now (rather than returning OID).\n> \n> I much prefer retrieving the sequence number after the insert than\n> before insert where the insert uses it. Especially when trigger\n> muckary is involved.\n\nDoesn't currval() work for your needs?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 17:06:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
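[Editor's note: the currval() pattern Bruce is referring to can be sketched as below; table and sequence names are illustrative. currval() reports the value nextval() last returned in the *current session*, so concurrent sessions cannot interfere with it.]

```sql
-- Retrieve the generated key after an INSERT, without a race.
CREATE SEQUENCE widgets_id_seq;
CREATE TABLE widgets (
    id   integer DEFAULT nextval('widgets_id_seq'),
    name text
);

INSERT INTO widgets (name) VALUES ('sprocket');
SELECT currval('widgets_id_seq');   -- the id just assigned above,
                                    -- as seen by this session only
```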
{
"msg_contents": "Also, without OID's, how do you fix EXACT duplicate records that happen \nby accident? \n\nLER\n\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 7/18/01, 3:46:30 PM, Rod Taylor <rbt@barchord.com> wrote regarding Re: \nOID wraparound (was Re: [HACKERS] pg_depend) :\n\n\n> If OIDs are dropped a mechanism for retrieving the primary key of the\n> last insert would be greatly appreciated. Heck, it would be useful\n> now (rather than returning OID).\n\n> I much prefer retrieving the sequence number after the insert than\n> before insert where the insert uses it. Especially when trigger\n> muckary is involved.\n\n> --\n> Rod Taylor\n\n> Your eyes are weary from staring at the CRT. You feel sleepy. Notice\n> how restful it is to watch the cursor blink. Close your eyes. The\n> opinions stated above are yours. You cannot imagine why you ever felt\n> otherwise.\n\n> ----- Original Message -----\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> To: \"Lamar Owen\" <lamar.owen@wgcr.org>\n> Cc: \"Bruce Momjian\" <pgman@candle.pha.pa.us>; \"PostgreSQL-development\"\n> <pgsql-hackers@postgresql.org>\n> Sent: Wednesday, July 18, 2001 4:30 PM\n> Subject: Re: OID wraparound (was Re: [HACKERS] pg_depend)\n\n\n> > Lamar Owen <lamar.owen@wgcr.org> writes:\n> > > On Wednesday 18 July 2001 16:06, Tom Lane wrote:\n> > >> It remains to be debated exactly how users should control the\n> choice for\n> > >> user tables, and which choice ought to be the default. I don't\n> have a\n> > >> strong opinion about that either way, and am prepared to hear\n> > >> suggestions.\n> >\n> > > SET OIDGEN boolean for database-wide default policy.\n> > > CREATE TABLE WITH OIDS for individual tables? CREATE TABLE\n> WITHOUT OIDS?\n> >\n> > Something along that line, probably.\n> >\n> > > ?? Is this sort of thing addressed by any SQL standard (Thomas?)?\n> >\n> > OIDs aren't standard, so the standards are hardly likely to help us\n> > decide how they should work.\n> >\n> > I think the really critical choice here is how much backwards\n> > compatibility we want to keep. The most backwards-compatible way,\n> > obviously, is OIDs on by default and things work exactly as they\n> > do now. But if we were willing to bend things a little then some\n> > interesting possibilities open up. One thing I've been wondering\n> > about is whether an explicit WITH OIDS spec ought to cause automatic\n> > creation of a unique index on OID for that table. ISTM that any\n> > application that wants OIDs at all would want such an index...\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to\n> majordomo@postgresql.org\n> >\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Wed, 18 Jul 2001 21:23:56 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "> Also, without OID's, how do you fix EXACT duplicate records that happen \n> by accident? \n\nHow about tid's? SELECT tid FROM tab1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 17:24:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Also, without OID's, how do you fix EXACT duplicate records that happen \n>> by accident? \n\n> How about tid's? SELECT tid FROM tab1.\n\n\"SELECT ctid\", actually, but that is still the fallback. (Actually\nit always was --- OIDs aren't necessarily unique either, Larry.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 17:32:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
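[Editor's note: a minimal sketch of the ctid-based cleanup described above. The table name and the tuple address '(0,2)' are purely illustrative, and ctid values are only stable until the row is updated or the table is vacuumed, so inspect and delete in the same session.]

```sql
-- Find exact duplicates along with their physical row addresses:
SELECT ctid, * FROM tab1;

-- Then delete one specific physical copy by its address:
DELETE FROM tab1 WHERE ctid = '(0,2)';
```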
{
"msg_contents": "\nDidn't know about that one, at least from the reading of the docs...\n\nThanks,\nYou answered the question. I knew OID's weren't unique, but they are \nlikely to be able to distinguish between 2 rows in the same table. \n\nMaybe ctid needs to be documented better? \n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 7/18/01, 4:32:28 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote regarding Re: \nOID wraparound (was Re: [HACKERS] pg_depend) :\n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Also, without OID's, how do you fix EXACT duplicate records that happen\n> >> by accident?\n\n> > How about tid's? SELECT tid FROM tab1.\n\n> \"SELECT ctid\", actually, but that is still the fallback. (Actually\n> it always was --- OIDs aren't necessarily unique either, Larry.)\n\n> regards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 21:35:59 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think it should be off on user tables by default, but kept on system\n> tables just for completeness.\n\nClearly, certain system tables *must* have OIDs --- pg_class, pg_type,\npg_operator, etc --- because we use those OIDs to refer to objects.\nThese are exactly the same tables that have unique indexes on OID.\n\nHowever, I don't see the point of consuming OIDs for entries in, say,\npg_listener. The notion that it must have OIDs simply because it's\na system table seems silly.\n\npg_attribute is on the edge --- are table columns objects in their own\nright, deserving of a separate OID, or not? So far I don't see any\nreally good reason why they should have one.\n\nSince the goal is to minimize OID consumption, not assigning OIDs to\npg_attribute entries seems like a good idea. I don't think this is\njust a marginal hack. ISTM the main source of OID consumption for an\nup-and-running system (if it has no large user tables with OIDs) will be\ncreation of temp tables. We can expend two OIDs per temp table\n(pg_class and pg_type), or we can expend N+9 for an N-column temp table\n(the seven system attributes plus the N user ones plus pg_class and\npg_type). That's *at least* a 5x difference in steady-state rate of OID\nconsumption. If that doesn't get your attention, it should.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 18:41:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Maybe ctid needs to be documented better? \n\nI think it's documented about as well as OID is, actually --- see\n\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/sql-syntax-columns.html\n\nwhich AFAIR is the only formal documentation of any of the system\ncolumns.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 18:50:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "currval() could work nicely, but that's an additional query. Currently\nOID (in php among others) can be retrieved along with the insert\nresponse which is instantly retrievable. This makes for a very quick\nmiddleware enforced foreign key entry in other databases.\n\nReturning the entire primary key of the last row inserted without\ndoing additional queries -- this is a known element which could be\ncached -- could be very useful in these situations.\n\nWith tables requiring multi-key elements we do a second select asking\nfor currval()s of the sequences.\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Rod Taylor\" <rbt@barchord.com>\nCc: \"Lamar Owen\" <lamar.owen@wgcr.org>; \"Tom Lane\"\n<tgl@sss.pgh.pa.us>; \"PostgreSQL-development\"\n<pgsql-hackers@postgresql.org>\nSent: Wednesday, July 18, 2001 5:06 PM\nSubject: Re: OID wraparound (was Re: [HACKERS] pg_depend)\n\n\n> > If OIDs are dropped a mechanism for retrieving the primary key of\nthe\n> > last insert would be greatly appreciated. Heck, it would be\nuseful\n> > now (rather than returning OID).\n> >\n> > I much prefer retrieving the sequence number after the insert than\n> > before insert where the insert uses it. Especially when trigger\n> > muckary is involved.\n>\n> Doesn't currval() work for your needs.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n>\n\n",
"msg_date": "Wed, 18 Jul 2001 19:02:45 -0400",
"msg_from": "\"Rod Taylor\" <rbt@barchord.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "On Wed, Jul 18, 2001 at 04:06:28PM -0400, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is the idea to make oid's optional, with them disabled by default on\n> > user tables?\n> \n> My thought is to make OID generation optional on a per-table basis, and\n> disable it on system tables that don't need unique OIDs. (OID would\n> read as NULL on any row for which an OID wasn't generated.)\n\nHow about generalizing this to user-definable system attributes? OID\nwould just be a special case: it's really just a system 'serial' isn't it?\n\nWe occasionally get calls for other system type attributes that would\nbe too expensive for every table, but would be useful for individual\ntables. One is creation_timestamp. Or this could be a route to bringing\ntimetravel back in: start_date stop_date, anyone?\n\n\n> \n> It remains to be debated exactly how users should control the choice for\n> user tables, and which choice ought to be the default. I don't have a\n> strong opinion about that either way, and am prepared to hear\n> suggestions.\n\nTwo ways come to mind: either special WITH options, at the end, or\na new per attribute SYSTEM keyword:\n\nCREATE TABLE <...> WITH OIDS\nCREATE TABLE <...> WITH TIMETRAVEL\nCREATE TABLE <...> WITH DATESTAMP\n\nCREATE TABLE foo (oid oid SYSTEM, \n created timestamp SYSTEM DEFAULT CURRENT_TIMESTAMP,\n\t\t my_id serial,\n\t\t my_field text);\n\nSo, basically it just creates the type and gives it a negative attnum.\nThe 'oid system' case would need to be treated specially, hooking the\noid up to the system wide counter.\n\nI'm not sure the special behavior of returning NULL for oid on a table\nwithout one is going to be useful: any client code that expects everything\nto have an oid is unlikely to handle NULL better than an error. In fact,\nin combination with the MS-Access compatibility hack of '= NULL' as\n'IS NULL', I see a potential great loss of data:\n\nSELECT oid,* from some_table;\n\n<display to user for editing>\n\nUPDATE some_table set field1=$field1, field2=$field2, <...> WHERE oid = $oid;\n\nif $oid is NULL ... There goes the entire table.\n\nRoss\n",
"msg_date": "Wed, 18 Jul 2001 18:45:06 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> On Wed, Jul 18, 2001 at 04:06:28PM -0400, Tom Lane wrote:\n>> My thought is to make OID generation optional on a per-table basis, and\n>> disable it on system tables that don't need unique OIDs. (OID would\n>> read as NULL on any row for which an OID wasn't generated.)\n\n> How about generalizing this to user defineable system attributes? OID\n> would just be a special case: it's really just a system 'serial' isn't it?\n\nHmm. Of the existing system attributes, OID is the only one that's\nconceivably optional --- ctid,xmin,xmax,cmin,cmax are essential to\nthe functioning of the system. (tableoid doesn't count here, since\nit's a \"virtual\" attribute that doesn't occupy any storage space on\ndisk, and thus making it optional wouldn't buy anything.) So there's\nno gain to be seen in that direction.\n\nIn the other direction, I have no desire to buy into adding creation\ntimestamp or anything else in this go-round. Maybe sometime in the\nfuture.\n\nBTW, I'm not intending to change the on-disk format of tuple headers;\nif no OID is assigned to a row, the OID field will still be there,\nit'll just be 0. Given that it's only four bytes, it's probably not\nworth dealing with a variable header format to suppress the space usage.\n(On machines where MAXALIGN is 8 bytes, there likely wouldn't be any\nsavings anyway.)\n\nI wouldn't much care for dealing with a variable tuple header format to\nsupport creation timestamp either, and that leads to the conclusion that\nit's just going to be a user field anyway. People who need it can do it\nwith a trigger ...\n\n\n> I'm not sure the special behavior of returning NULL for oid on a table\n> without one is going to be useful: any client code that expects everything\n> to have an oid is unlikely to handle NULL better than an error.\n\nWell, I can see three possible choices: return NULL, return zero, or\ndon't create an OID entry in pg_attribute at all for such a table\n(I *think* that would be sufficient to prevent people from accessing\nthe OID column, but am not sure). Of these I'd think the first is\nleast likely to break stuff. However, you might be right that breaking\nstuff is preferable to the possibility of an app that thinks it knows\nwhat it's doing causing major data lossage because it doesn't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 20:13:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
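[Editor's note: Tom's "do it with a trigger" remark can be illustrated with a plain user column; a column default covers the common case, and a trigger is only needed if you also want to forbid later changes to the value. Names are illustrative.]

```sql
-- A user-level creation timestamp, no system-attribute support needed:
CREATE TABLE docs (
    body       text,
    created_at timestamp DEFAULT now()   -- stamped once at INSERT time
);
-- A trigger would additionally be required to keep UPDATEs from
-- overwriting created_at; the DEFAULT alone only covers INSERT.
```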
{
"msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\nSubject: OID wraparound (was Re: [HACKERS] pg_depend)\nDate: Wed, 18 Jul 2001 13:52:45 -0400\nMessage-ID: <6335.995478765@sss.pgh.pa.us>\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yikes, I am not sure we are ready to make oids optional.\n> \n> We've discussed it enough, it's time to do it. I have an ulterior plan\n> here: I want 7.2 not to have any limitations that prevent it from being\n> used in a true 24x7, up-forever scenario. VACUUM lockouts are fixed\n> now, or nearly.\n\nWhat about pg_log? It will easily become a huge file. Currently the\nonly solution is re-installing whole database, that is apparently\nunacceptable for very big installation like 1TB.\n\n> The other stumbling blocks for continuous runs are OID\n> wraparound and XID wraparound. We've got unique indexes on OIDs for all\n> system catalogs that need them (we were short a couple as of 7.1, btw),\n> but OID wrap is still likely to lead to unwanted \"duplicate key\"\n> failures. So we still need a way to reduce the system's appetite for\n> OIDs. In a configuration where OIDs are used only where *necessary*,\n> it'd be a long time till wrap. I also intend to do something about XID\n> wrap next month...\n\nSo are we going to remove OID? I see following in the SQL99 draft (not\nsure it actually becomes a part of the SQL99 standard, though). Can we\nimplement the \"Object identifier\" without the current oid mechanism?\n\n---------------------------------------------------------------------\n 4.10 Object identifier\n\n An object identifier OID is a value generated when an object is\n created, to give that object an immutable identity. It is unique in\n the known universe of objects that are instances of abstract data\n types, and is conceptually separate from the value, or state, of\n the instance.\n\n The object identifier type is described by an object identifier\n type descriptor. An object identifier type descriptor contains:\n\n - an indication that this is an object identifier type; and\n\n - the name of the abstract data type within which the object\n identifier type is used.\n\n The object identifier type is only used to define the OID pseudo-\n column implicitly defined in object ADTs within an ADT definition.\n\n ___________________________________________________________________\n\n An OID literal exists for an object identifier type only if the\n associated abstract data type was defined WITH OID VISIBLE. The OID\n value is materialized as a character string with an implementation-\n defined length and character set SQL_TEXT.\n\n---------------------------------------------------------------------\n\n>> Will we have cheap 64-bit oids by the time oid wraparound becomes an\n>> issue?\n>\n>No, we won't, because OID wrap is an issue already for any long-uptime\n>installation. (64-bit XIDs are not a real practical answer either,\n>btw.)\n\nWhat's wrong with 64-bit oids (except extra 4bytes)?\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 19 Jul 2001 10:06:50 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> What about pg_log? It will easily become a huge file. Currently the\n> only solution is re-installing whole database, that is apparently\n> unacceptable for very big installation like 1TB.\n\nThat's part of the XID wraparound issue, which is a separate\ndiscussion... but yes, I want to do something about that for 7.2 also.\n\n> So are we going to remove OID?\n\nNo, only make it optional for user tables.\n\n> I see following in the SQL99 draft (not\n> sure it actually becomes a part of the SQL99 standard, though). Can we\n> implement the \"Object identifier\" without the current oid mechanism?\n\nAs near as I can tell, SQL99's idea of OIDs has little to do with ours\nanyway. Note that they want to assign an OID to an \"instance of an\nabstract data type\". Thus, if you created a table with several columns\neach of which is one or another kind of ADT, then each column value\nwould contain an associated OID --- the OID is assigned to each value,\nnot to table rows.\n\nMy suspicion is that SQL99-style OIDs would be implemented as a separate\ncounter, and would be 8 bytes from the get-go.\n\n> What's wrong with 64-bit oids (except extra 4bytes)?\n\nPortability, mostly. I'm not ready to tell platforms without 'long\nlong' that we don't support them at all anymore. If they don't have\nint8, or someday they don't have SQL99 OIDs, that's one thing, but\nzero functionality is something else.\n\nI'm also somewhat concerned about the speed price of widening Datum to\n8 bytes on machines where that's not a well-supported datatype --- note\nthat we'll pay for that almost everywhere, not only in Oid\nmanipulations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 21:23:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": ">> What's wrong with 64-bit oids (except extra 4bytes)?\n\n> Portability, mostly.\n\nOh, there's one other small problem: breaking the on-the-wire protocol.\nWe send OIDs as column datatype identifiers, so an 8-byte-OID backend\nwould not interoperate with clients that didn't also think OID is 8\nbytes. Aside from client/server compatibility issues, that raises the\nportability ante a good deal --- not only your server machine has to\nhave 'long long' support, but so do all your application environments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 21:29:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Bill Studenmund wrote:\n> \n> On Wed, 18 Jul 2001, Hiroshi Inoue wrote:\n> \n> > Oops I made a mistake.\n> > Reference name is needed not an object name,\n> > i.e\n> > object relid\n> > object oid\n> > relerence relid\n> > reference oid\n> > reference name\n> >\n> > create table a (...);\n> > create view view_a as select .. from a;\n> >\n> > Then we have an pg_depend entry e.g.\n> >\n> > pg_class_relid\n> > oid of the view_a\n> > pg_class_relid\n> > oid of the table a\n> > 'a' the name of the table\n> >\n> > and so on.\n> >\n> > drop table a; (unadorned drop).\n> >\n> > Then the above entry would be changed to\n> >\n> > pg_class_relid(unchanged)\n> > oid of the view_s(unchagned)\n> > pg_class_relid(unchanged)\n> > InvalidOid\n> > 'a' the name of the table(unchanged)\n> >\n> > create table a (...);\n> >\n> > Then the pg_depend entry would be\n> >\n> > pg_class_relid(unchanged)\n> > oid of the view_s(unchagned)\n> > pg_class_relid(unchanged)\n> > the oid of the new table a\n> > 'a' the name of the table(unchanged)\n> \n> This step I disagree with. Well, I disagree with the automated aspect of\n> the update. How does postgres know that the new table a is sufficiently\n> like the old table that it should be used? A way the DBA could say, \"yeah,\n> restablish that,\" would be fine.\n> \n\nYou could DROP a table with CASCADE or RESTRICT keyword if\nyou hate the behavior.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 19 Jul 2001 11:19:04 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Thursday 19 July 2001 06:08, you wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> I think it should be off on user tables by default, but kept on system\n> tables just for completeness. It could be added at table creation time\n> or from ALTER TABLEL ADD. It seems we just use them too much for system\n> stuff. pg_description is just one example.\n\nand what difference should it make, to have a few extra hundred or thousand \nOIDs used by system tables, when I insert daily some ten thousand records \neach using an OID for itself?\n\nWhy not make OIDs 64 bit? Might slow down a little on legacy hardware, but in \na couple of years we'll all run 64 bit hardware anyway.\n\nI believe that just using 64 bit would require the least changes to Postgres. \nNow, why would that look that obvious to me and yet I saw no mentioing of \nthis in the recent postings. Surely it has been discussed before, so which is \nthe point I miss or don't understand?\n\nI would need 64 bit sequences anyway, as it is predictable that our table for \npathology results will run out of unique IDs in a couple of years.\n\nHorst \n",
"msg_date": "Thu, 19 Jul 2001 13:03:28 +1000",
"msg_from": "Horst Herb <horst@hherb.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": ">>>Bruce Momjian said:\n[...]\n > > No, we won't, because OID wrap is an issue already for any long-uptime\n > > installation. (64-bit XIDs are not a real practical answer either,\n > > btw.)\n > \n > Have we had a wraparound yet?\n\nJust for the record, I had an OID overflow on production database (most middleware crashed mysteriously but no severe data loss) about a month ago. This was on 7.0.2 which probably had some bug ... preventing real wrap to happen. No new allocations (INSERTs that used autoincrementing sequences) were possible in most tables.\n\nAnyway, I had to dump/restore the database - several hours downtime. The database is not very big in size (around 10 GB in the data directory), but contains many objects (logs) and many objects are inserted/deleted from the database - in my opinion at not very high rate. Many tables are also created/dropped during processing.\n\nWhat is worrying is that this database lived about half a year only...\n\nIn my opinion, making OIDs optional would help things very much. In my case, I don't need OIDs for log databases. Perhaps it would additionally help if OIDs are separately increasing for each database - not single counter for the entire PostgreSQL installation.\n\nRegards,\nDaniel\n\n",
"msg_date": "Thu, 19 Jul 2001 15:30:24 +0300",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "At 12:37 18/07/01 -0400, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> At 11:38 18/07/01 -0400, Tom Lane wrote:\n>>> I'd just make the dependency be from view_a to a and keep things\n>>> simple. What's so wrong with recompiling the view for *every* change\n>>> of the underlying table?\n>\n>> Not a problem for views, but when you get to constraints on large tables,\n>> re-evaluating all the constraints unnecessarily could be a nightmare, and\n>> especially frustrating when you just dropped an irrelevant attr.\n>\n>Huh? You seem to be thinking that we'd need to re-check the constraint\n>at each row of the table, but I don't see why we'd need to. I was just\n>envisioning re-parsing the constraint source text.\n\nI'm paranoid, but there could be a case for doing so, especially if we\nallow CHAR(n) to become CHAR(m) where m < n. Or any similar data-affecting\nfield change.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 19 Jul 2001 23:07:15 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend "
},
{
"msg_contents": "On Thu, 19 Jul 2001, Hiroshi Inoue wrote:\n\n> > This step I disagree with. Well, I disagree with the automated aspect\nof\n> > the update. How does postgres know that the new table a is sufficiently\n> > like the old table that it should be used? A way the DBA could say, \"yeah,\n> > restablish that,\" would be fine.\n> > \n> \n> You could DROP a table with CASCADE or RESTRICT keyword if\n> you hate the behavior.\n\nYou didn't answer the question. :-)\n\n\"How does postgres know that the new table a is sufficiently like the old\ntable that it should be used?\"\n\nBy making the reattachment automatic, you are saying that once we make an\nobject of a given name and make objects depend on it, we can never have\nanother object of the same name but different. Because PG is going to try\nto re-attach the dependants for you.\n\nThat's different than current behavior, and strikes me as the system being\noverly helpful (a class of behavior I personally find very annoying).\n\nPlease understand I like the idea of being ABLE to do this reattachment. I\ncan see a lot of places where it would be VERY useful. My vote though is\nto just make reattachment a seperate step or something you flag, like in\nthe CREATE TABLE, say attach me to everything wanting a table of this\nname. Make it something you have to indicate you want.\n\nTake care,\n\nBill\n\n",
"msg_date": "Thu, 19 Jul 2001 10:29:48 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Bill Studenmund wrote:\n> \n> On Thu, 19 Jul 2001, Hiroshi Inoue wrote:\n> \n> > > This step I disagree with. Well, I disagree with the automated aspect\n> of\n> > > the update. How does postgres know that the new table a is sufficiently\n> > > like the old table that it should be used? A way the DBA could say, \"yeah,\n> > > restablish that,\" would be fine.\n> > >\n> >\n> > You could DROP a table with CASCADE or RESTRICT keyword if\n> > you hate the behavior.\n> \n> You didn't answer the question. :-)\n> \n> \"How does postgres know that the new table a is sufficiently like the old\n> table that it should be used?\"\n> \n> By making the reattachment automatic, you are saying that once we make an\n> object of a given name and make objects depend on it, we can never have\n> another object of the same name but different. Because PG is going to try\n> to re-attach the dependants for you.\n> \n> That's different than current behavior, and strikes me as the system being\n> overly helpful (a class of behavior I personally find very annoying).\n> \n> Please understand I like the idea of being ABLE to do this reattachment. I\n> can see a lot of places where it would be VERY useful.\n\nIt doesn't seem preferable that the default(unadorned) DROP\nallows reattachement after the DROP. The default(unadorned) DROP\nshould be the same as DROP RESTRICT(or CASCADE because the current\nbehabior is halfway CASCADE?). How about adding another keyword \nto allow reattachment after the DROP ?\nAll depende(a?)nt objects must be re-complied after the\nreattachment and the re-compilation would fail if the new table\nisn't sufficiently like the old one.\n\nAnyway my opinion seems in a minority as usual.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 20 Jul 2001 08:45:05 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Fri, 20 Jul 2001, Hiroshi Inoue wrote:\n\n> Bill Studenmund wrote:\n> > \n> > \"How does postgres know that the new table a is sufficiently like the old\n> > table that it should be used?\"\n> > \n> > By making the reattachment automatic, you are saying that once we make an\n> > object of a given name and make objects depend on it, we can never have\n> > another object of the same name but different. Because PG is going to try\n> > to re-attach the dependants for you.\n> > \n> > That's different than current behavior, and strikes me as the system being\n> > overly helpful (a class of behavior I personally find very annoying).\n> > \n> > Please understand I like the idea of being ABLE to do this reattachment. I\n> > can see a lot of places where it would be VERY useful.\n> \n> It doesn't seem preferable that the default(unadorned) DROP\n> allows reattachement after the DROP. The default(unadorned) DROP\n> should be the same as DROP RESTRICT(or CASCADE because the current\n> behabior is halfway CASCADE?). How about adding another keyword \n> to allow reattachment after the DROP ?\n\nHmmm... My preference is for the subsequent CREATE to indicate if reattach\nshould happen or not. But I'm not sure if that would leave dangling depend\nentries around.\n\n> All depende(a?)nt objects must be re-complied after the\n> reattachment and the re-compilation would fail if the new table\n> isn't sufficiently like the old one.\n> \n> Anyway my opinion seems in a minority as usual.\n\nOnly partly. I think everyone likes the idea of being able to reattach\nlater, an idea you came up with. :-)\n\nTake care,\n\nBill\n\n",
"msg_date": "Thu, 19 Jul 2001 17:07:31 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@zembu.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "On Fri, Jul 20, 2001 at 08:45:05AM +0900, Hiroshi Inoue wrote:\n> \n> It doesn't seem preferable that the default(unadorned) DROP\n> allows reattachement after the DROP. The default(unadorned) DROP\n> should be the same as DROP RESTRICT(or CASCADE because the current\n> behabior is halfway CASCADE?). How about adding another keyword \n> to allow reattachment after the DROP ?\n> All depende(a?)nt objects must be re-complied after the\n> reattachment and the re-compilation would fail if the new table\n> isn't sufficiently like the old one.\n> \n> Anyway my opinion seems in a minority as usual.\n> \n\nHow about making that functionality happen with ALTER <FOO> REPLACE\nas Tom suggested? If I'm wanting to change an underlying table, how\nlikely is it that I don't have the replacement ready right now?\n\nSo, instead of:\n\nDROP <FOO> <name> WITH INTENT TO REPLACE\n\nCREATE <FOO> <name> <body>\n\nit's just:\n\nALTER <FOO> <name> REPLACE <body>\n\nAll nice and transactional: if the attempt to reattach one of the \nsubordinate objects fails, you roll back to the old one.\n\nRoss\n",
"msg_date": "Thu, 19 Jul 2001 19:13:36 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg_depend"
},
{
"msg_contents": "Tom Lane wrote:\n\n> >> What's wrong with 64-bit oids (except extra 4bytes)?\n> \n> > Portability, mostly.\n> \n> Oh, there's one other small problem: breaking the on-the-wire protocol.\n\nSo 8-byte-OID is for PostgreSQL 8? :-)\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Fri, 20 Jul 2001 11:10:32 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
}
]
[
{
"msg_contents": "Hi,\n\nCan anyone confirm whether I can do something like the following in a\nPL/pgsql trigger ( on table tab_a )\n\n\tSELECT INTO tab_b * FROM OLD;\n\nor do I have to do -\n\n\tINSERT INTO tab_b SELECT * FROM tab_a WHERE id=OLD.id;\n\nAll that I want to do is insert the records from OLD into a 2nd table.\n\nAny advice would be gratefully received.\n\nBernie Warner\nJFDI Technology Ltd.\n\n",
"msg_date": "Mon, 16 Jul 2001 11:28:30 +0100",
"msg_from": "Bernie Warner <Bernie_W@jfdi-tech.com>",
"msg_from_op": true,
"msg_subject": "OLD in Trigger"
}
]
[
{
"msg_contents": "Hi,\n\nCan anyone confirm whether I can do something like the following in a\nPL/pgsql trigger ( on table tab_a )\n\n\tSELECT INTO tab_b * FROM OLD;\n\nor do I have to do -\n\n\tINSERT INTO tab_b SELECT * FROM tab_a WHERE id=OLD.id;\n\nAll that I want to do is insert the records from OLD into a 2nd table.\n\nAny advice would be gratefully received.\n\nBernie Warner\nJFDI Technology Ltd.\n\n",
"msg_date": "Mon, 16 Jul 2001 13:07:26 +0100",
"msg_from": "Bernie Warner <Bernie_W@jfdi-tech.com>",
"msg_from_op": true,
"msg_subject": "OLD in Triggers"
}
]
[
{
"msg_contents": "in testing CVS tip(sort of), I found that you need -lcurses with\n-ledit on NetBSD 1.5.1. \n\n_tputs in undefined otherwise. \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Mon, 16 Jul 2001 07:36:14 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "NetBSD 1.5.1(HP300)"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> in testing CVS tip(sort of), I found that you need -lcurses with\n> -ledit on NetBSD 1.5.1.\n>\n> _tputs in undefined otherwise.\n\nThis is a known problem, but it hasn't been satisfactorily explained so\nfar. The configure test links a program against -ledit and it seems to\nsucceed without -lcurses.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 16 Jul 2001 21:44:17 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: NetBSD 1.5.1(HP300)"
},
{
"msg_contents": "\nWhen it trys to run the following:\nconfigure:7174: gcc -o conftest -O2 -pipe -L/usr/local/lib conftest.c \n-lz -lcrypt -lresolv -lcompat -lm -lutil -ledit 1>&5\nconfigure: failed program was:\n#line 7170 \"configure\"\n#include \"confdefs.h\"\nint main() { return 0; }\n$ \n\nthat program dies:\n$ cat conftest.c\n#include \"confdefs.h\"\nint main() { return 0; }\n\n$ /lib conftest.c -lz -lcrypt -lresolv -lcompat -lm -lutil -ledit \n <\n$ ./conftest\n/usr/libexec/ld.so: Undefined symbol \"_tputs\" in \nconftest:/usr/lib/libedit.so.2.3\n\n$ \n\nI'm not sure WHY configure doesn't add -lcurses, but it needs to.\n\nI can give you a shell account on this box (WARNING: it's slow, it's a 25 \nMhz 68040) if you want. \n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 7/16/01, 2:44:17 PM, Peter Eisentraut <peter_e@gmx.net> wrote regarding \nRe: [HACKERS] NetBSD 1.5.1(HP300):\n\n\n> Larry Rosenman writes:\n\n> > in testing CVS tip(sort of), I found that you need -lcurses with\n> > -ledit on NetBSD 1.5.1.\n> >\n> > _tputs in undefined otherwise.\n\n> This is a known problem, but it hasn't been satisfactorily explained so\n> far. The configure test links a program against -ledit and it seems to\n> succeed without -lcurses.\n\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n",
"msg_date": "Mon, 16 Jul 2001 19:50:41 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: NetBSD 1.5.1(HP300)"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> When it trys to run the following:\n> configure:7174: gcc -o conftest -O2 -pipe -L/usr/local/lib conftest.c\n> -lz -lcrypt -lresolv -lcompat -lm -lutil -ledit 1>&5\n> configure: failed program was:\n> #line 7170 \"configure\"\n> #include \"confdefs.h\"\n> int main() { return 0; }\n> $\n>\n> that program dies:\n> $ cat conftest.c\n> #include \"confdefs.h\"\n> int main() { return 0; }\n>\n> $ /lib conftest.c -lz -lcrypt -lresolv -lcompat -lm -lutil -ledit\n> <\n> $ ./conftest\n> /usr/libexec/ld.so: Undefined symbol \"_tputs\" in\n> conftest:/usr/lib/libedit.so.2.3\n\nYes, I've seen that before. The program links okay but does not execute\nbecause of an undefined symbol. I think that's a linker bug. Why would I\nneed a linker if it doesn't make sure the executable has fully resolved\nsymbols? This can be observed at least with NetBSD -ledit and OpenBSD\n-lreadline.\n\nHere's how I would expect it to work:\n\nconfigure:3249: checking for readline\nconfigure:3271: gcc -o conftest -O2 -g conftest.c -lreadline 1>&5\n/usr/lib/gcc-lib/i386-redhat-linux/2.96/../../../libreadline.so: undefined reference to `tgetnum'\n[snip]\ncollect2: ld returned 1 exit status\nconfigure: failed program was:\n#line 3260 \"configure\"\n#include \"confdefs.h\"\n/* Override any gcc2 internal prototype to avoid an error. */\n/* We use char because int might match the return type of a gcc2\n builtin and then its argument prototype would still apply. */\nchar readline();\n\nint main() {\nreadline()\n; return 0; }\n[snip]\nconfigure:3271: gcc -o conftest -O2 -g conftest.c -lreadline -ltermcap 1>&5\n[success]\n\nCan you take this to the OS developers?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 16 Jul 2001 22:40:31 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: NetBSD 1.5.1(HP300)"
},
{
"msg_contents": "\nReported to NetBSD as pr BIN/13486\n\nLER\n\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 7/16/01, 3:40:31 PM, Peter Eisentraut <peter_e@gmx.net> wrote regarding \nRe: [HACKERS] NetBSD 1.5.1(HP300):\n\n\n> Larry Rosenman writes:\n\n> > When it trys to run the following:\n> > configure:7174: gcc -o conftest -O2 -pipe -L/usr/local/lib conftest.c\n> > -lz -lcrypt -lresolv -lcompat -lm -lutil -ledit 1>&5\n> > configure: failed program was:\n> > #line 7170 \"configure\"\n> > #include \"confdefs.h\"\n> > int main() { return 0; }\n> > $\n> >\n> > that program dies:\n> > $ cat conftest.c\n> > #include \"confdefs.h\"\n> > int main() { return 0; }\n> >\n> > $ /lib conftest.c -lz -lcrypt -lresolv -lcompat -lm -lutil -ledit\n> > <\n> > $ ./conftest\n> > /usr/libexec/ld.so: Undefined symbol \"_tputs\" in\n> > conftest:/usr/lib/libedit.so.2.3\n\n> Yes, I've seen that before. The program links okay but does not execute\n> because of an undefined symbol. I think that's a linker bug. Why would \nI\n> need a linker if it doesn't make sure the executable has fully resolved\n> symbols? This can be observed at least with NetBSD -ledit and OpenBSD\n> -lreadline.\n\n> Here's how I would expect it to work:\n\n> configure:3249: checking for readline\n> configure:3271: gcc -o conftest -O2 -g conftest.c -lreadline 1>&5\n> /usr/lib/gcc-lib/i386-redhat-linux/2.96/../../../libreadline.so: \nundefined reference to `tgetnum'\n> [snip]\n> collect2: ld returned 1 exit status\n> configure: failed program was:\n> #line 3260 \"configure\"\n> #include \"confdefs.h\"\n> /* Override any gcc2 internal prototype to avoid an error. */\n> /* We use char because int might match the return type of a gcc2\n> builtin and then its argument prototype would still apply. 
*/\n> char readline();\n\n> int main() {\n> readline()\n> ; return 0; }\n> [snip]\n> configure:3271: gcc -o conftest -O2 -g conftest.c -lreadline \n-ltermcap 1>&5\n> [success]\n\n> Can you take this to the OS developers?\n\n> --\n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n",
"msg_date": "Mon, 16 Jul 2001 20:52:27 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: NetBSD 1.5.1(HP300)"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> in testing CVS tip(sort of), I found that you need -lcurses with\n> -ledit on NetBSD 1.5.1.\n>\n> _tputs in undefined otherwise.\n\nFixed in current.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 28 Aug 2001 17:03:46 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: NetBSD 1.5.1(HP300)"
},
{
"msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010828 09:59]:\n> Larry Rosenman writes:\n> \n> > in testing CVS tip(sort of), I found that you need -lcurses with\n> > -ledit on NetBSD 1.5.1.\n> >\n> > _tputs in undefined otherwise.\n> \n> Fixed in current.\nof NetBSD? \n\nLER\n\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n",
"msg_date": "Tue, 28 Aug 2001 13:19:48 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": true,
"msg_subject": "Re: NetBSD 1.5.1(HP300)"
},
{
"msg_contents": "Larry Rosenman writes:\n\n> * Peter Eisentraut <peter_e@gmx.net> [010828 09:59]:\n> > Larry Rosenman writes:\n> >\n> > > in testing CVS tip(sort of), I found that you need -lcurses with\n> > > -ledit on NetBSD 1.5.1.\n> > >\n> > > _tputs in undefined otherwise.\n> >\n> > Fixed in current.\n> of NetBSD?\n\nPostgreSQL\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Wed, 29 Aug 2001 23:09:46 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: NetBSD 1.5.1(HP300)"
}
]
[
{
"msg_contents": "Tom,\n\nwe noticed you changed gist.c to handle NULLS. It seems there is\nproblem with your changes.\nin gist.c\n\n /* GIST indexes don't index nulls, see notes in gistinsert */\n if (! IndexTupleHasNulls(itup))\n {\n /*\n\n....... skipped ....\n\n /*\n * Currently, GIST indexes do not support indexing NULLs; considerable\n * infrastructure work would have to be done to do anything reasonable\n * with a NULL.\n */\n if (IndexTupleHasNulls(itup))\n {\n\n\nWhile it's ok for single key but for multikey indexes removing tuple with NULL\nlooks not right. Consider (a,b,c) where C is NULL. Your changes would\nremove tuple and it would be impossible to find (a,b) using this index.\nDid you think about this particular case ?\n\nI remind we have choosen to leave NULLs because vacuum complained about\ndifferent number of tuples in heap and index and all our opclasses work\ncorrectly with NULLs. Did you change vacuum code so it will not complain ?\n\nIn principle, if you insist on your approach, we propose to extend it\nto multikey case by removing tuple if and only if leading keys are NULLs\n\nWhat do you think ?\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 16 Jul 2001 17:06:44 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "handling NULLS in GiST"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> we noticed you changed gist.c to handle NULLS. It seems there is\n> problem with your changes.\n\nI would like to see GIST upgraded to handle nulls, but at the moment\nit's not null-safe. Try a few null entries, watch it core dump, if you\ndon't have that patch in place. (At least it does with the contrib/cube\nopclass, didn't bother with any additional experiments.)\n\nAt the very least you'd need to replace all the uses of\nDirectFunctionCallN to invoke the opclass support routines\nwith code that is capable of detecting and signaling nulls.\nThat would allow non-null-safe opclass routines to be protected\nby marking them \"strict\".\n\nBut that's a micro-issue. The macro-issue is what you intend to\ndo with NULLs in the first place. I understand what btree does\nwith them, but what's the corresponding concept for GIST?\n\n> I remind we have choosen to leave NULLs because vacuum complained about\n> different number of tuples in heap and index and all our opclasses work\n> correctly with NULLs. Did you change vacuum code so it will not complain ?\n\nYes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2001 10:51:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: handling NULLS in GiST "
},
{
"msg_contents": "On Mon, 16 Jul 2001, Tom Lane wrote:\n\n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > we noticed you changed gist.c to handle NULLS. It seems there is\n> > problem with your changes.\n>\n> I would like to see GIST upgraded to handle nulls, but at the moment\n> it's not null-safe. Try a few null entries, watch it core dump, if you\n> don't have that patch in place. (At least it does with the contrib/cube\n> opclass, didn't bother with any additional experiments.)\n\nWe also would like to handle NULLs. All our codes handle NULLs properly.\ncontrib/cube is just a bad example :-) In any case if you give an\ninterface to developer it's his responsibility to be aware of possible\nerrors. Developer has always a possibility to divide by zero.\nWe could change contrib/cube to be null-safe.\nAlso multikey split algorithm uses NULL to mark secondary (...) keys\nin tuple for optimization of page splitting and we don't like idea to\nrewrite algorithm. GiST interface functions (split, union -\nuser-level functions) have a pointer to operand vector as argument.\nOperand vector can't be a NULL, but some operands in the vector could\nbe NULL.\n\n\n>\n> At the very least you'd need to replace all the uses of\n> DirectFunctionCallN to invoke the opclass support routines\n> with code that is capable of detecting and signaling nulls.\n> That would allow non-null-safe opclass routines to be protected\n> by marking them \"strict\".\n\nvaguely understand :-) DirectFunctionCallN are already interface to\nopclass support routines. Do we need to build on yet another interface\njust to mark bad users routines ? What should we do with that 'strict'\nmark ?\n\n\n>\n> But that's a micro-issue.\n\nagreed, but I'd like to require people write null-safe contribs\nand remove your stopper.\n\nThe macro-issue is what you intend to\n> do with NULLs in the first place. 
I understand what btree does\n> with them, but what's the corresponding concept for GIST?\n\nif you mean first NULL keys in multikey GiST than just remove this tuple\nfrom index because it's informativeless. btw, what btree does ?\n\n\n\n> \t\t\tregards, tom lane\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n",
"msg_date": "Mon, 16 Jul 2001 19:15:27 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: handling NULLS in GiST "
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> contrib/cube is just a bad example :-) In any case if you give an\n> interface to developer it's his responsibility to be aware of possible\n> errors. Developer has always a possibility to divide by zero.\n> We could change contrib/cube to be null-safe.\n\nMy point is that as it stands, GIST is not honoring the defined\ninterface for nulls. AFAICT you are relying on the called opclass\nroutines to test for null pointers, which is not clean. (Among other\nthings, it means that you cannot work with pass-by-value datatypes.)\nThere has to be a separate isNull flag for each value.\n\ncontrib/cube very possibly is broken, but that doesn't mean that the\ncore GIST code isn't at fault too.\n\n> DirectFunctionCallN are already interface to\n> opclass support routines.\n\nBut the FunctionCallN routines do not allow passing or returning NULL.\nThat was a deliberate choice to preserve notational simplicity, because\nmost of the places where they needed to be used didn't have to worry\nabout NULLs. You do, so you can't use those routines.\n\n>> The macro-issue is what you intend to\n>> do with NULLs in the first place. I understand what btree does\n>> with them, but what's the corresponding concept for GIST?\n\n> if you mean first NULL keys in multikey GiST than just remove this tuple\n> from index because it's informativeless. btw, what btree does ?\n\nIf you remove the tuple from the index then you're not storing NULLs.\nYou need to pick a rule that defines where null rows will get placed\nin the index. For btree, the rule is \"nulls sort after all non-nulls\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2001 12:28:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: handling NULLS in GiST "
}
] |
[
{
"msg_contents": "\n> we noticed you changed gist.c to handle NULLS. It seems there is\n> problem with your changes.\n....\n> I remind we have choosen to leave NULLs because vacuum complained about\n> different number of tuples in heap and index and all our opclasses work\n> correctly with NULLs. Did you change vacuum code so it will not complain ?\n\nIf the opclasses handle NULLs, then they should be in the index.\nLeaving them out would imho be better handled with the partial index code.\n\nAndreas\n",
"msg_date": "Mon, 16 Jul 2001 16:43:10 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: handling NULLS in GiST"
}
] |
[
{
"msg_contents": "Hi,\n I'm not sure if this is the right address to pass comments to PostGreSQL\nteam, but here goes.\n I'm new to PostgreSQL and so far it looks quite interesting as an open\nsource DBMS. There are a few quirks (i.e. can't alter field data types?\nCan't drop fields? ,etc), but I suppose I can live with them . Been using it\nfor some web-based development and I must say I am pleasantly surprised to\nsee how well it works for the last few weeks.\nKeep it up and cheers.\n\nRegards,\n\nJohn Huong\n\nP.S: Looking forward to the replication features in the to-do list ;).\n\n\n",
"msg_date": "Tue, 17 Jul 2001 00:38:03 +0800",
"msg_from": "\"Huong Chia Hiang\" <huongch@bigfoot.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL : First impressions"
}
] |
[
{
"msg_contents": "Running:\n\n ALTER TABLE table ADD COLUMN column SERIAL;\n\n Defines a column as int4 but does not create the sequence or attempt\nto set the default value.\n\nNot a big deal, but I was surprised when the column values were null.\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.",
"msg_date": "Mon, 16 Jul 2001 12:48:11 -0400",
"msg_from": "\"Rod Taylor\" <rbt@barchord.com>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@barchord.com> writes:\n> Running:\n> ALTER TABLE table ADD COLUMN column SERIAL;\n> Defines a column as int4 but does not create the sequence or attempt\n> to set the default value.\n\nYeah ... SERIAL is implemented as a hack in the parsing of CREATE\nTABLE, but there's no corresponding hack in ALTER TABLE. A bug,\nno doubt about it, but I don't much like the obvious fix of duplicating\nthe hack in two places. Isn't there a cleaner way to deal with this\n\"data type\"?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2001 15:02:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results "
},
{
"msg_contents": "> \"Rod Taylor\" <rbt@barchord.com> writes:\n> > Running:\n> > ALTER TABLE table ADD COLUMN column SERIAL;\n> > Defines a column as int4 but does not create the sequence or attempt\n> > to set the default value.\n> \n> Yeah ... SERIAL is implemented as a hack in the parsing of CREATE\n> TABLE, but there's no corresponding hack in ALTER TABLE. A bug,\n> no doubt about it, but I don't much like the obvious fix of duplicating\n> the hack in two places. Isn't there a cleaner way to deal with this\n> \"data type\"?\n\nAdded to TODO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Jul 2001 17:18:13 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Rod Taylor\" <rbt@barchord.com> writes:\n> > Running:\n> > ALTER TABLE table ADD COLUMN column SERIAL;\n> > Defines a column as int4 but does not create the sequence or attempt\n> > to set the default value.\n> \n> Yeah ... SERIAL is implemented as a hack in the parsing of CREATE\n> TABLE, but there's no corresponding hack in ALTER TABLE. A bug,\n> no doubt about it, but I don't much like the obvious fix of duplicating\n> the hack in two places. Isn't there a cleaner way to deal with this\n> \"data type\"?\n> \n\n*ALTER TABLE* isn't as easy as *CREATE TABLE*.\nIt has another problem because it hasn't implemented\n*DEFAULT* yet.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 17 Jul 2001 09:15:58 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "> *ALTER TABLE* isn't as easy as *CREATE TABLE*.\n> It has another problem because it hasn't implemented\n> *DEFAULT* yet.\n\nJust out of interest, is there a special reason it's difficult to implement\nthe DEFAULT feature of alter table add column?\n\nChris\n\n",
"msg_date": "Tue, 17 Jul 2001 10:01:31 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > *ALTER TABLE* isn't as easy as *CREATE TABLE*.\n> > It has another problem because it hasn't implemented\n> > *DEFAULT* yet.\n> \n> Just out of interest, is there a special reason it's difficult to implement\n> the DEFAULT feature of alter table add column?\n> \n\nWithout *DEFAULT* we don't have to touch the table file\nat all. With *DEFAULT* we have to fill the new column\nwith the *DEFAULT* value for all existent rows.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 17 Jul 2001 11:24:08 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Christopher Kings-Lynne wrote:\n>> Just out of interest, is there a special reason it's difficult to implement\n>> the DEFAULT feature of alter table add column?\n\n> Without *DEFAULT* we don't have to touch the table file\n> at all. With *DEFAULT* we have to fill the new column\n> with the *DEFAULT* value for all existent rows.\n\nDo we? We could simply declare by fiat that the behavior of ALTER ADD\nCOLUMN is to fill the new column with nulls. Let the user do an UPDATE\nto fill the column with a default, if he wants to. After all, I'd not\nexpect that an ALTER that adds a DEFAULT spec to an existing column\nwould go through and replace existing NULL entries for me.\n\nThis is a little trickier if one wants to make a NOT NULL column,\nhowever. Seems the standard technique for that could be\n\n\tALTER tab ADD COLUMN newcol without the not null spec;\n\tUPDATE tab SET newcol = something;\n\tALTER tab ALTER COLUMN newcol ADD CONSTRAINT NOT NULL;\n\nwhere the last command would verify that the column contains no nulls\nbefore setting the flag, just like ALTER TABLE ADD CONSTRAINT does now\n(but I think we don't have a variant for NULL/NOT NULL constraints).\n\nThis is slightly ugly, maybe, but it sure beats not having the feature\nat all. Besides, it seems to me there are cases where you don't really\n*want* the DEFAULT value to be used to fill the column, but something\nelse (or even want NULLs). Why should the system force an update of\nevery row in the table with a value that might not be what the user\nwants?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 11:04:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results "
},
{
"msg_contents": "Tom Lane writes:\n\n> Besides, it seems to me there are cases where you don't really\n> *want* the DEFAULT value to be used to fill the column, but something\n> else (or even want NULLs).\n\nThen you could use\n\nALTER TABLE t1 ADD COLUMN cn text;\nALTER TABLE t1 ALTER COLUMN cn SET DEFAULT 'what you really wanted';\n\nA subtle difference, but it's perfectly consistent. -- And it works\nalready.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Jul 2001 19:43:22 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected\n results"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Christopher Kings-Lynne wrote:\n> >> Just out of interest, is there a special reason it's difficult to implement\n> >> the DEFAULT feature of alter table add column?\n> \n> > Without *DEFAULT* we don't have to touch the table file\n> > at all. With *DEFAULT* we have to fill the new column\n> > with the *DEFAULT* value for all existent rows.\n> \n> Do we? We could simply declare by fiat that the behavior of ALTER ADD\n> COLUMN is to fill the new column with nulls. Let the user do an UPDATE\n> to fill the column with a default, if he wants to. \n\nI don't like to fill the column of the existent rows but\nit seems to be the spec.\n\n> After all, I'd not\n> expect that an ALTER that adds a DEFAULT spec to an existing column\n> would go through and replace existing NULL entries for me.\n> \n> This is a little trickier if one wants to make a NOT NULL column,\n> however. Seems the standard technique for that could be\n> \n> ALTER tab ADD COLUMN newcol without the not null spec;\n> UPDATE tab SET newcol = something;\n> ALTER tab ALTER COLUMN newcol ADD CONSTRAINT NOT NULL;\n> \n\nYes I love this also.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 18 Jul 2001 09:07:56 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Christopher Kings-Lynne wrote:\n> >> Just out of interest, is there a special reason it's difficult to implement\n> >> the DEFAULT feature of alter table add column?\n> \n> > Without *DEFAULT* we don't have to touch the table file\n> > at all. With *DEFAULT* we have to fill the new column\n> > with the *DEFAULT* value for all existent rows.\n> \n> Do we? We could simply declare by fiat that the behavior of ALTER ADD\n> COLUMN is to fill the new column with nulls. Let the user do an UPDATE\n> to fill the column with a default, if he wants to. After all, I'd not\n> expect that an ALTER that adds a DEFAULT spec to an existing column\n> would go through and replace existing NULL entries for me.\n> \n> This is a little trickier if one wants to make a NOT NULL column,\n> however. Seems the standard technique for that could be\n> \n> \tALTER tab ADD COLUMN newcol without the not null spec;\n> \tUPDATE tab SET newcol = something;\n> \tALTER tab ALTER COLUMN newcol ADD CONSTRAINT NOT NULL;\n> \n> where the last command would verify that the column contains no nulls\n> before setting the flag, just like ALTER TABLE ADD CONSTRAINT does now\n> (but I think we don't have a variant for NULL/NOT NULL constraints).\n> \n> This is slightly ugly, maybe, but it sure beats not having the feature\n> at all. Besides, it seems to me there are cases where you don't really\n> *want* the DEFAULT value to be used to fill the column, but something\n> else (or even want NULLs). Why should the system force an update of\n> every row in the table with a value that might not be what the user\n> wants?\n\nI am trying to find a way to get this information to users. I have\nmodified command.c to output a different error message:\n\ntest=> alter table x add column z int default 4;\nERROR: Adding columns with defaults is not implemented because it\n is unclear whether existing rows should have the DEFAULT value\n or NULL. 
Add the column, then use ALTER TABLE SET DEFAULT.\n You may then use UPDATE to give a non-NULL value to existing rows.\n\nHow does this sound? Peter, should I keep it for 7.3 so I don't mess up\nthe translations in 7.2?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 27 Nov 2001 20:35:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am trying to find a way to get this information to users. I have\n> modified command.c to output a different error message:\n\n> test=> alter table x add column z int default 4;\n> ERROR: Adding columns with defaults is not implemented because it\n> is unclear whether existing rows should have the DEFAULT value\n> or NULL. Add the column, then use ALTER TABLE SET DEFAULT.\n> You may then use UPDATE to give a non-NULL value to existing rows.\n\nKindly put the error message back as it was.\n\nIt's not \"unclear\" what the command should do; SQL92 is perfectly\nclear about it.\n\nI would also remind you that we've got quite a few sets of error message\ntranslations in place now. Gratuitous changes to message wording in the\nlast week of beta are *not* appropriate, because they break all the\ntranslations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Nov 2001 23:06:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am trying to find a way to get this information to users. I have\n> > modified command.c to output a different error message:\n\nI should have used different wording here. I meant I tested a\nmodification to command.c.\n\n> > test=> alter table x add column z int default 4;\n> > ERROR: Adding columns with defaults is not implemented because it\n> > is unclear whether existing rows should have the DEFAULT value\n> > or NULL. Add the column, then use ALTER TABLE SET DEFAULT.\n> > You may then use UPDATE to give a non-NULL value to existing rows.\n> \n> Kindly put the error message back as it was.\n> \n> It's not \"unclear\" what the command should do; SQL92 is perfectly\n> clear about it.\n> \n> I would also remind you that we've got quite a few sets of error message\n> translations in place now. Gratuitous changes to message wording in the\n> last week of beta are *not* appropriate, because they break all the\n> translations.\n\nIf you read a little further you would have seen:\n\n> How does this sound? Peter, should I keep it for 7.3 so I don't mess up\n> the translations in 7.2?\n\nI was not about to apply it. I need comments on how we should\ncommunicate this to the user. I have email from you from July saying:\n\n> > Without *DEFAULT* we don't have to touch the table file\n> > at all. With *DEFAULT* we have to fill the new column\n> > with the *DEFAULT* value for all existent rows.\n> \n> Do we? We could simply declare by fiat that the behavior of ALTER ADD\n> COLUMN is to fill the new column with nulls. Let the user do an UPDATE\n> to fill the column with a default, if he wants to. 
After all, I'd not\n> expect that an ALTER that adds a DEFAULT spec to an existing column\n> would go through and replace existing NULL entries for me.\n\nThen Hiroshi saying:\n\n> I don't like to fill the column of the existent rows but\n> it seems to be the spec.\n\nSo, are we now all agreed that we have to fill in the existing rows with\nthe default value? If so, I can document that in the TODO list and\ndiscard this patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 28 Nov 2001 00:27:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So, are we now all agreed that we have to fill in the existing rows with\n> the default value?\n\nI think there's no debate that that's what the spec says to do. There\nmight be some discontent in the ranks about whether to follow SQL92's\nmarching orders, however. Doubling the filesize of the table to apply\nchanges that the user might not even want seems a heavy penalty.\n\n> If so, I can document that in the TODO list and\n> discard this patch.\n\nThis is certainly a TODO or TOARGUEABOUT item, not something to be\npatched on short notice.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Nov 2001 00:36:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > So, are we now all agreed that we have to fill in the existing rows with\n> > the default value?\n> \n> I think there's no debate that that's what the spec says to do. There\n> might be some discontent in the ranks about whether to follow SQL92's\n> marching orders, however. Doubling the filesize of the table to apply\n> changes that the user might not even want seems a heavy penalty.\n> \n> > If so, I can document that in the TODO list and\n> > discard this patch.\n> \n> This is certainly a TODO or TOARGUEABOUT item, not something to be\n> patched on short notice.\n\nAgreed. I didn't think it was going into 7.2 anyway. I am just looking\nto see if we will solve this with code or we will solve it with\ndocumentation. If it was documentation, I was going to see if I could\ncome up with some wording.\n\nHowever, because the spec is clear, seems we will have to do some\ncodework on this later. Added to TODO:\n\n o ALTER TABLE ADD COLUMN column SET DEFAULT should fill existing\n rows with DEFAULT value or allow NULLs in existing rows\n\nMy guess is that we will have to do spec behavior by default, and add a\nflag to allow NULLs in existing rows.\n\nYuck, but at least we have a direction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 28 Nov 2001 00:44:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected results"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> o ALTER TABLE ADD COLUMN column SET DEFAULT should fill existing\n> rows with DEFAULT value or allow NULLs in existing rows\n>\n> My guess is that we will have to do spec behavior by default, and add a\n> flag to allow NULLs in existing rows.\n\nMay I point you to the already existing set of commands that do exactly\nwhat you want:\n\nALTER TABLE t1 ADD COLUMN name type;\nALTER TABLE t1 ALTER COLUMN name SET DEFAULT foo;\n\nThere's no reason to muck around with the spec here.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 28 Nov 2001 21:47:21 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected"
},
{
"msg_contents": "> Bruce Momjian writes:\n> \n> > o ALTER TABLE ADD COLUMN column SET DEFAULT should fill existing\n> > rows with DEFAULT value or allow NULLs in existing rows\n> >\n> > My guess is that we will have to do spec behavior by default, and add a\n> > flag to allow NULLs in existing rows.\n> \n> May I point you to the already existing set of commands that do exactly\n> what you want:\n> \n> ALTER TABLE t1 ADD COLUMN name type;\n> ALTER TABLE t1 ALTER COLUMN name SET DEFAULT foo;\n> \n> There's no reason to muck around with the spec here.\n\nOK, so we will implement the spec, and allow people to avoid the\ndouble-disk space by doing the steps manually and skipping the UPDATE. \nSounds good to me. TODO updated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 28 Nov 2001 15:56:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ADD COLUMN column SERIAL -- unexpected"
}
] |
[
{
"msg_contents": "This might not be the correct list to send this to, but none of the other\nlists seemed appropriate. A friend of mine who uses postgres extensively at\nhis job suggested I might send y'all a note outlining what we do with it\nhere. \n\nIn general, I am discouraged from providing specific data to non-employees\nabout what we do. But Dan (the aforementioned friend) said that you guys\nwould be interested in knowing what I am currently doing with postgres, so\nthat you know that its up to the challenges we don't often get to put\nhardware and software to.\n\nI am working in the publications division of the American Chemical Society.\nWe are in the process of taking all of our 30+ journals from the last 150 or\nso years and digitizing them. This process entails scanning over 2.5 million\npages (though this is really only a rough estimate. It could be much higher)\nand digitizing them. Our output is in several formats. First, we have the\ninput TIFF (from the scans), we have PDF's which we render using Adobe\nCapture, XML (which we pay a vendor for), and a proprietary format called\nDjVu which is kind of.... Well, its like metadata. Initially, we were using\nperl scripts and shell scripts to traverse the entire filesystem looking for\nfiles.\n\nThis got rather difficult and was time consuming. My suggestion was to just\nuse a database for keeping track of stuff. We have something like 27\ndifferent instances of oracle running here on 4 or 5 different machines. I\ndon't know much about our oracle stuff. My solution was to just go download\nand install postgres.\n\nOur hardware is a cluster of 3 ultra 10's, a pair of 700-dvd jukeboxes (with\nburners), a 2.5tb SAN, 10 DAT tape readers, a pair of dvd-roms, and 2 200gb\ndisk packs (one for each of our tape-reading suns -- the other one manages\nthe DVD jukes). We also run capture on four dell poweredge servers running\nNT. We run the DjVu software on an additional 3 poweredge servers. That\nstuff is NT. 
The SAN is run on a cluster of 4 sun e 3500's.\n\nI am pumping about 200gb a week through the pg database, and our estimated\ndatabase size is something like 4tb by the end of the year.\n\nWe populate the database with perl scripts. The sun that runs the dvd jukes\nis also our database server. We have shell scripts that look over our data\non the disk, and we use sun's NFS to keep disks between the suns and some\nfunky Sun smb-esque software to keep disks mounted on the nt boxes.\n\nAnd that's just the \"large\" database. I have an additional database that I\nam using to store the textual data we receive in the form of\n\"crystallography information files\" (http://www.iucr.org/) which are roughly\n6,000 lines long. I have 10,000 of them stored at the moment in the\ndatabase, going back to about 1996. As you can tell, this database is going\nto get much bigger. At the moment it's living on an Ultra 2 in a 2gb\npartition.\n\nIn some ways, I am amazed that postgres has stood up to the challenge. In\nothers, however, I am not in the least surprised. Its a fantastic piece of\nsoftware that requires almost no intervention on my part. I talked to one of\nour oracle dba's about it. He actually (im not kidding here) did not believe\nit could be a database if it did not require maintenance.\n\nI am very happy with postgres and I am glad to provide information about our\nsetup if you'd like to know anything else.\n\nIf you'd like to quote me on the environment if youre interested in putting\nsomething in a FAQ (i.e., \"can postgres scale up to > tb scale?\"), that's\nfine as well, but I would like to make sure that it doesn't point to ACS and\nis not too specific.\n\nAnyhow, thanks for your hard work guys/gals.\n\nalex\n\n",
"msg_date": "Mon, 16 Jul 2001 14:48:56 -0400",
"msg_from": "alex avriette <a_avriette@acs.org>",
"msg_from_op": true,
"msg_subject": "What I do with PostgreSQL"
},
{
"msg_contents": "On Monday 16 July 2001 14:48, alex avriette wrote:\n> Our hardware is a cluster of 3 ultra 10's, a pair of 700-dvd jukeboxes\n> (with burners), a 2.5tb SAN, 10 DAT tape readers, a pair of dvd-roms, and 2\n> 200gb disk packs (one for each of our tape-reading suns -- the other one\n> manages the DVD jukes). We also run capture on four dell poweredge servers\n> running NT. We run the DjVu software on an additional 3 poweredge servers.\n> That stuff is NT. The SAN is run on a cluster of 4 sun e 3500's.\n\n> I am pumping about 200gb a week through the pg database, and our estimated\n> database size is something like 4tb by the end of the year.\n\n> In some ways, I am amazed that postgres has stood up to the challenge. In\n> others, however, I am not in the least surprised. Its a fantastic piece of\n> software that requires almost no intervention on my part. I talked to one\n> of our oracle dba's about it. He actually (im not kidding here) did not\n> believe it could be a database if it did not require maintenance.\n\nCan anyone say 'Woof!'?\n\nThis is awesome. Thank you, Alex, for sharing this testimonial -- your \ndatabase sounds like a serious test of 'scalability' no matter which way you \nslice it.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Tue, 17 Jul 2001 11:46:10 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: What I do with PostgreSQL"
},
{
"msg_contents": ">> I am pumping about 200gb a week through the pg database,\n>> and our estimated database size is something like 4tb by\n>> the end of the year.\n>\n> Can anyone say 'Woof!'?\n\nAmen, Lamar. I was trying to think of something myself besides\n'Wow!'...\n\nAs a side note, there's a blurb in the July 16, 2001 Interactive Week\nabout the MySQL AB vs NuSphere spat and the last paragraph of the\narticle casts a very favorable nod towards PostgreSQL.\n\nI quote (any typos are mine) ...\n\n\"Analysts said MySQL must find a way to generate a development community\nand support if it wants to compete with another open source database,\nPostgreSQL, distributed by Red Hat and Great Bridge.\"\n\nArticle doesn't say who the \"analysts\" are, but the implication that\nMySQL isn't up to competing with PostgreSQL was interesting to my eyes!\n:)\n\nDarren\n\n",
"msg_date": "Tue, 17 Jul 2001 14:38:47 -0400",
"msg_from": "\"Darren King\" <darrenk@insightdist.com>",
"msg_from_op": false,
"msg_subject": "RE: What I do with PostgreSQL"
}
] |
[
{
"msg_contents": "I've done some searching through the mailing lists and haven't found \nany info regarding what I need.\n\nI have an array of values of type int8.\nI want to be able to rollup the data and have postgres do all the \nsumming for me. I've looked at the commands and haven't found what I \nneed.\n\nHere is what I get:\ndetailed=# select sum(valuearray) from data where objid=34;\nERROR: Unable to select an aggregate function sum(_int8)\n\nSo I decided I would write my own function that I would load into \npostgres.\nThe problem is, how do I access each element in the array?\nI can get the array and return it, but in the function I would like to \nget each separate value and do the summing, then return the summed \narray.\n\nCan anyone help?\n\nThanks,\nChris\n\n",
"msg_date": "Mon, 16 Jul 2001 17:03:55 -0500",
"msg_from": "Christopher Yap <cyap@linmor.com>",
"msg_from_op": true,
"msg_subject": "deferencing array of int8"
}
] |
[
{
"msg_contents": "Let me clearify. I am suggesting system table relid for each entry:\n\n> \tobject sysrelid\n> \tobject oid\n> \treference sysrelid\n> \treferences oid\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 16 Jul 2001 19:23:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_depend"
}
] |
[
{
"msg_contents": "\nmorannon:~>pg_dump -t bboard openacs | less\ngetTables(): SELECT (for VIEW ec_subsubcategories_augmented) returned NULL oid\nSELECT was: SELECT definition as viewdef, (select oid from pg_rewrite\nwhere rulename='_RET' || viewname) as view_oid from pg_views where\nviewname = 'ec_subsubcategories_augmented';\n\nAny ideas what would cause this?\n\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Mon, 16 Jul 2001 22:12:26 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": true,
"msg_subject": "Odd error..."
}
] |
[
{
"msg_contents": "At 22:12 16/07/01 -0500, Dominic J. Eidson wrote:\n>morannon:~>pg_dump -t bboard openacs | less\n>getTables(): SELECT (for VIEW ec_subsubcategories_augmented) returned NULL\noid\n>SELECT was: SELECT definition as viewdef, (select oid from pg_rewrite\n>where rulename='_RET' || viewname) as view_oid from pg_views where\n>viewname = 'ec_subsubcategories_augmented';\n>\n>Any ideas what would cause this?\n\nProbably the length of the view name; which version are you running? I\nhaven't look at PG for a while, but I thought this was fixed in 7.1.2\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 17 Jul 2001 16:23:51 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Odd error..."
},
{
"msg_contents": "On Tue, 17 Jul 2001, Philip Warner wrote:\n\n> At 22:12 16/07/01 -0500, Dominic J. Eidson wrote:\n> >morannon:~>pg_dump -t bboard openacs | less\n> >getTables(): SELECT (for VIEW ec_subsubcategories_augmented) returned NULL\n> oid\n> >SELECT was: SELECT definition as viewdef, (select oid from pg_rewrite\n> >where rulename='_RET' || viewname) as view_oid from pg_views where\n> >viewname = 'ec_subsubcategories_augmented';\n> >\n> >Any ideas what would cause this?\n> \n> Probably the length of the view name; which version are you running? I\n> haven't look at PG for a while, but I thought this was fixed in 7.1.2\n\nopenacs=# select version();\n version \n-------------------------------------------------------------\n PostgreSQL 7.1 on i686-pc-linux-gnu, compiled by GCC 2.95.2\n(1 row)\n\n(pretty sure that's 7.1.0, btw)\n\nopenacs=# SELECT definition as viewdef, (select oid from pg_rewrite where rulename='_RET' || viewname) as view_oid from pg_views where viewname = 'ec_subsubcategories_augmented'; \n viewdef | view_oid\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------\n SELECT subsubs.subsubcategory_id, subsubs.subcategory_id, subsubs.subsubcategory_name, subsubs.sort_key, subsubs.last_modified, subsubs.last_modifying_user, subsubs.modified_ip_address, subs.subcategory_name, cats.category_id, cats.category_name FROM ec_subsubcategories subsubs, ec_subcategories subs, ec_categories cats WHERE ((subsubs.subcategory_id = subs.subcategory_id) AND (subs.category_id = cats.category_id)); |\n\nAs you can see, it gets the \"viewdef\" part fine, but not the \"select oid\nfrom 
pg_rewrite where ... \" part.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n",
"msg_date": "Tue, 17 Jul 2001 08:15:57 -0500 (CDT)",
"msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>",
"msg_from_op": false,
"msg_subject": "Re: Odd error..."
},
{
"msg_contents": "\"Dominic J. Eidson\" <sauron@the-infinite.org> writes:\n> On Tue, 17 Jul 2001, Philip Warner wrote:\n> Any ideas what would cause this?\n>> \n>> Probably the length of the view name; which version are you running? I\n>> haven't look at PG for a while, but I thought this was fixed in 7.1.2\n\n> PostgreSQL 7.1 on i686-pc-linux-gnu, compiled by GCC 2.95.2\n\nIIRC, that was a post-7.1 bug fix. Update to 7.1.2, or shorten your\nview name by a few characters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 11:42:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Odd error... "
}
] |
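The NULL `view_oid` Tom traces to the view-name length comes from identifier truncation: the backend silently truncates identifiers to NAMEDATALEN - 1 characters when they are stored, so the rule name saved in pg_rewrite at view-creation time no longer equals the full `'_RET' || viewname` string that pg_dump's subselect computes. A minimal sketch of the mismatch (NAMEDATALEN = 32 is assumed here as the 7.1-era default; `stored_name` is a hypothetical stand-in for the backend's truncation):

```python
# Sketch of the identifier-truncation mismatch behind the NULL view oid.
# NAMEDATALEN = 32 is an assumption (the 7.1-era default); the backend
# silently truncates identifiers to NAMEDATALEN - 1 characters on creation.
NAMEDATALEN = 32

def stored_name(name: str) -> str:
    """Mimic backend truncation of an over-long identifier (hypothetical helper)."""
    return name[:NAMEDATALEN - 1]

viewname = "ec_subsubcategories_augmented"        # 29 characters: fits by itself
rulename_on_disk = stored_name("_RET" + viewname)  # 33 chars -> truncated to 31
rulename_in_query = "_RET" + viewname              # pg_dump computes all 33 chars

# The stored rule name and the computed one no longer match, so the
# subselect finds no row and pg_dump gets a NULL view_oid.
print(rulename_on_disk == rulename_in_query)  # False
```

Shortening the view name by a couple of characters (or upgrading to a release that accounts for the truncation, as Tom suggests) makes the two strings equal again.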
[
{
"msg_contents": "Hello\n\nI know that there is a script file for PostgreSQL that has the operator *=\nand others.\n\nWhere can I find that script?\n\nThanks\n\nLuis Sousa\n\n",
"msg_date": "Tue, 17 Jul 2001 14:15:03 +0100",
"msg_from": "Luis Sousa <llsousa@ualg.pt>",
"msg_from_op": true,
"msg_subject": "Operator *="
}
] |
[
{
"msg_contents": "I have noticed that a large fraction of the I/O done by 7.1 is\nassociated with initializing new segments of the WAL log for use.\n(We have to physically fill each segment with zeroes to ensure that\nthe system has actually allocated a whole 16MB to it; otherwise we\nfall victim to the \"hole-saving\" allocation technique of most Unix\nfilesystems.) I just had an idea about how to avoid this cost:\nwhy not recycle old log segments? At the point where the code\ncurrently deletes a no-longer-needed segment, just rename it to\nbecome the next created-in-advance segment.\n\nWith this approach, shortly after installation the system would converge\nto a steady state with a constant number of WAL segments (basically\nCHECKPOINT_SEGMENTS + WAL_FILES + 1, maybe one or two more if load is\nreally high). So, in addition to eliminating initialization writes,\nwe would also reduce the metadata traffic (inode and indirect blocks)\nto a very low level. That has to be good both for performance and for\nimproving the odds that the WAL files will survive a system crash.\n\nThe sole disadvantage I can see to this approach is that a recycled\nsegment would not contain zeroes, but valid WAL records. We'd need\nto take care that in a recovery situation, we not mistake old records\nbeyond the last one we actually wrote for new records we should redo.\nWhile checking the xl_prev back-pointers in each record should be\nsufficient to detect this, I'd feel more comfortable if we extended\nthe XLogPageHeader record to contain the file/segment number that it\nbelongs to. This'd cost an extra 8 bytes per 8K XLOG page, which seems\nworth it to me.\n\nAnother issue is whether the recycling logic should be \"always recycle\"\n(hence number of extant WAL segments will never decrease), or should\nit be more like \"recycle if there are fewer than WAL_FILES advance\nsegments, else delete\". If we were supporting WAL-based UNDO then I\nthink it'd have to be the latter, so that we could reduce the WAL usage\nfrom a peak created by a long-running transaction. But with the present\nlogic that the WAL log is truncated after each checkpoint, I think it'd\nbe better just to never delete. Otherwise, the behavior is likely to\nbe that the system varies between N and N+1 extant segments due to\nroundoff effects (ie, depending on just where you are in the current\nsegment when a checkpoint happens). That's exactly what we do not want.\n\nA possible answer is \"recycle if there are fewer than WAL_FILES + SLOP\nadvance files, else delete\", where SLOP is (say) about three or four\nsegments. That would avoid unwanted oscillations in the number of\nextant files, while still allowing decrease from a peak for UNDO.\n\nComments, better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 10:56:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Idea: recycle WAL segments, don't delete/recreate 'em"
},
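Tom's retire-a-segment policy ("recycle if there are fewer than WAL_FILES + SLOP advance files, else delete") can be sketched in a few lines. This is an illustrative model only, not the eventual xlog.c implementation; the `WAL_FILES`/`SLOP` values and the segment-naming callback are hypothetical:

```python
import os

# Illustrative model of the proposed retire-a-segment policy. The constants
# and the next_seg_name callback are hypothetical, not actual backend code.
WAL_FILES = 8   # assumed setting: advance segments we want on hand
SLOP = 3        # the "three or four segments" of slack from the proposal

def retire_segment(old_seg, advance_segments, next_seg_name):
    """Recycle a no-longer-needed segment by renaming it into place as a
    future segment (no 16MB zero-fill needed, blocks stay allocated), or
    delete it if we already hold WAL_FILES + SLOP files in advance."""
    if len(advance_segments) < WAL_FILES + SLOP:
        new_seg = next_seg_name()
        os.rename(old_seg, new_seg)      # reuse instead of delete + recreate
        advance_segments.append(new_seg)
        return "recycled"
    os.remove(old_seg)
    return "deleted"
```

The rename replaces both the unlink of the old segment and the creation plus zero-fill of a new one, which is where the I/O and metadata savings come from.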
{
"msg_contents": "> I have noticed that a large fraction of the I/O done by 7.1 is\n> associated with initializing new segments of the WAL log for use.\n> (We have to physically fill each segment with zeroes to ensure that\n> the system has actually allocated a whole 16MB to it; otherwise we\n> fall victim to the \"hole-saving\" allocation technique of most Unix\n> filesystems.) I just had an idea about how to avoid this cost:\n> why not recycle old log segments? At the point where the code\n> currently deletes a no-longer-needed segment, just rename it to\n> become the next created-in-advance segment.\n\nThis sounds good and with UNDO far off, would be a big win. The\nsegment number seems like a good idea. I can't see any disadvantages.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 13:31:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "\ntgl wrote:\n\n: [...] (We have to physically fill each segment with zeroes to\n: ensure that the system has actually allocated a whole 16MB to it;\n: otherwise we fall victim to the \"hole-saving\" allocation technique\n: of most Unix filesystems.) [...]\n\nCould you explain how postgresql can \"fall victim\" to the filesystem hole\nmechanism? Just hoping to force actual storage allocation, or hoping\nto discourage fragmentation?\n\n- FChE\n",
"msg_date": "17 Jul 2001 13:53:40 -0400",
"msg_from": "fche@redhat.com (Frank Ch. Eigler)",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "Tom,\n\nWhat you are describing is a pseudo circular log. Other database\nsystems (such as DB2) support the concept of both circular and\nrecoverable logs. Recoverable is named this way because \nrecoverable logs can be used in point-in-time recovery. Both \nmethods support crash recovery.\n\nIn general, a user defines the number of log extents to be used in\nthe log cycle. He/she also defines the number of secondary logs to\nuse if by chance the circular log becomes full. If a secondary log\nextent is created, it is added to the cycle list. At a consistent\nshutdown, the secondary log extents are deleted. Since logs\nare deleted, any hope of point-in-time recovery is deleted with them.\n\nI understand your solution is for the existing architecture which does\nnot support point-in-time recovery. If this item is picked up, your\nsolution will become a stumbling block due the above mentioned log\nextent deletions. The other issues you list are of concern but are\nmanageable with some coding. \n\nSo, my question is, should PostgreSQL support both types of logging?\nThere will be databases where you require the ability to perform \npoint-in-time recovery. Conversely, there will be databases where\nan overwritten log extent (as you describe) is acceptable. I think\nit would be useful to be able to define which logging method you\nrequire for a database. This way, you incur the I/O hit only when\nforward recovery is a requirement.\n\nThoughts/comments?\n\nCheer,\nPatrick \n\n \n\nTom Lane wrote:\n> \n> I have noticed that a large fraction of the I/O done by 7.1 is\n> associated with initializing new segments of the WAL log for use.\n> (We have to physically fill each segment with zeroes to ensure that\n> the system has actually allocated a whole 16MB to it; otherwise we\n> fall victim to the \"hole-saving\" allocation technique of most Unix\n> filesystems.) I just had an idea about how to avoid this cost:\n> why not recycle old log segments? At the point where the code\n> currently deletes a no-longer-needed segment, just rename it to\n> become the next created-in-advance segment.\n> \n> With this approach, shortly after installation the system would converge\n> to a steady state with a constant number of WAL segments (basically\n> CHECKPOINT_SEGMENTS + WAL_FILES + 1, maybe one or two more if load is\n> really high). So, in addition to eliminating initialization writes,\n> we would also reduce the metadata traffic (inode and indirect blocks)\n> to a very low level. That has to be good both for performance and for\n> improving the odds that the WAL files will survive a system crash.\n> \n> The sole disadvantage I can see to this approach is that a recycled\n> segment would not contain zeroes, but valid WAL records. We'd need\n> to take care that in a recovery situation, we not mistake old records\n> beyond the last one we actually wrote for new records we should redo.\n> While checking the xl_prev back-pointers in each record should be\n> sufficient to detect this, I'd feel more comfortable if we extended\n> the XLogPageHeader record to contain the file/segment number that it\n> belongs to. This'd cost an extra 8 bytes per 8K XLOG page, which seems\n> worth it to me.\n> \n> Another issue is whether the recycling logic should be \"always recycle\"\n> (hence number of extant WAL segments will never decrease), or should\n> it be more like \"recycle if there are fewer than WAL_FILES advance\n> segments, else delete\". If we were supporting WAL-based UNDO then I\n> think it'd have to be the latter, so that we could reduce the WAL usage\n> from a peak created by a long-running transaction. But with the present\n> logic that the WAL log is truncated after each checkpoint, I think it'd\n> be better just to never delete. Otherwise, the behavior is likely to\n> be that the system varies between N and N+1 extant segments due to\n> roundoff effects (ie, depending on just where you are in the current\n> segment when a checkpoint happens). That's exactly what we do not want.\n> \n> A possible answer is \"recycle if there are fewer than WAL_FILES + SLOP\n> advance files, else delete\", where SLOP is (say) about three or four\n> segments. That would avoid unwanted oscillations in the number of\n> extant files, while still allowing decrease from a peak for UNDO.\n> \n> Comments, better ideas?\n> \n> regards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 14:21:19 -0400",
"msg_from": "Patrick Macdonald <patrickm@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "Patrick Macdonald <patrickm@redhat.com> writes:\n> I understand your solution is for the existing architecture which does\n> not support point-in-time recovery. If this item is picked up, your\n> solution will become a stumbling block due the above mentioned log\n> extent deletions.\n\nHmm, I don't see why it's a stumbling block. There is a notion in the\npresent code that log segments might be moved someplace else for\narchiving (rather than just be deleted), and I wasn't planning on\neliminating that option. I think however that a realistic archival\nmechanism would not simply keep the log segments verbatim. It could\ndrop the page images, for a huge space savings, and perhaps also\neliminate records from aborted transactions. So in reality one could\nstill expect to recycle the log segments, just with a somewhat longer\ncycle time --- ie, after the archiver is done copying a segment, then\nyou rename it into place as a forward file.\n\nIn any case, a two-or-three-line change is hardly likely to create much\nof an obstacle to PIT recovery, compared to some of the more fundamental\naspects of the existing WAL design (like its need to start from a\ncomplete physical copy of the database files). So I'm not sure why\nyou're objecting on these grounds.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 14:52:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em "
},
{
"msg_contents": "> Could you explain how postgresql can \"fall victim\" the filesystem hole\n> mechanism? Just hoping to force actual storage allocation, or hoping\n> to discourage fragmentation?\n\nMost Unix filesystems will not allocate disk blocks until you write in\nthem. If you just seek out past end-of-file, the file pointer is moved\nbut the blocks are unallocated. This is how 'ls' can show a 1gb file\nthat only uses 4k of disk space.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 15:14:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Idea: recycle WAL segments, don't delete/recreate\n 'em"
},
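Bruce's point about "hole-saving" filesystems is easy to demonstrate: seeking past end-of-file moves the file pointer without allocating blocks, which is exactly why the WAL code must physically write zeroes to guarantee the full 16MB exists on disk. A small sketch (the `st_blocks` accounting is filesystem-dependent):

```python
import os
import tempfile

# Demonstrate "hole-saving" allocation: seek far past end-of-file and write a
# single byte, producing a nominally 1MB file with almost nothing allocated.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 1024 * 1024 - 1, os.SEEK_SET)  # moves the pointer, allocates nothing
    os.write(fd, b"\0")                          # forces only the final block
    st = os.fstat(fd)
    print(st.st_size)           # 1048576 -- the apparent size 'ls -l' reports
    print(st.st_blocks * 512)   # actual allocation; far smaller on hole-saving filesystems
finally:
    os.close(fd)
    os.remove(path)
```

A later write into the hole can then fail with "disk full" even though the file looked fully sized, which is the failure mode the zero-fill prewrite guards against.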
{
"msg_contents": "Tom Lane wrote:\n> \n> Patrick Macdonald <patrickm@redhat.com> writes:\n> > I understand your solution is for the existing architecture which does\n> > not support point-in-time recovery. If this item is picked up, your\n> > solution will become a stumbling block due the above mentioned log\n> > extent deletions.\n> \n> Hmm, I don't see why it's a stumbling block. There is a notion in the\n> present code that log segments might be moved someplace else for\n> archiving (rather than just be deleted), and I wasn't planning on\n> eliminating that option. I think however that a realistic archival\n> mechanism would not simply keep the log segments verbatim. It could\n> drop the page images, for a huge space savings, and perhaps also\n> eliminate records from aborted transactions. So in reality one could\n> still expect to recycle the log segments, just with a somewhat longer\n> cycle time --- ie, after the archiver is done copying a segment, then\n> you rename it into place as a forward file.\n\nWell, notion and actual practice can be mutually exclusive. Your\ninitial message stated that you would like to rename the log segment.\nThis insinuated that the log segment was not moved. Therefore, a\nstraight rename would cause problems with the future point-in-time\nrecovery item (ie. the only existing version of log segment N has\nbeen renamed to N+5). A backup of the database could not roll forward\nthrough this name change as stated. That was my objection. \n\n> In any case, a two-or-three-line change is hardly likely to create much\n> of an obstacle to PIT recovery, compared to some of the more fundamental\n> aspects of the existing WAL design (like its need to start from a\n> complete physical copy of the database files). So I'm not sure why\n> you're objecting on these grounds.\n\nHmmm, stating that it is less of a problem than others doesn't make\nit the right thing to do. If the two or three lines you mention renames\na segment I want to roll forward through, that's a problem. Yeah, I\nknow it's not a problem now but it'll have to be changed when PIT comes\ninto play. \n\nYou didn't comment on the idea of two logging methods... circular and\nrecoverable. Any thoughts?\n\nCheers,\nPatrick\n",
"msg_date": "Tue, 17 Jul 2001 15:39:16 -0400",
"msg_from": "Patrick Macdonald <patrickm@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "Patrick Macdonald <patrickm@redhat.com> writes:\n> Well, notion and actual practice can be mutually exclusive. Your\n> initial message stated that you would like to rename the log segment.\n> This insinuated that the log segment was not moved. Therefore, a\n> straight rename would cause problems with the future point-in-time\n> recovery item (ie. the only existing version of log segment N has\n> been renamed to N+5). A backup of the database could not roll forward\n> through this name change as stated. That was my objection. \n\nI think you are missing the point completely. The rename will occur\nonly at the time when we would otherwise DELETE the old log segment.\nIf, for PIT or any other purpose, we do not wish to delete a log\nsegment, then it's not going to get recycled either. My proposal is\nthen when, and only when, we are prepared to discard an old log segment\nforever, we instead rename it to be a created-in-advance future log\nsegment.\n\nWhat you may really be saying is that the existing scheme for management\nof log segments is inappropriate for PIT usage; if so feel free to\npropose a better one. But I don't see how recycling of no-longer-wanted\nsegments can break anything.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 16:20:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em "
},
{
"msg_contents": "fche@redhat.com (Frank Ch. Eigler) writes:\n> Could you explain how postgresql can \"fall victim\" to the filesystem hole\n> mechanism? Just hoping to force actual storage allocation, or hoping\n> to discourage fragmentation?\n\nThe former. We'd prefer not to get an unexpected \"disk full\" failure\nwhile writing to a log file we thought was good.\n\nTo the extent that prewriting the WAL segment discourages fragmentation,\nthat's good too, but it's just a side benefit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 16:36:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: Idea: recycle WAL segments, don't delete/recreate 'em "
},
{
"msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n\n| Most Unix filesystems will not allocate disk blocks until you write in\n| them. If you just seek out past end-of-file, the file pointer is moved\n| but the blocks are unallocated. This is how 'ls' can show a 1gb file\n| that only uses 4k of disk space.\n\nDoes this imply that we could get a performance gain by preallocating space\nfor indexes and data itself as well ? I've seen that other database products\nhave a setup step where you have to specify the size of the database. \n\nOr does PostgreSQL do any other tricks to prevent fragmentation of data ?\n\n\n-- \nGunnar Rønning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n",
"msg_date": "18 Jul 2001 12:46:54 +0200",
"msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "Hmmm... my prior appends to this newsgroup are stalled. Hopefully,\nthey'll be available soon.\n\nTom Lane wrote:\n> \n> What you may really be saying is that the existing scheme for management\n> of log segments is inappropriate for PIT usage; if so feel free to\n> propose a better one. But I don't see how recycling of no-longer-wanted\n> segments can break anything.\n\nYes, but in a very roundabout way (or so it seems). The main point\nthat I was trying to illustrate was that if a database supports \npoint-in-time recovery, recycling of the only available log segments \nis a bad thing. And, yes, in practice if you have point-in-time\nrecovery enabled you better archive your logs with your backup to\nensure that you can roll forward as expected.\n\nA possible solution (as I mentioned before)) is to have 2 methods\nof logging available: circular and forward-recoverable. When a\ndatabase is created, the creator selects which type of logging to\nperform. The log segments are exactly the same, only the recycling\nmethod is different.\n\nHmmm... the more I look at this, the more interested I become.\n\nCheers,\nPatrick\n",
"msg_date": "Wed, 18 Jul 2001 10:26:51 -0400",
"msg_from": "Patrick Macdonald <patrickm@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "Patrick Macdonald <patrickm@redhat.com> writes:\n> Yes, but in a very roundabout way (or so it seems). The main point\n> that I was trying to illustrate was that if a database supports \n> point-in-time recovery, recycling of the only available log segments \n> is a bad thing.\n\nCertainly, but deleting them is just as bad ;-).\n\nWhat would need to be changed to use the WAL log for archival purposes\nis the control logic that decides when an old log segment is no longer\nneeded. Rather than zapping them as soon as they're not needed for\ncrash recovery (our current approach), they'd have to stick around until\narchived offline, or perhaps for some DBA-specified length of time\nrepresenting how far back you want to allow for PIT recovery.\n\nNonetheless, at some point an old WAL segment will become deletable\n(unless you have infinite space on your WAL disk). ISTM that at that\npoint, it makes sense to consider recycling the file rather than\ndeleting it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 11:19:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em "
},
{
"msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> \n> | Most Unix filesystems will not allocate disk blocks until you write in\n> | them. If you just seek out past end-of-file, the file pointer is moved\n> | but the blocks are unallocated. This is how 'ls' can show a 1gb file\n> | that only uses 4k of disk space.\n> \n> Does this imply that we could get a performance gain by preallocating space\n> for indexes and data itself as well ? I've seen that other database products\n> have a setup step where you have to specify the size of the database. \n> \n> Or does PostgreSQL do any other tricks to prevent fragmentation of data ?\n\nIf we stored all our tables in one file that would be needed. Since we\nuse the OS to do the defragmenting, I don't think it is an issue. We do\nallocate in 8k chunks to allow the OS to allocate full filesystem blocks\nalready. Not sure if preallocating even more would help.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 11:35:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Idea: recycle WAL segments, don't delete/recreate\n 'em"
},
{
"msg_contents": "> Hmmm... my prior appends to this newsgroup are stalled. Hopefully,\n> they'll be available soon.\n> \n> Tom Lane wrote:\n> > \n> > What you may really be saying is that the existing scheme for management\n> > of log segments is inappropriate for PIT usage; if so feel free to\n> > propose a better one. But I don't see how recycling of no-longer-wanted\n> > segments can break anything.\n> \n> Yes, but in a very roundabout way (or so it seems). The main point\n> that I was trying to illustrate was that if a database supports \n> point-in-time recovery, recycling of the only available log segments \n> is a bad thing. And, yes, in practice if you have point-in-time\n> recovery enabled you better archive your logs with your backup to\n> ensure that you can roll forward as expected.\n\nI assume you are not going to do point-in-time recovery by keeping all\nthe WAL segments around on the same disk. You have to copy them off\nsomewhere, right, and once you have copied them, why not reuse them?\n\n> A possible solution (as I mentioned before)) is to have 2 methods\n> of logging available: circular and forward-recoverable. When a\n> database is created, the creator selects which type of logging to\n> perform. The log segments are exactly the same, only the recycling\n> method is different.\n\nWill not fly. We need a solution that is flexible.\n\n> Hmmm... the more I look at this, the more interested I become.\n\nMy assumption is that once a log is full the point-in-time recovery\ndaemon will copy that off somewhere, either to a different disk, tape,\nor over the network to another machine. Once it is done making a copy,\nthe WAL log can be recycled, right? Am I missing something here?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 11:54:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "> Nonetheless, at some point an old WAL segment will become deletable\n> (unless you have infinite space on your WAL disk). ISTM that at that\n> point, it makes sense to consider recycling the file rather than\n> deleting it.\n\nOf course, if you plan to keep your WAL files on the same drive, you\ndon't really need point-in-time recovery anyway because you have the\nphysical data files. The only case I can see for keeping WAL files around\nfor point-in-time is if your WAL files are on a separate drive from the data\nfiles, but even then, the page images should be stripped out and the WAL\narchived somewhere else, hopefully in a configurable way to another\ndisk, tape, or networked computer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 12:11:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Hmmm... my prior appends to this newsgroup are stalled. Hopefully,\n> > they'll be available soon.\n> >\n> > Tom Lane wrote:\n> > >\n> > > What you may really be saying is that the existing scheme for management\n> > > of log segments is inappropriate for PIT usage; if so feel free to\n> > > propose a better one. But I don't see how recycling of no-longer-wanted\n> > > segments can break anything.\n> >\n> > Yes, but in a very roundabout way (or so it seems). The main point\n> > that I was trying to illustrate was that if a database supports\n> > point-in-time recovery, recycling of the only available log segments\n> > is a bad thing. And, yes, in practice if you have point-in-time\n> > recovery enabled you better archive your logs with your backup to\n> > ensure that you can roll forward as expected.\n> \n> I assume you are not going to do point-in-time recovery by keeping all\n> the WAL segments around on the same disk.\n\nOf course not. As mentioned, you'd probably archive them with your\nbackup(s).\n\n> You have to copy them off\n> somewhere, right, and once you have copied them, why not reuse them?\n\nI'm not arguing that point. I stated \"recycling of the only available\nlog segments\". Once the log segment is archived (copied) elsewhere\nyou have two available images of the same segment. You can rename\nthe local copy. \n \n> > A possible solution (as I mentioned before)) is to have 2 methods\n> > of logging available: circular and forward-recoverable. When a\n> > database is created, the creator selects which type of logging to\n> > perform. The log segments are exactly the same, only the recycling\n> > method is different.\n> \n> Will not fly. We need a solution that is flexible.\n\nCould you expand on that a little (ie. flexible in which way).\nOffering the user a choice of two is more flexible than offering no \nchoice.\n \n> > Hmmm... the more I look at this, the more interested I become.\n> \n> My assumption is that once a log is full the point-in-time recovery\n> daemon will copy that off somewhere, either to a different disk, tape,\n> or over the network to another machine. Once it is done making a copy,\n> the WAL log can be recycled, right? Am I missing something here?\n\nOk... I wasn't thinking of having a point-in-time daemon. Some other\ndatabases provide, for lack of a better term, user exits to allow\nuser defined scripts or programs to be called to perform log segment\narchiving. This archiving is somewhat orthogonal to point-in-time\nrecovery proper.\n\nYep, once the archiving is complete, you can do whatever you want\nwith the local log segment.\n\nCheers,\nPatrick\n",
"msg_date": "Wed, 18 Jul 2001 12:27:41 -0400",
"msg_from": "Patrick Macdonald <patrickm@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "> > > Yes, but in a very roundabout way (or so it seems). The main point\n> > > that I was trying to illustrate was that if a database supports\n> > > point-in-time recovery, recycling of the only available log segments\n> > > is a bad thing. And, yes, in practice if you have point-in-time\n> > > recovery enabled you better archive your logs with your backup to\n> > > ensure that you can roll forward as expected.\n> > \n> > I assume you are not going to do point-in-time recovery by keeping all\n> > the WAL segments around on the same disk.\n> \n> Of course not. As mentioned, you'd probably archive them with your\n> backup(s).\n\nYou mean the nigthly backup? Why not do a pg_dump and be done with it.\n\n> > You have to copy them off\n> > somewhere, right, and once you have copied them, why not reuse them?\n> \n> I'm not arguing that point. I stated \"recycling of the only available\n> log segments\". Once the log segment is archived (copied) elsewhere\n> you have two available images of the same segment. You can rename\n> the local copy. \n\nYes, OK, I see now. As Tom mentioned, there would have to be some delay\nwhere we allow the WAL log to be archived before reusing it.\n\n> > > A possible solution (as I mentioned before)) is to have 2 methods\n> > > of logging available: circular and forward-recoverable. When a\n> > > database is created, the creator selects which type of logging to\n> > > perform. The log segments are exactly the same, only the recycling\n> > > method is different.\n> > \n> > Will not fly. We need a solution that is flexible.\n> \n> Could you expand on that a little (ie. flexible in which way).\n> Offering the user a choice of two is more flexible than offering no \n> choice.\n\nWe normally don't give users choices unless we can't come up with a\nwin-win solution to the problem. In this case, we could just query to\nsee if the WAL PIT archiver is running and handle tune reuse of log\nsegments on the fly. In fact, my guess is that the PIT archiver will\nhave to tell the system when it is done with WAL logs anyway.\n\n> > > Hmmm... the more I look at this, the more interested I become.\n> > \n> > My assumption is that once a log is full the point-in-time recovery\n> > daemon will copy that off somewhere, either to a different disk, tape,\n> > or over the network to another machine. Once it is done making a copy,\n> > the WAL log can be recycled, right? Am I missing something here?\n> \n> Ok... I wasn't thinking of having a point-in-time daemon. Some other\n> databases provide, for lack of a better term, user exits to allow\n> user defined scripts or programs to be called to perform log segment\n> archiving. This archiving is somewhat orthogonal to point-in-time\n> recovery proper.\n> \n> Yep, once the archiving is complete, you can do whatever you want\n> with the local log segment.\n\nWe will clearly need something to transfer these WAL logs somewhere\nelse, and it would be nice if it could be easily configured. I think a\nPIT logger daemon is the only solution, especially since tape/network\ntransfer could take a long time. It would be forked by the postmaster\nso would cover all users and databases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 12:35:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > Yes, but in a very roundabout way (or so it seems). The main point\n> > > > that I was trying to illustrate was that if a database supports\n> > > > point-in-time recovery, recycling of the only available log segments\n> > > > is a bad thing. And, yes, in practice if you have point-in-time\n> > > > recovery enabled you better archive your logs with your backup to\n> > > > ensure that you can roll forward as expected.\n> > >\n> > > I assume you are not going to do point-in-time recovery by keeping all\n> > > the WAL segments around on the same disk.\n> >\n> > Of course not. As mentioned, you'd probably archive them with your\n> > backup(s).\n> \n> You mean the nigthly backup? Why not do a pg_dump and be done with it.\n\nBut the purpose of point-in-time recovery is to restore your backup \nand then use the WAL to bring the backed up image up to a more current\nversion. \n\n> > > > A possible solution (as I mentioned before)) is to have 2 methods\n> > > > of logging available: circular and forward-recoverable. When a\n> > > > database is created, the creator selects which type of logging to\n> > > > perform. The log segments are exactly the same, only the recycling\n> > > > method is different.\n> > >\n> > > Will not fly. We need a solution that is flexible.\n> >\n> > Could you expand on that a little (ie. flexible in which way).\n> > Offering the user a choice of two is more flexible than offering no\n> > choice.\n> \n> We normally don't give users choices unless we can't come up with a\n> win-win solution to the problem. In this case, we could just query to\n> see if the WAL PIT archiver is running and handle tune reuse of log\n> segments on the fly. In fact, my guess is that the PIT archiver will\n> have to tell the system when it is done with WAL logs anyway.\n\nBut this could be a win-win situation. If a user doesn't care \nabout point-in-time recovery, circular logs can be used. 
When a\ndatabase is created, a configurable number of log segments are\nallocated. The database uses those logs in a cyclic manner. No\nnew log segments need to be created under normal use. Automatic\nreuse.\n\nA database requiring point-in-time functionality will log very\nsimilarly to the method in place today. New log segments will be\ncreated when needed. \n\n> > > > Hmmm... the more I look at this, the more interested I become.\n> > >\n> > > My assumption is that once a log is full the point-in-time recovery\n> > > daemon will copy that off somewhere, either to a different disk, tape,\n> > > or over the network to another machine. Once it is done making a copy,\n> > > the WAL log can be recycled, right? Am I missing something here?\n> >\n> > Ok... I wasn't thinking of having a point-in-time daemon. Some other\n> > databases provide, for lack of a better term, user exits to allow\n> > user defined scripts or programs to be called to perform log segment\n> > archiving. This archiving is somewhat orthogonal to point-in-time\n> > recovery proper.\n> >\n> > Yep, once the archiving is complete, you can do whatever you want\n> > with the local log segment.\n> \n> We will clearly need something to transfer these WAL logs somewhere\n> else, and it would be nice if it could be easily configured. I think a\n> PIT logger daemon is the only solution, especially since tape/network\n> transfer could take a long time. It would be forked by the postmaster\n> so would cover all users and databases.\n\nActually, it would be better if the entire logger was split out into\nits own process like the large commercial databases. Archiving the\nlog segments would just be one of the many functions of the logger\nprocess. Just a thought.\n\nCheers,\nPatrick\n",
"msg_date": "Wed, 18 Jul 2001 12:51:14 -0400",
"msg_from": "Patrick Macdonald <patrickm@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "> > > Of course not. As mentioned, you'd probably archive them with your\n> > > backup(s).\n> > \n> > You mean the nigthly backup? Why not do a pg_dump and be done with it.\n> \n> But the purpose of point-in-time recovery is to restore your backup \n> and then use the WAL to bring the backed up image up to a more current\n> version. \n\nMy point was that the WAL logs are going to be archived after the backup\noccurs, right? From the text below, I see you are addressing that.\n\n> > > > > A possible solution (as I mentioned before)) is to have 2 methods\n> > > > > of logging available: circular and forward-recoverable. When a\n> > > > > database is created, the creator selects which type of logging to\n> > > > > perform. The log segments are exactly the same, only the recycling\n> > > > > method is different.\n> > > >\n> > > > Will not fly. We need a solution that is flexible.\n> > >\n> > > Could you expand on that a little (ie. flexible in which way).\n> > > Offering the user a choice of two is more flexible than offering no\n> > > choice.\n> > \n> > We normally don't give users choices unless we can't come up with a\n> > win-win solution to the problem. In this case, we could just query to\n> > see if the WAL PIT archiver is running and handle tune reuse of log\n> > segments on the fly. In fact, my guess is that the PIT archiver will\n> > have to tell the system when it is done with WAL logs anyway.\n> \n> But this could be a win-win situation. If a user doesn't not care \n> about point-in-time recovery, circular logs can be used. When a\n> database is created, a configurable number of log segments are\n> allocated. The database uses those logs in a cyclic manner. No\n> new log segments need to be created under normal use. Automatic\n> reuse.\n> \n> A database requiring point-in-time functionality will log very\n> similar to the method in place today. New log segments will be\n> created when needed. 
\n\nBasically, when the user asks for point-in-time, we can then control how\nwe recycle the logs, right? \n\n> > > > > Hmmm... the more I look at this, the more interested I become.\n> > > >\n> > > > My assumption is that once a log is full the point-in-time recovery\n> > > > daemon will copy that off somewhere, either to a different disk, tape,\n> > > > or over the network to another machine. Once it is done making a copy,\n> > > > the WAL log can be recycled, right? Am I missing something here?\n> > >\n> > > Ok... I wasn't thinking of having a point-in-time daemon. Some other\n> > > databases provide, for lack of a better term, user exits to allow\n> > > user defined scripts or programs to be called to perform log segment\n> > > archiving. This archiving is somewhat orthogonal to point-in-time\n> > > recovery proper.\n> > >\n> > > Yep, once the archiving is complete, you can do whatever you want\n> > > with the local log segment.\n> > \n> > We will clearly need something to transfer these WAL logs somewhere\n> > else, and it would be nice if it could be easily configured. I think a\n> > PIT logger daemon is the only solution, especially since tape/network\n> > transfer could take a long time. It would be forked by the postmaster\n> > so would cover all users and databases.\n> \n> Actually, it would be better if the entire logger was split out into\n> it's own process like the large commercial databases. Archiving the\n> log segments would just be one of the many functions of the logger\n> process. Just a thought.\n\nI think we already have a daemon that does checkpoints.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 13:25:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
},
{
"msg_contents": "\nErr.... PG_DUMP nightly on a 38,000,000+row table that takes forever to \ndump/unload, and gets updated every 5 minutes with 256KChar worth of \nupdates? \n\nGive me a FAST pg_dump, and I'll think about it, until then, no....\n\nLER\n(PS: this is also a reason for making a pg_upgrade work IN PLACE on a \ntable). \n\nLER\n>>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 7/18/01, 11:35:04 AM, Bruce Momjian <pgman@candle.pha.pa.us> wrote \nregarding Re: [HACKERS] Idea: recycle WAL segments, don't delete/recreate \n'em:\n\n\n> > > > Yes, but in a very roundabout way (or so it seems). The main point\n> > > > that I was trying to illustrate was that if a database supports\n> > > > point-in-time recovery, recycling of the only available log segments\n> > > > is a bad thing. And, yes, in practice if you have point-in-time\n> > > > recovery enabled you better archive your logs with your backup to\n> > > > ensure that you can roll forward as expected.\n> > >\n> > > I assume you are not going to do point-in-time recovery by keeping all\n> > > the WAL segments around on the same disk.\n> >\n> > Of course not. As mentioned, you'd probably archive them with your\n> > backup(s).\n\n> You mean the nigthly backup? Why not do a pg_dump and be done with it.\n\n> > > You have to copy them off\n> > > somewhere, right, and once you have copied them, why not reuse them?\n> >\n> > I'm not arguing that point. I stated \"recycling of the only available\n> > log segments\". Once the log segment is archived (copied) elsewhere\n> > you have two available images of the same segment. You can rename\n> > the local copy.\n\n> Yes, OK, I see now. As Tom mentioned, there would have to be some delay\n> where we allow the WAL log to be archived before reusing it.\n\n> > > > A possible solution (as I mentioned before)) is to have 2 methods\n> > > > of logging available: circular and forward-recoverable. 
When a\n> > > > database is created, the creator selects which type of logging to\n> > > > perform. The log segments are exactly the same, only the recycling\n> > > > method is different.\n> > >\n> > > Will not fly. We need a solution that is flexible.\n> >\n> > Could you expand on that a little (ie. flexible in which way).\n> > Offering the user a choice of two is more flexible than offering no\n> > choice.\n\n> We normally don't give users choices unless we can't come up with a\n> win-win solution to the problem. In this case, we could just query to\n> see if the WAL PIT archiver is running and handle tune reuse of log\n> segments on the fly. In fact, my guess is that the PIT archiver will\n> have to tell the system when it is done with WAL logs anyway.\n\n> > > > Hmmm... the more I look at this, the more interested I become.\n> > >\n> > > My assumption is that once a log is full the point-in-time recovery\n> > > daemon will copy that off somewhere, either to a different disk, tape,\n> > > or over the network to another machine. Once it is done making a copy,\n> > > the WAL log can be recycled, right? Am I missing something here?\n> >\n> > Ok... I wasn't thinking of having a point-in-time daemon. Some other\n> > databases provide, for lack of a better term, user exits to allow\n> > user defined scripts or programs to be called to perform log segment\n> > archiving. This archiving is somewhat orthogonal to point-in-time\n> > recovery proper.\n> >\n> > Yep, once the archiving is complete, you can do whatever you want\n> > with the local log segment.\n\n> We will clearly need something to transfer these WAL logs somewhere\n> else, and it would be nice if it could be easily configured. I think a\n> PIT logger daemon is the only solution, especially since tape/network\n> transfer could take a long time. 
It would be forked by the postmaster\n> so would cover all users and databases.\n\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n\n> http://www.postgresql.org/search.mpl\n",
"msg_date": "Wed, 18 Jul 2001 20:00:29 GMT",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Idea: recycle WAL segments, don't delete/recreate 'em"
}
] |
[
{
"msg_contents": "in the old days (7.0.3) i could list databases via\n\n\tpsql -l\n\nbut these days (7.1) i must\n\n\tpsql -l [-d] nameOfADatabaseFromPreordainedKnowledge\n\nprobably because of some fuxnored setting. but which?\n\n-- \nI'd concentrate on \"living in the now\" because it is fun\nand on building a better world because it is possible.\n\t- Tod Steward\n\nwill@serensoft.com\nhttp://sourceforge.net/projects/newbiedoc -- we need your brain!\nhttp://www.dontUthink.com/ -- your brain needs us!\n",
"msg_date": "Tue, 17 Jul 2001 11:11:56 -0500",
"msg_from": "will trillich <will@serensoft.com>",
"msg_from_op": true,
"msg_subject": "psql -l"
},
{
"msg_contents": "will trillich writes:\n\n> in the old days (7.0.3) i could list databases via\n>\n> \tpsql -l\n>\n> but these days (7.1) i must\n>\n> \tpsql -l [-d] nameOfADatabaseFromPreordainedKnowledge\n>\n> probably because of some fuxnored setting. but which?\n\nEvidence please?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Tue, 17 Jul 2001 21:26:02 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "will trillich <will@serensoft.com> writes:\n> in the old days (7.0.3) i could list databases via\n> \tpsql -l\n> but these days (7.1) i must\n> \tpsql -l [-d] nameOfADatabaseFromPreordainedKnowledge\n> probably because of some fuxnored setting. but which?\n\nSounds like you've got pg_hba.conf set to disallow connections to\ntemplate1, which is what psql tries to connect to when executing\na plain \"psql -l\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2001 16:34:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql -l "
},
{
"msg_contents": "> in the old days (7.0.3) i could list databases via\n>\n> psql -l\n>\n> but these days (7.1) i must\n>\n> psql -l [-d] nameOfADatabaseFromPreordainedKnowledge\n>\n> probably because of some fuxnored setting. but which?\n\npsql -l works fine for me... 7.1.2. Does it return an error or just not\nwork?\n\nGreg\n\n",
"msg_date": "Tue, 17 Jul 2001 16:54:29 -0400",
"msg_from": "\"Gregory Wood\" <gregw@com-stock.com>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "> will trillich <will@serensoft.com> writes:\n> > in the old days (7.0.3) i could list databases via\n> > \tpsql -l\n> > but these days (7.1) i must\n> > \tpsql -l [-d] nameOfADatabaseFromPreordainedKnowledge\n> > probably because of some fuxnored setting. but which?\n> \n> Sounds like you've got pg_hba.conf set to disallow connections to\n> template1, which is what psql tries to connect to when executing\n> a plain \"psql -l\".\n\nEwe, I never realized the problems with disabling template1\nconnections.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 17:25:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "On Tue, Jul 17, 2001 at 09:26:02PM +0200, Peter Eisentraut wrote:\n> will trillich writes:\n> \n> > in the old days (7.0.3) i could list databases via\n> >\n> > \tpsql -l\n> >\n> > but these days (7.1) i must\n> >\n> > \tpsql -l [-d] nameOfADatabaseFromPreordainedKnowledge\n> >\n> > probably because of some fuxnored setting. but which?\n> \n> Evidence please?\n\nthis was 7.0.3, as it'll tell you:\n\n$ psql -V\npsql (PostgreSQL) 7.0.3\ncontains readline, history, multibyte support\nPortions Copyright (c) 1996-2000, PostgreSQL, Inc\nPortions Copyright (c) 1996 Regents of the University of California\nRead the file COPYRIGHT or use the command \\copyright to see the\nusage and distribution terms.\n\n$ psql -l\n List of databases\n Database | Owner | Encoding \n-----------+----------+----------\n admin | rdt | LATIN1\n agf | will | LATIN1\n camp | rdt | LATIN1\n ed | will | LATIN1\n gunk | will | LATIN1\n puz | will | LATIN1\n secsed | will | LATIN1\n template1 | postgres | LATIN1\n testorama | will | LATIN1\n tharp | rdt | LATIN1\n tips | will | LATIN1\n will | will | LATIN1\n(12 rows)\n\nbut with v 7.1 i get\n\n$ psql -V\nNo database specified\n$ psql -l\nNo database specified\n$ psql -V mydb\npsql (PostgreSQL) 7.1\ncontains readline, history, multibyte support\nPortions Copyright (c) 1996-2001, PostgreSQL Global Development Group\nPortions Copyright (c) 1996 Regents of the University of California\nRead the file COPYRIGHT or use the command \\copyright to see the\nusage and distribution terms.\n\n\n-- \nI'd concentrate on \"living in the now\" because it is fun\nand on building a better world because it is possible.\n\t- Tod Steward\n\nwill@serensoft.com\nhttp://sourceforge.net/projects/newbiedoc -- we need your brain!\nhttp://www.dontUthink.com/ -- your brain needs us!\n",
"msg_date": "Tue, 17 Jul 2001 22:42:48 -0500",
"msg_from": "will trillich <will@serensoft.com>",
"msg_from_op": true,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "On Tue, Jul 17, 2001 at 04:34:29PM -0400, Tom Lane wrote:\n> will trillich <will@serensoft.com> writes:\n> > in the old days (7.0.3) i could list databases via\n> > \tpsql -l\n> > but these days (7.1) i must\n> > \tpsql -l [-d] nameOfADatabaseFromPreordainedKnowledge\n> > probably because of some fuxnored setting. but which?\n> \n> Sounds like you've got pg_hba.conf set to disallow connections to\n> template1, which is what psql tries to connect to when executing\n> a plain \"psql -l\".\n\nhere's the totality of pg_hba.conf (sans comments):\n\n# grep -v '#' /etc/postgresql/pg_hba.conf | uniq\n\nlocal all trust\nhost all 127.0.0.1 255.0.0.0 trust\nhost all 192.168.1.0 255.255.255.0 trust\nhost all 0.0.0.0 0.0.0.0 crypt\n\n$ psql -V\nNo database specified\n\n$ psql -V template0\npsql (PostgreSQL) 7.1\ncontains readline, history, multibyte support\nPortions Copyright (c) 1996-2001, PostgreSQL Global Development Group\nPortions Copyright (c) 1996 Regents of the University of California\nRead the file COPYRIGHT or use the command \\copyright to see the\nusage and distribution terms.\n\n\ncuriouser and curiouser.\n\n-- \nI'd concentrate on \"living in the now\" because it is fun\nand on building a better world because it is possible.\n\t- Tod Steward\n\nwill@serensoft.com\nhttp://sourceforge.net/projects/newbiedoc -- we need your brain!\nhttp://www.dontUthink.com/ -- your brain needs us!\n",
"msg_date": "Tue, 17 Jul 2001 22:50:21 -0500",
"msg_from": "will trillich <will@serensoft.com>",
"msg_from_op": true,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "will trillich <will@serensoft.com> writes:\n> $ psql -V\n> No database specified\n\nThis seems awfully fishy, since (a) there is no such error message\nanywhere in 7.1, and (b) I don't get that behavior out of 7.1:\n\n$ ~postgres/version71/bin/psql -V\npsql (PostgreSQL) 7.1.2\ncontains readline, history support\nPortions Copyright (c) 1996-2001, PostgreSQL Global Development Group\nPortions Copyright (c) 1996 Regents of the University of California\nRead the file COPYRIGHT or use the command \\copyright to see the\nusage and distribution terms.\n$\n\nPerhaps you are invoking psql through some kind of wrapper script that\nis doing the wrong thing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 11:57:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql -l "
},
{
"msg_contents": "On Wed, Jul 18, 2001 at 11:57:35AM -0400, Tom Lane wrote:\n> will trillich <will@serensoft.com> writes:\n> > $ psql -V\n> > No database specified\n> \n> This seems awfully fishy, since (a) there is no such error message\n> anywhere in 7.1, and (b) I don't get that behavior out of 7.1:\n> \n> $ ~postgres/version71/bin/psql -V\n> psql (PostgreSQL) 7.1.2\n> contains readline, history support\n> Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group\n> Portions Copyright (c) 1996 Regents of the University of California\n> Read the file COPYRIGHT or use the command \\copyright to see the\n> usage and distribution terms.\n> $\n> \n> Perhaps you are invoking psql through some kind of wrapper script that\n> is doing the wrong thing?\n\ni'm wondering now if this may be a debian packaging mishap...\nor maybe i've got psql v7.0.3 trying to work with postmaster\n7.1, but that doesn't seem likely...\n\nat any rate -- maybe this session with 'strings' might help you\nidentify what's goofing up here.\n\n-- \nI'd concentrate on \"living in the now\" because it is fun\nand on building a better world because it is possible.\n\t- Tod Steward\n\nwill@serensoft.com\nhttp://sourceforge.net/projects/newbiedoc -- we need your brain!\nhttp://www.dontUthink.com/ -- your brain needs us!\n\nScript started on Wed Jul 18 12:21:42 2001\n\n$ which psql\n/usr/bin/psql\n\n$ ls -l `which psql`\nlrwxrwxrwx 1 root root 10 Jul 1 22:28 /usr/bin/psql -> pg_wrapper\n\n$ psql -V\nNo database specified\n\n$ psql -V ed\npsql (PostgreSQL) 7.1\ncontains readline, history, multibyte support\nPortions Copyright (c) 1996-2001, PostgreSQL Global Development Group\nPortions Copyright (c) 1996 Regents of the University of California\nRead the file COPYRIGHT or use the command \\copyright to see the\nusage and distribution terms.\n\n$ strings `which psql` | grep -i speci\nNo database specified\n\n$ strings `which psql` | wc -l\n 74\n\n$ strings `which 
psql`\n/lib/ld-linux.so.2\n__gmon_start__\nlibc.so.6\nstrcpy\ngetopt_long\ngetenv\npclose\nmalloc\noptarg\npopen\nrindex\nfprintf\n__deregister_frame_info\ngetdelim\noptind\nsetenv\n__strdup\nexecv\nstrncat\nmemset\nstderr\nexit\n_IO_stdin_used\n__libc_start_main\n__register_frame_info\nGLIBC_2.1\nGLIBC_2.0\nPTRh\nVj=Wj\n/usr/lib/postgresql/bin/\n/var/lib/postgres/data\nPGDATA\n/usr/lib/postgresql/lib\nPGLIB\n5432\nPGPORT\npg_wrapper\npg_wrapper cannot be run as itself, but only through a link\nwhose name is that of the real program to be run.\npsql\nhelp\nno-psqlrc\nexpanded\npassword\nversion\nvariable\nusername\ntable-attr\ntuples-only\nsingle-line\nsingle-step\nrecord-separator\npset\nport\noutput\nlist\nhtml\nhost\nfield-separator\nfile\necho-hidden\necho-queries\ndbname\ncommand\nno-align\necho-all\naAc:d:Eef:F:h:Hlo:p:P:qR:sStT:uU:v:VWxX?\nPGDATABASE\nCould not execv %s\nNo database specified\nenv - /usr/lib/postgresql/bin/readpgenv\nCould not run %s\nUnexpected input from %s:\n%s=%s\n\n$ exit\n\nScript done on Wed Jul 18 12:21:53 2001",
"msg_date": "Wed, 18 Jul 2001 12:42:19 -0500",
"msg_from": "will trillich <will@serensoft.com>",
"msg_from_op": true,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "On Wednesday 18 July 2001 11:57, Tom Lane wrote:\n> will trillich <will@serensoft.com> writes:\n> > $ psql -V\n> > No database specified\n\n> This seems awfully fishy, since (a) there is no such error message\n> anywhere in 7.1, and (b) I don't get that behavior out of 7.1:\n\n> Perhaps you are invoking psql through some kind of wrapper script that\n> is doing the wrong thing?\n\nDebian, perhaps? From the Debian patchfile: ( \nhttp://non-us.debian.org/debian-non-US/pool/non-US/main/p/postgresql/postgresql_7.1.2-1.diff.gz \n)\n+Debian-specific features\n+========================\n+\n+There are certain differences between the Debian version of PostgreSQL\n+and the upstream version. There are two reasons for this. First,\n+because Debian policy requires certain things to be done in a manner\n+different from that used by the upstream developers, and second, because\n+I perceive a difference between a piece of software that is put onto \n+a machine by an ordinary user and one that is installed, as part of a\n+distribution, by the system administrator.\n+\n+1. Environment variables: Debian forbids packages to depend on users'\n+ setting environment variables. For this reason, certain front-end\n+ programs, especially psql, are run through a wrapper that sets up\n+ the environment.\n+\n+2. Default database: the upstream version defaults to a database whose\n+ name is the same as the name of the PostgreSQL user who is trying to\n+\taccess it. I do not think this is appropriate to a distribution, so\n+\tin Debian, the database must be specified on the command line or in\n+\tthe environment variable PGDATABASE.\n+\n+3. Initialising the postmaster: the upstream version uses a program called\n+ pg_ctl, that was introduced at release 7.0, to start up and stop the\n+ postmaster. 
I do not feel that this sits very comfortably with\n+ Debian's way of starting backend processes, so I have continued to use\n+ the procedure I developed for previous versions, whereby\n+ /etc/init.d/postgresql calls postgresql-startup or start-stop-daemon.\n+ I will be borrowing nice features from pg_ctl to incorporate in the\n+ init.d script.\n+\n+4. Initial environment: Debian stores its setup files in /etc/postgresql.\n+ These files are postmaster.conf, pg_hba.conf and postgresql.env, and any\n+ files referenced by pg_hba.conf. They are self-documented, so you are\n+ advised to leave the coments alone if you edit them. Where necessary,\n+ there are symbolic links to the locations where the upstream code\n+ expects to find them.\n+\n+5. Location of socket: in previous versions the socket file was located\n+ in /tmp/. It has now been moved to /var/run/postgresql/ so as to avoid\n+\tproblems with packages such as tmpreaper and to be more consistent\n+\twith Debian policy. This location can be altered by setting\n+\tUNIX_SOCKET_DIRECTORY in postgresql.conf.\n+\n+6. Unix socket authentication is provided (authentication type \"peer\").\n+ This works just like ident, but for Unix sockets; this provides a more\n+\tsecure method of authentication than ident, and does not require\n+\tadministrators to run identd on their servers. This authentication\n+\tmethod has been submitted to the upstream developers, but is not\n+\tcurrently part of the upstream release.\n+\n\nThis excerpt from the file README.Debian.\n\nThe error message itself is being issued by the Debian 'pg_wrapper' program \n(see the pg_wrapper source embedded in the patchfile starting at line 5434 -- \nthe error message itself is at line 5646 in the patchfile).\n\nI can sympathize with Oliver here -- distribution policy can be a pain to \ndeal with, although Red Hat's policy isn't apparently as strict as Debian's. 
\nI also sympatchize and agree with Oliver's statement about differences \nbetween packages that are installed by a user in /usr/local and packages \ninstalled by the system administrator as part of an operating system.\n\nWe're not, eh, 'distribution-friendly' in reality -- although Peter's work on \nthe build system really helped the RPM side of things. The 'traditional \nPostgreSQL installation' is more 'user-install'-centric -- which is OK, as \nlong as everybody knows what the packagers are doing... :-)\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Jul 2001 15:30:29 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "On Wednesday 18 July 2001 15:30, Lamar Owen wrote:\n> Debian's. I also sympatchize and agree with Oliver's statement about\n ^^^^^^^^^^^\nFreudian slip... ROTFL....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Jul 2001 16:07:17 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "\nWhat is not appropriate is that we are getting error reports for\nprograms we didn't write!\n\n> On Wednesday 18 July 2001 11:57, Tom Lane wrote:\n> > will trillich <will@serensoft.com> writes:\n> > > $ psql -V\n> > > No database specified\n> \n> > This seems awfully fishy, since (a) there is no such error message\n> > anywhere in 7.1, and (b) I don't get that behavior out of 7.1:\n> \n> > Perhaps you are invoking psql through some kind of wrapper script that\n> > is doing the wrong thing?\n\n[ Much pontificating skipped.]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 16:30:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "On Wednesday 18 July 2001 16:30, Bruce Momjian wrote:\n> What is not appropriate is that we are getting error reports for\n> programs we didn't write!\n\nWhich is why I believe that Red Hat's 'relabeling' is a Good Thing -- _they_ \nwill get bug reports for 'Red Hat Database' -- then _they_ can contact the \nhackers list after verifying that it isn't their 'modifications' causing the \nproblem.\n\nI really try to make the RPM's patching minimal -- directory paths, a README, \nmigration helper scripts, and the initscript are the only 'enhancements' I \nprovide. And I try to field any reports I see on the list....\n\nHowever, Oliver is a part of 'we' isn't he? \n\n> [ Much pontificating skipped.]\n\nInteresting choice of words.... I wasn't intending to sound authoritarian.... \nor religious. Sorry if I did sound pontifical.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Jul 2001 17:38:09 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "> On Wednesday 18 July 2001 16:30, Bruce Momjian wrote:\n> > What is not appropriate is that we are getting error reports for\n> > programs we didn't write!\n> \n> Which is why I believe that Red Hat's 'relabeling' is a Good Thing -- _they_ \n> will get bug reports for 'Red Hat Database' -- then _they_ can contact the \n> hackers list after verifying that it isn't their 'modifications' causing the \n> problem.\n> \n> I really try to make the RPM's patching minimal -- directory paths, a README, \n> migration helper scripts, and the initscript are the only 'enhancements' I \n> provide. And I try to field any reports I see on the list....\n> \n> However, Oliver is a part of 'we' isn't he? \n\nThe strange part is that we are running around trying to figure out if\nit is a bug and no one knows that Debian has modified it.\n\nThe Red Hat stuff isn't a problem because you recognize it as such.\n\n> > [ Much pontificating skipped.]\n> \n> Interesting choice of words.... I wasn't intending to sound authoritarian.... \n> or religious. Sorry if I did sound pontifical.\n\nNo, it was the Debian guy who sounded funny.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 18:03:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "On Wednesday 18 July 2001 18:03, Bruce Momjian wrote:\n> The strange part is that we are running around trying to figure out if\n> it is a bug and no one knows that Debian has modified it.\n\nWhile I understand Oliver's reasons for having the Debian stuff on the debian \nserver, I believe it would be appropriate to have the patchfile and the \nvarious Debian README's available on the main postgresql site.\n\nIt was a hunch that sent me looking there -- I had read the Debian patchfile \nbefore (about the 'peer' authentication deal), and apparently my subconscious \npicked up on it.\n\n> The Red Hat stuff isn't a problem because you recoginize it as such.\n\nThe RPM has a rather distinctive footprint, yes....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Jul 2001 18:16:27 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: psql -l"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Wednesday 18 July 2001 18:03, Bruce Momjian wrote:\n>> The strange part is that we are running around trying to figure out if\n>> it is a bug and no one knows that Debian has modified it.\n\n> While I understand Oliver's reasons for having the Debian stuff on the\n> debian server, I believe it would be appropriate to have the patchfile\n> and the various Debian README's available on the main postgresql site.\n\nISTM that it'd be a good thing if current versions of all the add-on\nsource files for both Debian and RedHat RPMs were part of our CVS tree\n(perhaps in /contrib, perhaps somewhere else, but anyway in the tree).\nHad I been able to find that \"No database specified\" string by grepping\nthe sources, I'd have been much less mystified. Likewise for the \"peer\"\nquestion a week or two back, and the questions we sometimes get about\nthe behavior of startup scripts that aren't even part of our tarball.\n\nThis sort of thing is going to keep coming up, so we might as well admit\nthat we need to know what is in the RPMs.\n\nOliver, Lamar, what say you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 22:42:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "On Wednesday 18 July 2001 10:42 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > While I understand Oliver's reasons for having the Debian stuff on the\n> > debian server, I believe it would be appropriate to have the patchfile\n> > and the various Debian README's available on the main postgresql site.\n\n> ISTM that it'd be a good thing if current versions of all the add-on\n> source files for both Debian and RedHat RPMs were part of our CVS tree\n> (perhaps in /contrib, perhaps somewhere else, but anyway in the tree).\n> Had I been able to find that \"No database specified\" string by grepping\n> the sources, I'd have been much less mystified. Likewise for the \"peer\"\n> question a week or two back, and the questions we sometimes get about\n> the behavior of startup scripts that aren't even part of our tarball.\n\nDeja vu... didn't we have this discussion a month or two back?? :-) ( \nhttp://fts.postgresql.org/db/mw/msg.html?mid=115437#thread )\n\nI'm all for it for the RPM's, at least, if others are game. We left off with \nthe question of where it would best be stored....\n\nThere is, in fact, an outstanding issue with the RPM initscript that I'm \nstill working on -- the 'sometimes I get a failure that isn't really a \nfailure' deal....I can't reproduce it.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Jul 2001 23:55:15 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Wednesday 18 July 2001 10:42 pm, Tom Lane wrote:\n>> ISTM that it'd be a good thing if current versions of all the add-on\n>> source files for both Debian and RedHat RPMs were part of our CVS tree\n\n> Deja vu... didn't we have this discussion a month or two back?? :-) ( \n> http://fts.postgresql.org/db/mw/msg.html?mid=115437#thread )\n\nYeah, we did. You seemed willing, but there was a notable silence\nfrom the Debian quarter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 00:11:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l) "
},
{
"msg_contents": "On Thu, Jul 19, 2001 at 12:11:47AM -0400, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > On Wednesday 18 July 2001 10:42 pm, Tom Lane wrote:\n> >> ISTM that it'd be a good thing if current versions of all the add-on\n> >> source files for both Debian and RedHat RPMs were part of our CVS tree\n> \n> > Deja vu... didn't we have this discussion a month or two back?? :-) ( \n> > http://fts.postgresql.org/db/mw/msg.html?mid=115437#thread )\n> \n> Yeah, we did. You seemed willing, but there was a notable silence\n> from the Debian quarter.\n\nThere have been discussions in the past on the debian mailing lists about\nwhether it is a good idea for upstream sources to include the debian patch.\nThe gist of it is that since debian builds packages based on a pristine source\ntar ball and a patch, if the patch were upstream, would the patch have to\npatch its own upstream version?\n\nIf however they were merely stored in contrib/distributions/patches or some\nsuch and there was an understanding that that may not match what is\ncurrently available from debian, then I see no problem.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org>\nhttp://svana.org/kleptog/\n> It would be nice if someone came up with a certification system that\n> actually separated those who can barely regurgitate what they crammed over\n> the last few weeks from those who command secret ninja networking powers.\n",
"msg_date": "Thu, 19 Jul 2001 15:33:33 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "Tom Lane writes:\n\n> ISTM that it'd be a good thing if current versions of all the add-on\n> source files for both Debian and RedHat RPMs were part of our CVS tree\n\nIf you want to take the job of keeping these up to date or the job of\nconvincing all the 143 package developers out there to do that...\n\nAll of these package files are in a CVS tree somewhere (or should be), and\nI would be happy to provide you with the links I use to poke around them\non occasion, but it should not be our job to keep track of the fancies of\noperating system developers.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 20 Jul 2001 16:05:30 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "On Friday 20 July 2001 10:05, Peter Eisentraut wrote:\n> Tom Lane writes:\n> > ISTM that it'd be a good thing if current versions of all the add-on\n> > source files for both Debian and RedHat RPMs were part of our CVS tree\n\n> If you want to take the job of keeping these up to date or the job of\n> convincing all the 143 package developers out there to do that...\n\nThat would be my job to do for the RPM's, right? :-) I'm already keeping \nthem up -- and I know how to use CVS to do it, having commit access on a \ncouple of other projects.\n\nLet me just say one thing about 'our' RPMs -- they are intended as a \nsemi-standard base from which OS distributors may make their own packages, \nfor which they are responsible. A side benefit is that people can grab the \nlatest hopefully-generic RPMset from us if their OS vendor isn't forthcoming \nwith updated packages.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 20 Jul 2001 11:07:05 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "> Deja vu... didn't we have this discussion a month or two back?? :-) ( \n> http://fts.postgresql.org/db/mw/msg.html?mid=115437#thread )\n> \n> I'm all for it for the RPM's, at least, if others are game. We left off with \n> the question of where it would best be stored....\n> \n> There is, in fact, an outstanding issue with the RPM initscript that I'm \n> still working on -- the 'sometimes I get a failure that isn't really a \n> failure' deal....I can't reproduce it.\n\nLet me add that Red Hat is now distributing a different RPM with their\nRed Hat Database, or at least I think they are. Can someone confirm?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 11:24:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "> Tom Lane writes:\n> \n> > ISTM that it'd be a good thing if current versions of all the add-on\n> > source files for both Debian and RedHat RPMs were part of our CVS tree\n> \n> If you want to take the job of keeping these up to date or the job of\n> convincing all the 143 package developers out there to do that...\n> \n> All of these package files are in a CVS tree somewhere (or should be), and\n> I would be happy to provide you with the links I use to poke around them\n> on occasion, but it should not be our job to keep track of the fancies of\n> operating system developers.\n\nI am slightly concerned about bloating our CVS tree.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 11:25:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "On Friday 20 July 2001 11:24, Bruce Momjian wrote:\n> Let me add that Red Hat is now distributing a different RPM with their\n> Red Hat Database, or at least I think they are. Can someone confirm?\n\nTrond may be able to.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 20 Jul 2001 11:26:59 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> On Friday 20 July 2001 11:24, Bruce Momjian wrote:\n> > Let me add that Red Hat is now distributing a different RPM with their\n> > Red Hat Database, or at least I think they are. Can someone confirm?\n> \n> Trond may be able to.\n\nThe rpms of the Red Hat database are called rhdb, to avoid confusion.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "20 Jul 2001 11:35:37 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "On Friday 20 July 2001 11:25, Bruce Momjian wrote:\n> I am slightly concerned about bloating our CVS tree.\n\n[lowen@utility pgsql]$ du -s *\n4 COPYRIGHT\n20 CVS\n276 ChangeLogs\n4 GNUmakefile.in\n124 HISTORY\n36 INSTALL\n4 Makefile\n4 README\n4 aclocal.m4\n160 config\n240 configure\n36 configure.in\n3520 contrib\n7196 doc\n4 register.txt\n30128 src\n[lowen@utility pgsql]$\n\nThe RPM additions are:\n56 contrib-intarray.tar.gz\n4 file-lists.tar.gz\n180 jdbc7.0-1.1.jar\n92 jdbc7.1-1.2.jar\n8 migration-scripts.tar.gz\n4 postgresql-7.1.plperl.patch\n4 postgresql-7.1.s390x.patch\n4 postgresql-bashprofile\n4 postgresql-dump.1.gz\n8 postgresql.init\n20 README.rpm-dist\n4 rh-pgdump.sh\n8 rpm-pgsql-7.1.patch\n\nOf which the two jar files are derived from the source and wouldn't be \nnecessary. This totals 124K if I've done my math right.\n\nThe contrib-intarray.tar.gz is a new intarray from Red Hat -- I really need \nto investigate this more closely.... \n\n'file-lists.tar.gz' may go away if I don't need to support older SuSE \ndistributions. \n\n'migration-scripts.tar.gz' is a packaging of a few scripts used to perform a \ndump using saved binaries from the previous installation. \n\nThe .patch files are system-specific and RPM-specific patches that probably \nare not necessary to be applied to the main tree, as they mostly involve \npaths (particularly for the regression tests), with the exception of the s390 \npatch. I can send these patches to the list easily enough -- the s390 patch \nis also a Red Hat contribution.\n\n'postgresql-bashprofile' is a .bashprofile placed into $PGDATA during RPM \ninstallation (and, yes, unless someone goes and changes it the default shell \nfor the 'postgres' user is installed as bash).\n\n'postgresql-dump.1.gz' is a man page for 'postgresql-dump' -- part of the \nmigrations-scripts tarball.\n\n'rh-dump.sh' is another migration script that I really should fold into the \nmigrations-scripts tarball.... Although any of these migration scripts are a \nmite cantankerous, as a pg_dumpall from 6.5.x to be restored on 7.1.x has \nbeen shown to fail in certain instances (documented in the archives). Most \nnoticeable is the '60 minute' problem in 6.5's time types. 7.1.x takes a 6.5 \ndump a little easier than 7.0 did, though, IME.\n\n'postgresql.init' is the initscript installed into \n/etc/rc.d/init.d/postgresql.\n\n'README.rpm-dist' tries to document the RPM differences and _why_ there are \nRPM differences in the first place. While I technically _could_ patch the \ninstalled SGML documentation sources to reflect the RPM installation, I have \nnot yet done so. README.rpm-dist is also available for perusal via the \nwebsite as \nhttp://www.postgresql.org/ftpsite/binary/v7.1.2/RPMS/README.rpm-dist linked \noff of http://www.postgresql.org/sites.html .\n\nI'll be happy to document or discuss any of these files with anyone.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 20 Jul 2001 11:51:19 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "> On Friday 20 July 2001 11:25, Bruce Momjian wrote:\n> > I am slightly concerned about bloating our CVS tree.\n> The RPM additions are:\n> 56 contrib-intarray.tar.gz\n> 4 file-lists.tar.gz\n> 180 jdbc7.0-1.1.jar\n> 92 jdbc7.1-1.2.jar\n> 8 migration-scripts.tar.gz\n> 4 postgresql-7.1.plperl.patch\n> 4 postgresql-7.1.s390x.patch\n> 4 postgresql-bashprofile\n> 4 postgresql-dump.1.gz\n> 8 postgresql.init\n> 20 README.rpm-dist\n> 4 rh-pgdump.sh\n> 8 rpm-pgsql-7.1.patch\n> \n> Of which the two jar files are derived from the source and wouldn't be \n> necessary. This totals 124K if I've done my math right.\n\nBag the JAR's and it looks fine.\n\n> The contrib-intarray.tar.gz is a new intarray from Red Hat -- I really need \n> to investigate this more closely.... \n\nCan you research that? Why are they doing it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 12:03:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "On Friday 20 July 2001 12:03, Bruce Momjian wrote:\n> > The contrib-intarray.tar.gz is a new intarray from Red Hat -- I really\n> > need to investigate this more closely....\n\n> Can you research that? Why are they doing it?\n\nIt looks like the updated intarray from Oleg. The diff between what is in the \n7.1.2 tarball (which is the same as what is in current CVS) is 26K (the whole \nintarray directory du's at 192K), and appears to be extensive in nature, with \na warning that this is _only_ for PostgreSQL 7.1 and above. Diff to 7.1.2 \nattached.\n\nOleg announced the new intarray in this message: \nhttp://fts.postgresql.org/db/mw/msg.html?mid=120655 and there was discussion \nfollowing. But I don't see this version in CURRENT CVS??? Hmmm.... I don't \nsee the README changes in current CVS, but I do see the code changes....\n\nThe contrib support in the RPMset is fairly new, and Trond made this change \nthat I synced in place. Should I not ship the updated intarray?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11",
"msg_date": "Fri, 20 Jul 2001 12:28:33 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "Lamar Owen writes:\n\n> The RPM additions are:\n> 56 contrib-intarray.tar.gz\n> 4 file-lists.tar.gz\n> 8 migration-scripts.tar.gz\n\nNo tar or gz files in CVS.\n\n> 4 postgresql-7.1.plperl.patch\n> 4 postgresql-7.1.s390x.patch\n\nNo patches in CVS.\n\n> 4 postgresql-bashprofile\n\nNot sure what's in there exactly, so I can't comment.\n\n> 4 postgresql-dump.1.gz\n\nWhat's wrong with pg_dump?\n\n> 8 postgresql.init\n\nWe already have one of those.\n\n> 4 rh-pgdump.sh\n\nSee above.\n\n> 8 rpm-pgsql-7.1.patch\n\nSee above.\n\n\nSee, if we want to get into the packaging business for real, then there\nshould not be any patches or extra programs or extra features distributed.\nFixes should be incorporated, not archived.\n\nThen again, there are at least six other packaging efforts out there which\ncome with yet another set of patches and what not so I see this getting\nmessy.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 20 Jul 2001 18:30:36 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> The contrib-intarray.tar.gz is a new intarray from Red Hat -- I really need \n> to investigate this more closely.... \n\nSeems like that should go into our regular contrib tree.\n\n> The .patch files are system-specific and RPM-specific patches that\n> probably are not necessary to be applied to the main tree, as they\n> mostly involve paths (particularly for the regression tests), with the\n> exception of the s390 patch. I can send these patches to the list\n> easily enough -- the s390 patch is also a Red Hat contribution.\n\nPortability patches also seem like they should go into the main sources.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 12:35:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l) "
},
{
"msg_contents": "On Friday 20 July 2001 12:30, Peter Eisentraut wrote:\n> Lamar Owen writes:\n> > 8 migration-scripts.tar.gz\n\n> No tar or gz files in CVS.\n\nThat's easy enough to fix.\n\n> > 4 postgresql-7.1.plperl.patch\n> > 4 postgresql-7.1.s390x.patch\n\n> No patches in CVS.\n\nThen the configure system needs to support an optional FHS-compliant mode, as \nwell as alternative builds for plperl.... :-) The biggest patching by far is \nin the regression tests, which really are not designed to live outside the \nsource tree, but can be munged into shape fairly easily.\n\nWhy no patches in CVS? How do you propose to handle the differences, \nparticularly if the debian stuff is placed in CVS (that diff is 10,884 lines \nand 362,119 bytes in length)?\n\n> > 4 postgresql-dump.1.gz\n\n> What's wrong with pg_dump?\n\nIt only uses the current executables. After an 'rpm -U' older executables \nare kept around by the server subpackage's %pre scriptlet, and \npostgresql-dump/rh-dump massage the ld envvars to force execution of those \nbinaries to take a dump of the old database. If a good upgrade path existed, \nthis kludge would be unnecessary.\n\n> > 8 postgresql.init\n\n> We already have one of those.\n\nThat is too generic. This one is rather simple and is designed to work well \nin the FHS-compliant environment, without lots of guessing where things are \n-- the RPMset has a rigid structure where no guesswork is really necessary \n(except for the location of documentation, which changes from dist to dist). \nThis one also automatically initdb's for you in the correct (for the RPMset) \nplace and with saving of locale values, etc, for later use, making sure to \nstart the postmaster with the same locale as initdb was executed with... \namongst other things. \n\nIf an older database is found, a message pointing to the README.rpm-dist is \nsupplied, and postmaster is not started.\n\n> See, if we want to get into the packaging business for real, then there\n> should not be any patches or extra programs or extra features distributed.\n> Fixes should be incorporated, not archived.\n\nThen there needs to be a little more willingness to accept patches which \nprovide FHS-compliance (as an option). So your opinion is that we're not \nreally in the packaging business, right? Or am I misunderstanding that?\n\nShould package-specific programs be distributed as part of the tarball that \neveryone downloads? Not likely. \n\nBeing in CVS != being in the tarball(s), does it?\n\nAnd having the spec file (which I inadvertently omitted above -- it's another \n30k or so) in CVS would be a real boon to many.\n\n> Then again, there are at least six other packaging efforts out there which\n> come with yet another set of patches and what not so I see this getting\n> messy.\n\nSix? I know of PLD's difference, and SuSE's difference -- but both of them \nhave synced with our set periodically. There's four others? Or are you \nreferring to non-RPM packages, such as Cygwin, Debian, and (four others)?\n\nHey, I'm fine either way -- it was Tom's suggestion, in order to help all the \ndevelopers out, not just me. It matters little to me if it's in the \npostgresql.org CVS or not -- but it could help him and others track stuff \ndown (recursive grep with error messages, for instance).\n\nPoint being that bug reports that involve changes to the core code by \npackages are happening, and confusion has resulted. A solution needs to be \nfound -- and, frankly, the packages aren't going away.\n\nI'm going to go back two and a half years and re-read the stuff that led up \nto me fielding this particular portion of our development, to refresh my \nperspective.....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 20 Jul 2001 12:51:24 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Oleg announced the new intarray in this message:\n> http://fts.postgresql.org/db/mw/msg.html?mid=120655 and there was\n> discussion following. But I don't see this version in CURRENT CVS???\n\nI believe the state of play is that we have some catalog-changing work\nto do to support GIST (ie, make \"haskeytype\" work cleanly), and merging\nthe updated intarray code is on hold until that gets done --- hopefully,\nearly next month. We're probably not exactly in sync with what Oleg\nhas, but I'm not worried about it until the GIST dust settles.\n\nI dunno whether it makes sense to be shipping the updated intarray code\nwith 7.1; does it work properly in 7.1? Ask Oleg ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 16:33:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l) "
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> The biggest patching by far is \n> in the regression tests, which really are not designed to live outside the \n> source tree, but can be munged into shape fairly easily.\n\nPeter has already done good work in making it possible to build outside\nthe source tree. ISTM that it would make logical sense to allow\nregression tests to be run outside the source tree as well, as long as\nthe changes don't break the existing procedures. I have not looked at\nyour patches in this area --- what do they need to do, exactly?\n\n> Being in CVS != being in the tarball(s), does it?\n\nYes. When this was discussed last time, I think the conclusion was that\npackaging scripts should be in a different cvs module from the core\nsources.\n\nI think there are really two separate discussions going on here: one is\nwhether we shouldn't try harder to roll some of the RPMset diffs back\ninto the main sources, and the other is how we can make information\nabout some of the popular packages more readily visible/available to the\ndevelopers. Peter's stance on the latter seems to be \"go look at the\npackagers' sites\", which is defensible, but that's the current approach\nand I think it's not working. Leastwise I sure have no idea what's in\nthe packages. If I can pull down one additional CVS module from hub.org\nand include that in my Postgres glimpse searches, I am actually likely\nto expend that much effort, and as a result will be a lot better\ninformed.\n\n> Point being that bug reports that involve changes to the core code by \n> packages are happening, and confusion has resulted. A solution needs to be \n> found -- and, frankly, the packages aren't going away.\n\nExactly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 17:14:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l) "
},
{
"msg_contents": "[cc: to GENERAL replaced by cc: to HACKERS]\nOn Friday 20 July 2001 17:14, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > The biggest patching by far is\n> > in the regression tests, which really are not designed to live outside\n> > the source tree, but can be munged into shape fairly easily.\n\n> Peter has already done good work in making it possible to build outside\n> the source tree. ISTM that it would make logical sense to allow\n> regression tests to be run outside the source tree as well, as long as\n> the changes don't break the existing procedures. I have not looked at\n> your patches in this area --- what do they need to do, exactly?\n\nOk, let's look. First, there is a createlang issue: during build, @libdir@ \nas referenced in the createlang script references /usr/lib, instead of \n/usr/lib/pgsql, which is desired. So the first patch is:\ndiff -uNr postgresql-7.1.2.orig/src/bin/scripts/createlang.sh \npostgresql-7.1.2/src/bin/scripts/createlang.sh\n--- postgresql-7.1.2.orig/src/bin/scripts/createlang.sh\tSun Feb 18 13:34:01 \n2001\n+++ postgresql-7.1.2/src/bin/scripts/createlang.sh\tWed Jun 13 16:00:55 2001\n@@ -164,7 +164,7 @@\n # Check that we have PGLIB\n # ----------\n if [ -z \"$PGLIB\" ]; then\n-\tPGLIB='@libdir@'\n+\tPGLIB='/usr/lib/pgsql'\n fi\n \n # ----------\n\nTo handle that, as $PGLIB does indeed point to /usr/lib/pgsql for most \nthings, but a user is not guaranteed to set the envvar. @libdir@ points to \n/usr/lib during the build, as it should -- but createlang's PGLIB and \nautoconf's libdir are not equal. \n\nThis is desirable because the procedural languages aren't generally loadable \ninto any arbitrary program by ld.so; rather, they are postgresql-specific \nmodules, warranting a separate directory under FHS. This patch fixes the \nRPM-specific case only, obviously, as /usr/lib/pgsql is going to be the wrong \nchoice for non-RPM users :-).\n\nNext, we have patches to make the perl client honor RPM_BUILD_ROOT (otherwise \nknown as DESTDIR). I'm omitting them here, as Peter has mentioned a build \noverhaul for the perl and python clients to make them do DESTDIR and in \ngeneral fit in better with the rest of the package.\n\nOn to the next batch.... There are a few perl and python scripts shipped as \nexamples -- every last one of them shebangs to '/usr/local/perl' or \n'/usr/local/python' -- to make them usable, I patch this to '/usr/bin/perl' \nor python, as appropriate. I only ship \npostgresql-7.1.2/src/interfaces/perl5/test.pl at this time.\n\nNow to the regression tests. First off, I:\ndiff -uNr postgresql-7.1.2.orig/src/test/regress/GNUmakefile \npostgresql-7.1.2/src/test/regress/GNUmakefile\n--- postgresql-7.1.2.orig/src/test/regress/GNUmakefile\tWed Apr 4 17:15:56 \n2001\n+++ postgresql-7.1.2/src/test/regress/GNUmakefile\tWed Jun 13 16:00:55 2001\n@@ -67,8 +67,8 @@\n abs_builddir := $(shell pwd)\n \n define sed-command\n-sed -e 's,@abs_srcdir@,$(abs_srcdir),g' \\\n- -e 's,@abs_builddir@,$(abs_builddir),g' \\\n+sed -e 's,@abs_srcdir@,/usr/lib/pgsql/test/regress,g' \\\n+ -e 's,@abs_builddir@,/usr/lib/pgsql/test/regress,g' \\\n -e 's/@DLSUFFIX@/$(DLSUFFIX)/g' $< >$@\n endef\n \nsince the tests aren't in the build tree anymore, but in \n/usr/lib/pgsql/test/regress. Well _technically_ they're really NOT in \n/usr/lib/pgsql/test/regress, but in DESTDIR/usr/lib/pgsql/test/regress during \nthe build -- but they will be executed in the coded location after the RPM \ninstallation.\n\nThen, I:\n- AS '@abs_builddir@/regress@DLSUFFIX@'\n+ AS '/usr/lib/pgsql/test/regress/regress.so'\neverywhere that is used, along with its likenesses pointing to refint.so and \nautoinc.so, which I prebuild and stuff into /usr/lib/pgsql/test/regress. \nAlthough /usr/lib/pgsql would be a more consistent place, I guess. That \nconstruct is used in \npostgresql-7.1.2/src/test/regress/input/create_function_1.source and \npostgresql-7.1.2/src/test/regress/output/create_function_1.source.\n\nFinally, I patch postgresql-7.1.2/src/test/regress/pg_regress.sh:\n@@ -69,7 +69,7 @@\n : ${inputdir=.}\n : ${outputdir=.}\n \n-libdir='@libdir@'\n+libdir='/usr/lib/pgsql'\n bindir='@bindir@'\n datadir='@datadir@'\n host_platform='@host_tuple@'\n\nAgain, @libdir@ != $PGLIB.\n\nThis set is quite a bit smaller than the 7.0.x and 6.5.x sets, thanks in no \nsmall part to Peter's work, as you have already said.\n\n> I think there are really two separate discussions going on here: one is\n> whether we shouldn't try harder to roll some of the RPMset diffs back\n> into the main sources, and the other is how we can make information\n> about some of the popular packages more readily visible/available to the\n> developers.\n\nMy diffs are nowhere near as large as the debian set. There are other things \nI could patch, instead of frobbing in the specfile, though -- things like the \npython and perl clients' makefile's DESTDIR ignorance, and the fact that \n'make install' puts the procedural languages in /usr/lib instead of \n/usr/lib/pgsql. The easy answer: 'Use the --libdir configure switch!' won't \nwork, though, as I can't just tell configure that libdir needs to be \n/usr/lib/pgsql -- libpq.so and friends need to go in /usr/lib!\n\nAnd I've not tried to make my patches fit the general case as yet -- they \nhaven't needed to be general in scope.\n\nThere is some munging done in contrib that could be put in a patch, though -- \nin particular, the following construct executes _14_ times....\n# some-contrib-module\npushd some-contrib-module\nperl -pi -e \"s|/usr/lib/contrib|/usr/lib/pgsql/contrib/some-contrib-module|\" *\npopd\n\nlibdir != $PGLIB, again.\n\nAnd more path munging for the rserv and earthdistance contrib modules happens.\n\nAnd the whole contrib tree, since there isn't a good 'make install' that \nhonors DESTDIR for that tree, gets a kick in the pants to \n/usr/lib/pgsql/contrib, from $RPM_BUILD_DIR/postgresql-7.1.2.\n\n> Peter's stance on the latter seems to be \"go look at the\n> packagers' sites\", which is defensible, but that's the current approach\n> and I think it's not working. \n\nThe biggest RPM difference is simply where things are put. Otherwise there's \na mere handful of sysadmin scripts added.\n\nWith the specfile, README.rpm-dist, and the other scriptfiles in a CVS \nmodule, I'd sleep better, knowing that someone else might have an easier time \npicking things up if I kick the big bucket.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 20 Jul 2001 18:16:58 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: [GENERAL] psql -l)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Ok, let's look. First, there is a createlang issue: during build, @libdir@ \n> as referenced in the createlang script references /usr/lib, instead of \n> /usr/lib/pgsql, which is desired.\n\nOkay, that problem is gone in current sources, anyway (createlang no\nlonger needs to know any absolute paths).\n\n> On to the next batch.... There are a few perl and python scripts shipped as \n> examples -- every last one of them shebangs to '/usr/local/perl' or \n> '/usr/local/python' -- to make them usable, I patch this to '/usr/bin/perl' \n> or python, as appropriate.\n\nHmm. Given that they're only examples, and are clearly going to be\nbroken until hand-edited on many systems not only RedHat, it's not clear\nthat this is worth your worrying about. But by the same token, I\nwouldn't have a problem with applying that change to the masters ---\nsurely there are as many systems where '/usr/bin/perl' is correct as\nthere are where the other is correct. (In fact, a quick grep shows that\nwe have more '/usr/bin/perl' than '/usr/local/bin/perl' in the\ndistribution, so your claim that they're all the latter is mistaken.\nWe should certainly try to make them consistent, whichever is\npreferred.)\n\nBTW, the only python shebangs I can find in CVS look like\n\t#! /usr/bin/env python\nIsn't that OK on RedHat?\n\n> Now to the regression tests. 
First off, I:\n> define sed-command\n> -sed -e 's,@abs_srcdir@,$(abs_srcdir),g' \\\n> - -e 's,@abs_builddir@,$(abs_builddir),g' \\\n> +sed -e 's,@abs_srcdir@,/usr/lib/pgsql/test/regress,g' \\\n> + -e 's,@abs_builddir@,/usr/lib/pgsql/test/regress,g' \\\n> -e 's/@DLSUFFIX@/$(DLSUFFIX)/g' $< >$@\n> endef\n\nClearly, this needs to be generalized ...\n\n> Then, I:\n> - AS '@abs_builddir@/regress@DLSUFFIX@'\n> + AS '/usr/lib/pgsql/test/regress/regress.so'\n> everywhere that is used, along with its likenesses pointing to refint.so and \n> autoinc.so, which I prebuild and stuff into /usr/lib/pgsql/test/regress. \n\nMuch of this could be eliminated given the new path-searching behavior\nfor CREATE FUNCTION, I think. Actually I thought Peter had cleaned it\nup already, but I see he hasn't touched the regression tests. IMHO we\ncould have \"make installcheck\" copy the .so files to $LIBDIR, and then\nthe regression test input and output files themselves wouldn't need to\nknow these paths at all. (OTOH, there'd still be paths in the COPY\ncommands. Would it be okay to eliminate testing of backend COPY and\ninstead make these regression tests use psql \\copy?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 18:45:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: [GENERAL] psql -l) "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> BTW, the only python shebangs I can find in CVS look like\n> \t#! /usr/bin/env python\n> Isn't that OK on RedHat?\n\nIt is.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "20 Jul 2001 19:05:46 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Re: RPM source files should be in CVS (was Re: [GENERAL] psql -l)"
},
{
"msg_contents": "On Fri, Jul 20, 2001 at 07:05:46PM -0400, Trond Eivind Glomsrød wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > BTW, the only python shebangs I can find in CVS look like\n> > \t#! /usr/bin/env python\n> > Isn't that OK on RedHat?\n> \n> It is.\n\nProbably the perl scripts should say, likewise, \n\n #!/usr/bin/env perl\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Fri, 20 Jul 2001 16:20:53 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Re: RPM source files should be in CVS (was Re: [GENERAL] psql -l)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > On Friday 20 July 2001 11:25, Bruce Momjian wrote:\n> > > I am slightly concerned about bloating our CVS tree.\n> > The RPM additions are:\n> > 56 contrib-intarray.tar.gz\n> > 4 file-lists.tar.gz\n> > 180 jdbc7.0-1.1.jar\n> > 92 jdbc7.1-1.2.jar\n> > 8 migration-scripts.tar.gz\n> > 4 postgresql-7.1.plperl.patch\n> > 4 postgresql-7.1.s390x.patch\n> > 4 postgresql-bashprofile\n> > 4 postgresql-dump.1.gz\n> > 8 postgresql.init\n> > 20 README.rpm-dist\n> > 4 rh-pgdump.sh\n> > 8 rpm-pgsql-7.1.patch\n> > \n> > Of which the two jar files are derived from the source and wouldn't be \n> > necessary. This totals 124K if I've done my math right.\n> \n> Bag the JAR's and it looks fine.\n> \n> > The contrib-intarray.tar.gz is a new intarray from Red Hat -- I really need \n> > to investigate this more closely.... \n> \n> Can you research that? Why are they doing it?\n\nIt's a new one from upstream - it fixed some bugs, AFAIK.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "20 Jul 2001 19:21:55 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: psql -l)"
},
{
"msg_contents": "On Friday 20 July 2001 18:45, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > On to the next batch.... There are a few perl and python scripts shipped\n> > as examples -- every last one of them shebangs to '/usr/local/perl' or\n> > '/usr/local/python' -- to make them usable, I patch this to\n> > '/usr/bin/perl' or python, as appropriate.\n\n> Hmm. Given that they're only examples, and are clearly going to be\n> broken until hand-edited on many systems not only RedHat, it's not clear\n\nWell, there were more than just a few at one point. In any case, it's been \nawhile since I combed through the example scripts -- of which I only now ship \nthe one, which is designed to test the perl client -- which I find to be a \nuseful thing.\n\n> BTW, the only python shebangs I can find in CVS look like\n> \t#! /usr/bin/env python\n> Isn't that OK on RedHat?\n\nYeah, that construct is OK. 7.0.x was different, unless I'm far off-base. \nBut I'm not shipping any patched python scripts with 7.1.x anyway -- the \n6.5.x and 7.0.x dists had some scripts with #!/usr/local/bin/python.\n\nSo much for my 'every last one,' eh? :-)\n\n> Much of this could be eliminated given the new path-searching behavior\n> for CREATE FUNCTION, I think. Actually I thought Peter had cleaned it\n> up already, but I see he hasn't touched the regression tests. \n\nHow is this search path defined? Blindly using libdir is not ok -- \nlibdir!=PGLIB, and PGLIB may not be defined in the environment -- it might be \nthere, but we can't count on it.\n\n> IMHO we\n> could have \"make installcheck\" copy the .so files to $LIBDIR,\n\nlibdir!=PGLIB for the RPMs. libdir=/usr/lib; PGLIB=/usr/lib/pgsql. I was so \nhappy when the bki sources were no longer referenced by PGLIB -- when the \nprocedural language handlers aren't thusly referenced will be a Happy Day. \nIf PGLIB could = libdir, and something like PGHANDLER= where the handlers \nlive, I'd also be happy. 
If this function search path can be configured to \nsearch in /usr/lib/pgsql and all or any of its subs, while libpq and kin live \nin /usr/lib, I _will_ be happy.\n\n> and then\n> the regression test input and output files themselves wouldn't need to\n> know these paths at all. (OTOH, there'd still be paths in the COPY\n> commands. Would it be okay to eliminate testing of backend COPY and\n> instead make these regression tests use psql \\copy?)\n\nThe COPY paths are munged into form by the GNUmakefile patch -- so, if the \nGNUmakefile can generally deal with the paths by placing relative paths \n(relative to what, though?) in the @abs_srcdir@/@abs_builddir@ substitutions, \nthen those paths aren't an issue.\n\nAlthough a psql \\copy regression test might be a good thing in its own right.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 20 Jul 2001 19:32:38 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: RPM source files should be in CVS (was Re: [GENERAL] psql -l)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> How is this search path defined? Blindly using libdir is not ok -- \n\nWhy not?\n\nThe search path is defined in postgresql.conf (and I see Peter forgot\nto add an example to the postgresql.conf.sample file), but the default\nis the backend-compile-time $libdir. Offhand I don't see what's wrong\nwith it.\n\n> libdir!=PGLIB, and PGLIB may not be defined in the environment -- it\n> might be there, but we can't count on it.\n\nAFAICT we do not depend on environment PGLIB any more. Configure-time\n$libdir is what counts.\n\n> If this function search path can be configured to search in\n> /usr/lib/pgsql and all or any of its subs, while libpq and kin live in\n> /usr/lib, I _will_ be happy.\n\nI think all you need to do is set up postgresql.conf that way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 19:56:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: RPM source files should be in CVS (was Re: [GENERAL] psql -l)"
},
{
"msg_contents": "> > On to the next batch.... There are a few perl and python scripts shipped as\n> > examples -- every last one of them shebangs to '/usr/local/perl' or\n> > '/usr/local/python' -- to make them usable, I patch this to '/usr/bin/perl'\n> > or python, as appropriate.\n> Hmm. Given that they're only examples, and are clearly going to be\n> broken until hand-edited on many systems not only RedHat, it's not clear\n> that this is worth your worrying about.\n\nAck! There is a way to write this stuff to be portable. The tricks\nchange a bit depending on the scripting language you are using, but for\nperl this is how the header should look:\n\n#!/bin/sh\n# -*- perl -*-\n# the line above helps with emacs, and put other comments here...\n\neval '(exit $?0)' && eval 'exec perl -S $0 ${1+\"$@\"}'\n & eval 'exec perl -S $0 $argv:q'\n if 0;\n\n# real perl code follows...\n\n\nThere is no reason to have a dependency on anything but the location of\nsh, which is much more reliable than locations for perl, tcl, etc etc.\nNot sure the exact form of this technique for python (maybe the same as\nabove) but there is a similar but not identical form for tcl code\n(examples available on request; the above for perl is demonstrated in\ncontrib/rserv/*.in).\n\n - Thomas\n",
"msg_date": "Sat, 21 Jul 2001 00:31:51 +0000",
"msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>",
"msg_from_op": false,
"msg_subject": "Re: RPM source files should be in CVS (was Re: [GENERAL] psql -l)"
},
{
"msg_contents": "On Friday 20 July 2001 19:56, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > How is this search path defined? Blindly using libdir is not ok --\n\n> Why not?\n\nDuring RPM build, libdir will point to /usr/lib. This is OK and appropriate \nfor the generally-loadable shared libs. HOWEVER, the shared objects that are \naffected by this function handler load search path will not be located in \n/usr/lib -- as they are not generally loadable shared libraries, but \nPostgreSQL-only shared libraries. Thus, /usr/lib/pgsql is the home for \nthose, and, in an RPM installation, should be the head of any shared object \nload path searched by the PostgreSQL dynaloader.\n\n/usr/lib -> system dynaloader default for non-essentials (/lib of course for \nlibs essential for boot)\n/usr/lib/pgsql -> PostgreSQL dynaloader default (ideally, on an FHS-compliant \ninstallation).\n\n> The search path is defined in postgresql.conf (and I see Peter forgot\n> to add an example to the postgresql.conf.sample file), but the default\n> is the backend-compile-time $libdir. Offhand I don't see what's wrong\n> with it.\n\nI can patch postgresql.conf.sample easily enough -- but, if I'm trying to get \naway from RPM-specific patches.... :-) I have heretofore not modified the \ndefault postgresql.conf in any way -- no tuning, no tcpip_socket setting (for \nwhich I get some grief, as people running RPMs of 7.0.x are used to -i being \non by default), no nothing.\n\nNot knowing a lot about autoconf, I hesitate to suggest or ask the following, \nbut I will anyway -- is it possible to define an optional \n'so-search-path-default' switch for the backend's dynaloader? This has \nnothing to do with the OS dynaloader path, for which there is a well-defined \nconfig file on most OS's -- just the backend's function handler dynaloader.\n",
"msg_date": "Sat, 21 Jul 2001 07:10:59 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: RPM source files should be in CVS (was Re: [GENERAL] psql -l)"
}
] |
[
{
"msg_contents": "\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> Hi -\n> \n> pgman wrote:\n> \n> : Most Unix filesystems will not allocate disk blocks until you write in\n> : them. [...]\n> \n> Yes, I understand that, but how is it a problem for postgresql?\n\nUh, I thought we did that so we were not allocating file system blocks\nduring WAL writes. Performance is bad when we do that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 15:18:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Re: Idea: recycle WAL segments, don't delete/recreate\n 'em"
},
{
"msg_contents": "Bruce Momjian wrote:\n>\n> Checking application/pgp-signature: FAILURE\n> -- Start of PGP signed section.\n> > Hi -\n> >\n> > pgman wrote:\n> >\n> > : Most Unix filesystems will not allocate disk blocks until you write in\n> > : them. [...]\n> >\n> > Yes, I understand that, but how is it a problem for postgresql?\n>\n> Uh, I thought we did that so we were not allocating file system blocks\n> during WAL writes. Performance is bad when we do that.\n\n Performance isn't the question. The problem is when you get a\n \"disk full\" just in the middle of the need to write important\n WAL information. While preallocation of a new WAL file, it's\n OK and controlled, but there are more delicate portions of\n the code.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 17 Jul 2001 16:36:56 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Idea: recycle WAL segments, don't delete/recreate\n 'emm"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nOne of the biggest gaps I've found while doing performance tuning is\ncollecting execution statistics. There's EXPLAIN for the planner, but\nnothing for the executor. Maybe another verb: ACCOUNT?\n\nI'm not suggesting this as work that someone else should do. I don't mind\ntrying it myself, but I wouldn't mind some guidance on how to make it\nan acceptable patch.\n\n-Steve\n",
"msg_date": "Tue, 17 Jul 2001 15:49:22 -0400",
"msg_from": "svanegmond@bang.dhs.org",
"msg_from_op": true,
"msg_subject": "Execution statistics"
}
] |
[
{
"msg_contents": "As some of you know, Nusphere is trying to sell MySQL with an additional\ntransaction-based table manager called Gemini. They enabled download of\nthe source code yesterday at:\n\n\thttp://mysql.org/download3.php?file_id=1118\n\nLooking through the 122k lines of C code in the Gemini directory, it is\npretty clear from a 'grep -i progress' that the Gemini code is actually\nthe database storage code for the Progress database. Progress is the\nparent company of Nusphere.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 17 Jul 2001 22:13:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "MySQL Gemini code"
},
{
"msg_contents": "Bruce Momjian wrote:\n> As some of you know, Nusphere is trying to sell MySQL with an additional\n> transaction-based table manager called Gemini. They enabled download of\n> the source code yesterday at:\n>\n> http://mysql.org/download3.php?file_id=1118\n>\n> Looking through the 122k lines of C code in the Gemini directory, it is\n> pretty clear from a 'grep -i progress' that the Gemini code is actually\n> the database storage code for the Progress database. Progress is the\n> parent company of Nusphere.\n\n And this press release\n\n http://www.nusphere.com/releases/071601.htm\n\n also explains why they had to do it this way. They disagreed\n with the policy that every code added to the core system must\n be owned by MySQL AB, so that these guys can sell it for\n money in their commercial licenses.\n\n IMHO, the MySQL community gives a few people far too much\n credit anyway. The MySQL AB folks degrade contributions from\n their community to \"personal donations\" to \"Monty\", which he\n has to \"scrutinize\" and often rewrite so that they can stand\n their (MySQL AB's) standards. Give me a break, but does the\n entire MySQL community only consist of 16 year old junior\n pacman players, or are there some \"real programmers (tm)\"\n too?\n\n But maybe Mr. Mickos told the truth, that there never have\n been substantial contributions from the outside and nearly\n all the code has been written by \"Monty\" himself (with little\n \"donations\" from David). In that case, NuSphere's launch of\n mysql.org was long overdue.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 18 Jul 2001 08:35:58 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "> And this press release\n> \n> http://www.nusphere.com/releases/071601.htm\n> \n> also explains why they had to do it this way. They disagreed\n> with the policy that every code added to the core system must\n> be owned by MySQL AB, so that these guys can sell it for\n> money in their commercial licenses.\n\nThis is interesting. They mention PostgreSQL twice as an example to\nemulate for MySQL. They feel the pressure of companies involved with\nPostgreSQL and see the benefit of a community around the database.\n\nOn a more significant note, I hear the word \"fork\" clearly suggested in\nthat text. It is almost like MySQL AB GPL'ed the MySQL code and now\nthey may not be able to keep control of it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 11:45:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "And the story goes on...\n\n http://www.newsforge.com/comments.pl?sid=01/07/18/0226219&commentsort=0&mode=flat&threshold=0&pid=0\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 18 Jul 2001 13:29:19 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "On Wed, Jul 18, 2001 at 11:45:54AM -0400, Bruce Momjian wrote:\n> > And this press release\n> > \n> > http://www.nusphere.com/releases/071601.htm\n> ...\n> On a more significant note, I hear the word \"fork\" clearly suggested\n> in that text. It is almost like MySQL AB GPL'ed the MySQL code and\n> now they may not be able to keep control of it.\n\nAnybody is free to fork MySQL or PostgreSQL alike. The only difference\nis that all published MySQL forks must remain public, where PostgreSQL \nforks need not. MySQL AB is demonstrating their legal right to keep as\nmuch control as they chose, and NuSphere will lose if it goes to court.\n\nThe interesting event here is that since NuSphere violated the license \nterms, they no longer have any rights to use or distribute the MySQL AB \ncode, and won't until they get forgiveness from MySQL AB. MySQL AB \nwould be within their rights to demand that the copyright to Gemini be \nsigned over, before offering forgiveness.\n\nIf Red Hat forks PostgreSQL, nobody will have any grounds for complaint.\n(It's been forked lots of times already, less visibly.)\n\nNathan Myers \nncm@zembu.com\n",
"msg_date": "Wed, 18 Jul 2001 13:14:50 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "On Wed, Jul 18, 2001 at 08:35:58AM -0400, Jan Wieck wrote:\n> And this press release\n> \n> http://www.nusphere.com/releases/071601.htm\n> \n> also explains why they had to do it this way.\n\nThey were always free to fork, but doing it the way they did --\nviolating MySQL AB's license -- they shot the dog.\n\nThe lesson? Ask somebody competent, first, before you bet your\ncompany playing license games.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 18 Jul 2001 13:25:36 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "\nHi!\n\nAs I do have some insight in these matters, I thought I would comment\non this thing\n\n>>>>> \"Jan\" == Jan Wieck <JanWieck@Yahoo.com> writes:\n\nJan> Bruce Momjian wrote:\n>> As some of you know, Nusphere is trying to sell MySQL with an additional\n>> transaction-based table manager called Gemini. They enabled download of\n>> the source code yesterday at:\n>> \n>> http://mysql.org/download3.php?file_id=1118\n>> \n>> Looking through the 122k lines of C code in the Gemini directory, it is\n>> pretty clear from a 'grep -i progress' that the Gemini code is actually\n>> the database storage code for the Progress database. Progress is the\n>> parent company of Nusphere.\n\nJan> And this press release\n\nJan> http://www.nusphere.com/releases/071601.htm\n\nJan> also explains why they had to do it this way. They disagreed\nJan> with the policy that every code added to the core system must\nJan> be owned by MySQL AB, so that these guys can sell it for\nJan> money in their commercial licenses.\n\nPlease note that we NEVER have asked NuSphere to sign over copyright\nof Gemini to us. We do it only for the core server, and this is\nactually not an uncommon thing among open source companies. For\nexample QT (Trolltech) and Ximian (a lot of gnome applications) does\nthe same thing. Assigning over the code is also something that FSF\nrequires for all code contributions. If you criticize us at MySQL AB,\nyou should also criticize the above.\n\nWe did never have any problems to include any of GEMINI code into\nMySQL. We had tried to get them to submit Gemini into MySQL since\nMarch, but they didn't want to do that. 
It was not until we sued\nNuSphere for, among other things, breaking the GPL that they did\nfinally release Gemini under GPL.\n\nWe wouldn't mind if they did this 'community thing' with a site named\nsomething like NUSPHERE.ORG, but by doing this with MYSQL.ORG and\nviolating our trademark is not something that we can just look upon\nwithout reacting. That NuSphere also has had very little regard for\nthe GPL copyright, keeps copyrighted material on their web site and\nuses mysql.org to push out their own commercial (not free) MySQL\ndistribution tells a lot about their intentions.\n\nI had actually hoped to get support from you guys at PostgreSQL\nregarding this. You may have similar experience or at least\nunderstand our position. The RedHat database may be a good thing for\nPostgreSQL, but I am not sure if it's a good thing for RedHat or for\nthe main developers to PostgreSQL. Anyway, I think that we open source\ndevelopers should stick together. We may have our own disagreements,\nbut at least we are working for the same common goal (open source\ndomination).\n\nIf you ever need any support from us regarding the RedHat database,\nplease contact me personally about this. I really liked all the\nPostgreSQL developers I met last year at OSDN; I found it great to be\nable to exchange ideas, suggest features and talk openly about our\nproducts without any restrictions. I hope to be able to do it again\nthis year!\n\nThose that have seen my postings know that I don't publicly criticize\nPostgreSQL; I do also recommend PostgreSQL for projects where I think\nit's better suited than MySQL. I have at many times defended\nPostgreSQL when I heard people criticize it without a good reason. I\nam not afraid of pointing out weaknesses in a product if I am sure\nthat I have discovered one, but I try to do that in a professional\nmanner. 
I don't think you will find that NuSphere is going to be as\nfair if they get more control over MySQL through mysql.org.\n\nJan> IMHO, the MySQL community gives a few people far too much\nJan> credit anyway. The MySQL AB folks degrade contributions from\nJan> their community to \"personal donations\" to \"Monty\", which he\nJan> has to \"scrutinize\" and often rewrite so that they can stand\nJan> their (MySQL AB's) standards. Give me a break, but does the\nJan> entire MySQL community only consist of 16 year old junior\nJan> pacman players, or are there some \"real programmers (tm)\"\nJan> too?\n\nI only rewrite things that are going to be in the MySQL server, not in\nthe clients. As MySQL needs to work in 24/7 systems, we have to be\nvery careful about what we put into the server. With a background of\n20 years of programming, it's also not that hard to rewrite code to\nmake it better so why not do it? Because I know the whole MySQL core\ncode intimately, it's much easier for me to remove duplicated functions,\noptimize things and generalize code to make things work better than\nthe original author had thought of.\n\nI am sure that it's the same thing with those of you that have worked a\nlot of time on the PostgreSQL code...\n\nYou must also understand that we have a totally different development\nstructure here at MySQL AB than you have. We are 30 people of which 14\nare full time developers. 99.99 % of the code in the core MySQL server\nis written by us or by people that we have paid for the code. We get\nvery few code contributions on the server code from other people (we\ndo get LOTS of contributions on the client code).\n\nWe get the money to develop MySQL from support, licensing and the use\nof our trademark. I don't think you should have any problem with this?\nWith mysql.org NuSphere is trying to take away 2 of the above things\nfrom us and that's why we have to defend ourselves.\n\nJan> But maybe Mr. 
Mickos told the truth, that there never have\nJan> been substantial contributions from the outside and nearly\nJan> all the code has been written by \"Monty\" himself (with little\nJan> \"donations\" from David). In that case, NuSphere's launch of\nJan> mysql.org was long overdue.\n\nWhy do you think that?\n\nMySQL AB is a totally open source company. Everything we develop and\nsell we also put on open source. I think we are doing and have\nalways done the right thing for the open source community.\n\nI don't think it's really fair to compare us to NuSphere :(\n\nRegards,\nMonty\n",
"msg_date": "Thu, 19 Jul 2001 00:04:26 +0300 (EEST)",
"msg_from": "Michael Widenius <monty@mysql.com>",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "Michael Widenius <monty@mysql.com> writes:\n\n> Please note that we NEVER have asked NuSphere to sign over copyright\n> of Gemini to us. We do it only for the core server, and this is\n> actually not an uncommon thing among open source companies. For\n> example QT (Trolltech) and Ximian (a lot of gnome applications)\n\nXimian isn't doing a lot of gnome applications, just a few\n(\"Evolution\" springs to mind, and their installer). Signing over\ncopyright to Ximian wouldn't make much sense - GNOME isn't a Ximian\nproject, so they can't dual license it anyway.\n\n> Assigning over the code is also something that FSF requires for all\n> code contributions. If you criticize us at MySQL AB, you should\n> also criticize the above.\n\nThis is slightly different - FSF wants it so it will have a legal\nposition to defend its programs:\n\n************************************************************************\nhttp://www.fsf.org/prep/maintain_6.html\n\nIf you maintain an FSF-copyrighted package, then you should follow\ncertain legal procedures when incorporating changes written by other\npeople. This ensures that the FSF has the legal right to distribute\nthe package, and the right to defend its free status in court if\nnecessary.\n\nBefore incorporating significant changes, make sure that the person\nwho wrote the changes has signed copyright papers and that the Free\nSoftware Foundation has received and signed them. We may also need a\ndisclaimer from the person's employer.\n************************************************************************\n\nMySQL and TrollTech requires copyright assignment in order to sell\nnon-open licenses. Some people will have a problem with this, while\nnot having a problem with the FSF copyright assignment.\n \n> I had actually hoped to get support from you guy's at PostgreSQL\n> regarding this. You may have similar experience or at least\n> understand our position. 
The RedHat database may be a good thing for\n> PostgreSQL, but I am not sure if it's a good thing for RedHat or for\n> the main developers to PostgreSQL. \n\nThis isn't even a remotely similar situation:\n\n* For MySQL, the scenario is that a company made available an open\n version of its product while continuing to sell it under other\n licenses. \n\n* For PostgreSQL, it has been a long living project which spawned\n companies which then hired some of the core developers. \n\nWe're not reselling someone else's product with minor enhancements\n(companies have been known to be doing that to products we create), \nwe're selling support and working on additions to an open project.\n\nThat may make it harder for the companies now employing the core\ndevelopers (or may help, as PostgreSQL gets more much-deserved\npublicity and technical credit), but doesn't violate the project's\nlicenses and a company's trademark the way NuSphere did with MySQL.\n\n> Anyway, I think that we open source developers should stick\n> together. We may have our own disagreements, but at least we are\n> working for the same common goal (open source domination).\n> \n> If you ever need any support from us regarding the RedHat database,,\n> please contact me personally about this. \n\nRed Hat is firmly committed to open source, and is definitely a big\nopen source developer.\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "18 Jul 2001 18:37:48 -0400",
"msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "> This is slightly different - FSF wants it so it will have a legal\n> position to defend its programs:\n\n There is at least one documented case where the FSF has used\n that right to sell a non-open license for GCC to Motorola.\n\n - Sascha Experience IRCG\n http://schumann.cx/ http://schumann.cx/ircg\n\n",
"msg_date": "Thu, 19 Jul 2001 01:44:51 +0200 (MEST)",
"msg_from": "Sascha Schumann <sascha@schumann.cx>",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "On Wed, Jul 18, 2001 at 06:37:48PM -0400, Trond Eivind Glomsrød wrote:\n> Michael Widenius <monty@mysql.com> writes:\n> > Assigning over the code is also something that FSF requires for all\n> > code contributions. If you criticize us at MySQL AB, you should\n> > also criticize the above.\n> \n> This is slightly different - FSF wants it so it will have a legal\n> position to defend its programs: ...\n> MySQL and TrollTech requires copyright assignment in order to sell\n> non-open licenses. Some people will have a problem with this, while\n> not having a problem with the FSF copyright assignment.\n\nNobody who works on MySQL is unaware of MySQL AB's business model.\nAnybody who contributes to the core server has to expect that MySQL \nAB will need to relicense anything accepted into the core; that's \ntheir right as originators. Everybody who contributes has a choice \nto make: fork, or sign over. (With the GPL, forking remains possible;\nApple and Sun \"community\" licenses don't allow it.)\n\nAnybody who contributes to PG has to make the same choice: fork, \nor put your code under the PG license. The latter choice is \nequivalent to \"signing over\" to all proprietary vendors, who are \nthen free to take your code proprietary. Some of us like that.\n\n> > I had actually hoped to get support from you guys at PostgreSQL\n> > regarding this. You may have similar experience or at least\n> > understand our position. The RedHat database may be a good thing\n> > for PostgreSQL, but I am not sure if it's a good thing for RedHat\n> > or for the main developers to PostgreSQL. \n> \n> This isn't even a remotely similar situation: ...\n\nIt's similar enough. One difference is that PG users are less\nafraid to fork. 
Another is that without the GPL, we have elected \nnot to (and indeed cannot) stop any company from doing with PG what \nNuSphere is doing with MySQL.\n\nThis is why characterizing the various licenses as more or less\n\"business-friendly\" is misleading (i.e. dishonest) -- it evades the \nquestion, \"friendly to whom?\". Businesses sometimes compete...\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 18 Jul 2001 17:21:18 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "Michael Widenius wrote:\n>\n> Hi!\n\nMoin Monty,\ndear fence-guests,\n\n> Please note that we NEVER have asked NuSphere to sign over copyright\n> of Gemini to us. We do it only for the core server, and this is\n> actually not an uncommon thing among open source companies. For\n> example QT (Trolltech) and Ximian (a lot of gnome applications) does\n> the same thing. Assigning over the code is also something that FSF\n> requires for all code contributions. If you criticize us at MySQL AB,\n> you should also criticize the above.\n\n I should not criticize the others and Trond already explained\n why (thank you).\n\n All I was doing was summing up some of the latest press\n releases from NuSphere and MySQL AB. You as CTO and your own\n CEO have explained detailed enough why the assignment of\n copyright for all core system related code is so important\n for your company because of your business modell. As the\n original banker I am, and as the 13+ year IT consultant I am,\n I don't have the slightest problem with that and understand\n it completely. It's not my business at all anyway, so it\n doesn't matter if I personally think it's good or not.\n\n But NuSphere said, that the problem with contributing the\n Gemini code was because of the copyright questions. Looking\n at the code now and realizing that it's part of the Progress\n storage system fits perfectly. NuSphere might have had\n permission from Progress to release it under the GPL, but not\n to assign the copyright to MySQL AB. The copyright of parts\n of the Gemini code is still property of Progress (Britt\n please come down from the fence and correct me if I'm wrong\n here).\n\n> I had actually hoped to get support from you guy's at PostgreSQL\n> regarding this. You may have similar experience or at least\n> understand our position. The RedHat database may be a good thing for\n> PostgreSQL, but I am not sure if it's a good thing for RedHat or for\n> the main developers to PostgreSQL. 
Anyway, I think that we open source\n> developers should stick together. We may have our own disagreements,\n> but at least we are working for the same common goal (open source\n> domination).\n\n The RedHAT database IS PostgreSQL. And I don't see it\n becoming something different. All I've seen up to now is that\n RedHAT will be a contributing member of the PostgreSQL open\n source community in the same way, PostgreSQL Inc. and Great\n Bridge LLC are. That they use BIG RED letters while GB uses\n BIG BLUE ones and PgSQL Inc. a bavarian mix for the\n marketing, yeah - that's marketing - these folks like logos\n and colors. The real difference will mature somehow in the\n service portfolios over time. And since there are many\n different customers with a broad variety of demands, we'll\n all find more food than we can eat. No need to fight against\n each other.\n\n The major advantage in the PostgreSQL case is, that we don't\n need no dispute about licensing, because whoever thinks he\n can make a deal out of keeping something proprietary is\n allowed to. People contributing under the BSD license are\n just self-confident enough to know that this will become a\n niche solution or die anyway.\n\n And there we are at the point about \"support regarding THIS\".\n If you're asking for support for the MySQL project, well, I\n created two procedural languages in PostgreSQL so far and\n know enough about the query rewriting techniques used by\n Stonebraker and his team to implement views in PostgreSQL.\n As the open source developer I am, I might possibly find one\n or the other spare hour to create something similar. The\n reason I did it for PostgreSQL was because a couple of years\n ago Bruce Momjian asked me to fix the rule system. Noone ever\n asked me to do anything for MySQL. But if you're asking for\n direct support for your company, sorry, but I'm a Great\n Bridge employee and that's clearly against my interests.\n\n\n> Jan> But maybe Mr. 
Mickos told the truth, that there never have\n> Jan> been substantial contributions from the outside and nearly\n> Jan> all the code has been written by \"Monty\" himself (with little\n> Jan> \"donations\" from David). In that case, NuSphere's launch of\n> Jan> mysql.org was long overdue.\n>\n> Why do you think that?\n>\n> MySQL AB is a totally open source company. Everything we develop and\n> sell we also put on open source. I think we have are doing and have\n> always done the right thing for the open source community.\n\n That is what your CEO said on NewsForge, SlashDot and\n whereever. I am committed to free source. Thus I think that\n the best thing for open source is a free community, which and\n who's product is not controlled by any commercial entity.\n\n> I don't think it's really fair to be compare us to NuSphere :(\n\n Did I? That wasn't my intention. And nothing I wrote was\n meant personally. Even if the PostgreSQL and MySQL projects\n had some differences in the past, there has never been\n something between Monty and Jan (not to my knowledge).\n\n Let's meet next week at O'Reilly (you're there, aren't you)\n and have a beer.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 18 Jul 2001 20:47:15 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "\nHi!\n\n>>>>> \"Nathan\" == Nathan Myers <ncm@zembu.com> writes:\n\nNathan> On Wed, Jul 18, 2001 at 08:35:58AM -0400, Jan Wieck wrote:\n>> And this press release\n>> \n>> http://www.nusphere.com/releases/071601.htm\n>> \n>> also explains why they had to do it this way.\n\nNathan> They were always free to fork, but doing it the way they did --\nNathan> violating MySQL AB's license -- they shot the dog.\n\nYes, we wouldn't have minded a fork as long as they would have done it\nunder their own name. Now they are causing a lot of confusion and\ngiving both MySQL and open source a bad name :(\n\nOf course, PostgreSQL will benefit from this, but I would rather have\nseen that we would compete with technology instead of with bad PR :(\n\nNathan> The lesson? Ask somebody competent, first, before you bet your\nNathan> company playing license games.\n\nThe problem is that this doesn't always help. For example if the other\nparty is not playing by the rules, but counts on the fact that because\nhe has more money he will win in the end even if he breaks all the\nrules getting there.\n\nRegards,\nMonty\n",
"msg_date": "Thu, 19 Jul 2001 03:58:43 +0300 (EEST)",
"msg_from": "Michael Widenius <monty@mysql.com>",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": "\nHi!\n\n>>>>> \"Jan\" == Jan Wieck <JanWieck@yahoo.com> writes:\n\n\nJan> Moin Monty,\nJan> dear fence-guests,\n\nThanks.\n\n>> Please note that we NEVER have asked NuSphere to sign over copyright\n>> of Gemini to us. We do it only for the core server, and this is\n>> actually not an uncommon thing among open source companies. For\n>> example QT (Trolltech) and Ximian (a lot of gnome applications) does\n>> the same thing. Assigning over the code is also something that FSF\n>> requires for all code contributions. If you criticize us at MySQL AB,\n>> you should also criticize the above.\n\nJan> I should not criticize the others and Trond already explained\nJan> why (thank you).\n\nJan> All I was doing was summing up some of the latest press\nJan> releases from NuSphere and MySQL AB. You as CTO and your own\nJan> CEO have explained detailed enough why the assignment of\nJan> copyright for all core system related code is so important\nJan> for your company because of your business modell. As the\nJan> original banker I am, and as the 13+ year IT consultant I am,\nJan> I don't have the slightest problem with that and understand\nJan> it completely. It's not my business at all anyway, so it\nJan> doesn't matter if I personally think it's good or not.\n\nJan> But NuSphere said, that the problem with contributing the\nJan> Gemini code was because of the copyright questions. Looking\nJan> at the code now and realizing that it's part of the Progress\nJan> storage system fits perfectly. NuSphere might have had\nJan> permission from Progress to release it under the GPL, but not\nJan> to assign the copyright to MySQL AB. 
The copyright of parts\nJan> of the Gemini code is still property of Progress (Britt\nJan> please come down from the fence and correct me if I'm wrong\nJan> here).\n\nWe have never asked for the copyright to Gemini; We don't need the\ncopyright to do an embedded version of MySQL, as MySQL works perfectly\nwithout Gemini; We have an agreement with Innobase Oy and an\nunderstanding with Sleepycat so we can provide ACID transactions even\nwithout Gemini, if any of our commercial customers would require this.\n(Sorry for the 'business talk', but I just wanted to fill in the\nbackground)\n\nIn my opinion the whole thing with the copyright is a public stunt of\nNuSphere to explain why they are now doing a fork. I don't have any\nproblems with a fork as long as they don't call it MySQL and don't do\nit on a site called mysql.org.\n\n>> I had actually hoped to get support from you guy's at PostgreSQL\n>> regarding this. You may have similar experience or at least\n>> understand our position. The RedHat database may be a good thing for\n>> PostgreSQL, but I am not sure if it's a good thing for RedHat or for\n>> the main developers to PostgreSQL. Anyway, I think that we open source\n>> developers should stick together. We may have our own disagreements,\n>> but at least we are working for the same common goal (open source\n>> domination).\n\nJan> The RedHAT database IS PostgreSQL. And I don't see it\nJan> becoming something different. All I've seen up to now is that\nJan> RedHAT will be a contributing member of the PostgreSQL open\nJan> source community in the same way, PostgreSQL Inc. and Great\nJan> Bridge LLC are. That they use BIG RED letters while GB uses\nJan> BIG BLUE ones and PgSQL Inc. a bavarian mix for the\nJan> marketing, yeah - that's marketing - these folks like logos\nJan> and colors. The real difference will mature somehow in the\nJan> service portfolios over time. 
And since there are many\nJan> different customers with a broad variety of demands, we'll\nJan> all find more food than we can eat. No need to fight against\nJan> each other.\n\nSounds good. I really hope it will be that way in the long run!\nOn the other hand, in the beginning our deal with NuSphere also\nappeared to be good:(\n\nJan> The major advantage in the PostgreSQL case is, that we don't\nJan> need no dispute about licensing, because whoever thinks he\nJan> can make a deal out of keeping something proprietary is\nJan> allowed to. People contributing under the BSD license are\nJan> just self-confident enough to know that this will become a\nJan> niche solution or die anyway.\n\nYes, in your case the BSD license is a good license. For us at MySQL\nAB, that have paid staff doing almost all development work on the\nserver, the GPL license is a better license as this allows to put all\nsoftware we develop under open source and still make a living. (I am\nnot trying to start a flame war here; I am just saying that both\nlicenses have their use and both benefit open source, but in different\nways)\n\nJan> And there we are at the point about \"support regarding THIS\".\nJan> If you're asking for support for the MySQL project, well, I\nJan> created two procedural languages in PostgreSQL so far and\nJan> know enough about the query rewriting techniques used by\nJan> Stonebraker and his team to implement views in PostgreSQL.\nJan> As the open source developer I am, I might possibly find one\nJan> or the other spare hour to create something similar. The\nJan> reason I did it for PostgreSQL was because a couple of years\nJan> ago Bruce Momjian asked me to fix the rule system. Noone ever\nJan> asked me to do anything for MySQL. 
But if you're asking for\nJan> direct support for your company, sorry, but I'm a Great\nJan> Bridge employee and that's clearly against my interests.\n\nThe only thing I ask for support is against mysql.org, as this clearly\nviolates our trademark, and public support against any company that\nbreaks copyrights or open source licenses. I don't think that this\nwould be a problem for anyone that believes in open source,\nindependent of who they work for.\n\nJan> But maybe Mr. Mickos told the truth, that there never have\nJan> been substantial contributions from the outside and nearly\nJan> all the code has been written by \"Monty\" himself (with little\nJan> \"donations\" from David). In that case, NuSphere's launch of\nJan> mysql.org was long overdue.\n>> \n>> Why do you think that?\n>> \n>> MySQL AB is a totally open source company. Everything we develop and\n>> sell we also put on open source. I think we have are doing and have\n>> always done the right thing for the open source community.\n\nJan> That is what your CEO said on NewsForge, SlashDot and\nJan> whereever. I am committed to free source. Thus I think that\nJan> the best thing for open source is a free community, which and\nJan> who's product is not controlled by any commercial entity.\n\nI am also committed to open source even if my standpoint is a little\ndifferent from yours. Anyone can do a fork of MySQL, if they don't\nthink that we are doing the right thing. I don't have a problem with\nthat (I wouldn't like it, but it's a rule of the game). I am however\nagainst people that are using others trademark or copyrighted stuff\nwithout permission.\n\n>> I don't think it's really fair to be compare us to NuSphere :(\n\nJan> Did I? That wasn't my intention. And nothing I wrote was\nJan> meant personally. Even if the PostgreSQL and MySQL projects\nJan> had some differences in the past, there has never been\nJan> something between Monty and Jan (not to my knowledge).\n\nThat's right. 
Sorry for being a little 'on the edge', but this NuSphere\nbusiness is taking its toll.\n\nJan> Let's meet next week at O'Reilly (you're there, aren't you)\nJan> and have a beer.\n\nI will not be there, but you will find my partner David there. I am\nsure he also would like to meet and chat with you for a while.\n\nI will try to keep down my postings on this list now (if not something\nREALLY interesting comes up). I just wanted to give you a quick\nlook from the other side of the fence.\n\nRegards,\nMonty\n",
"msg_date": "Thu, 19 Jul 2001 07:08:49 +0300 (EEST)",
"msg_from": "Michael Widenius <monty@mysql.com>",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
},
{
"msg_contents": ">>>>> \"Michael\" == Michael Widenius <monty@mysql.com> writes:\n\n Michael> Please note that we NEVER have asked NuSphere to sign\n Michael> over copyright of Gemini to us. We do it only for the\n Michael> core server, and this is actually not an uncommon thing\n Michael> among open source companies. For example QT (Trolltech)\n Michael> and Ximian (a lot of gnome applications) does the same\n Michael> thing. Assigning over the code is also something that\n Michael> FSF requires for all code contributions. If you\n Michael> criticize us at MySQL AB, you should also criticize the\n Michael> above.\n\nAnd Redhat (who are obviously pro Open Source) does this with Cygwin,\n\nSincerely,\n\nAdrian Phillips\n\n-- \nYour mouse has moved.\nWindows NT must be restarted for the change to take effect.\nReboot now? [OK]\n",
"msg_date": "19 Jul 2001 17:14:32 +0200",
"msg_from": "Adrian Phillips <adrianp@powertech.no>",
"msg_from_op": false,
"msg_subject": "Re: MySQL Gemini code"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm playing around with the Full Text Indexing module, and I notice that\nit's case-sensitive. This seems to be pretty useless to me - especially for\nmy application. I wonder if there'd be any objections to me modifying it to\nbe case-insensitive. Or at least be configurable either way...\n\nAlso, the fti.pl that comes with the contrib seems to be using an outdated\nversion of CPAN's Pg.pm.\n\nThe Perl script currently does stuff in a procedural way:\n\nie. print(PQErrorMessage($conn))\n\nWhere it seems to need to be:\n\n print($conn->errorMessage).\n\nI'm not sure if I'm missing something here, but I could also update it to\nuse the new interface.\n\nRegards,\n\nChris\n\n",
"msg_date": "Wed, 18 Jul 2001 14:43:00 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Full Text Indexing"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I'm playing around with the Full Text Indexing module, and I notice that\n> it's case-sensitive. This seems to be pretty useless to me - especially for\n> my application. I wonder if there'd be any objections to me modifying it to\n> be case-insensitive. Or at least be configurable either way...\n\nSeems like a good idea, but make it configurable.\n\n> Also, the fti.pl that comes with the contrib seems to be using an outdated\n> version of CPAN's Pg.pm.\n\nIt hasn't been touched in awhile, so feel free to update it. BTW,\nsomeone ought to look at bringing src/interfaces/perl5 into sync with\nthe CPAN version, too. Or possibly we should stop distributing that\naltogether, if the CPAN copy is being maintained?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 11:44:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Full Text Indexing "
},
{
"msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > I'm playing around with the Full Text Indexing module, and I notice that\n> > it's case-sensitive. This seems to be pretty useless to me -\n> especially for\n> > my application. I wonder if there'd be any objections to me\n> modifying it to\n> > be case-insensitive. Or at least be configurable either way...\n>\n> Seems like a good idea, but make it configurable.\n\nI actually came up with another way of solving the problem.\n\nThe FTI table has two columns: (string, id). The code needs to do two\nthings; delete all strings for an id, and join to the main table based on\nthe id.\n\nThe docs for FTI recommend indexing (string, id). This is poor as the\ndelete based on id does a sequential scan, although the join seems to be\nable to use the index (as long as you have a where string ~ '^something').\n\nI indexed as follows:\n\n-- Functional index that lets us do case-insensitivity without hacking\nfti.so\nCREATE INDEX fti_string_idx ON fti_table(lower(string));\n\n-- Index on id to allow fast deletes\nCREATE INDEX fti_id_idx ON fti_table(id);\n\nThat seems to be a good solution to me - it allows case-insensitivity, fast\ndeletion and fast joining.\n\n> > Also, the fti.pl that comes with the contrib seems to be using\n> an outdated\n> > version of CPAN's Pg.pm.\n>\n> It hasn't been touched in awhile, so feel free to update it. BTW,\n> someone ought to look at bringing src/interfaces/perl5 into sync with\n> the CPAN version, too. Or possibly we should stop distributing that\n> altogether, if the CPAN copy is being maintained?\n\nI'll have a look someday maybe, but I'll try to get this\nharder-than-expected ADD CONSTRAINT UNIQUE/PRIMARY patch off my hands first.\n\nChris\n\n",
"msg_date": "Mon, 23 Jul 2001 10:33:37 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Full Text Indexing "
},
{
"msg_contents": "Doh - sorry about these hideously late posts. I think my mail queue has\nbeen clogged up for a while!\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Monday, 23 July 2001 10:34 AM\n> To: Tom Lane\n> Cc: Hackers\n> Subject: Re: [HACKERS] Full Text Indexing\n>\n>\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > I'm playing around with the Full Text Indexing module, and I\n> notice that\n> > > it's case-sensitive. This seems to be pretty useless to me -\n> > especially for\n> > > my application. I wonder if there'd be any objections to me\n> > modifying it to\n> > > be case-insensitive. Or at least be configurable either way...\n> >\n> > Seems like a good idea, but make it configurable.\n>\n> I actually came up with another way of solving the problem.\n>\n> The FTI table has two columns: (string, id). The code needs to do two\n> things; delete all strings for an id, and join to the main table based on\n> the id.\n>\n> The docs for FTI recommend indexing (string, id). This is poor as the\n> delete based on id does a sequential scan, although the join seems to be\n> able to use the index (as long was you have a where string ~\n> '^something').\n>\n> I indexed as follows:\n>\n> -- Functional index that lets us do case-insensitivity without hacking\n> fti.so\n> CREATE INDEX fti_string_idx ON fti_table(lower(string));\n>\n> -- Index on id to allow fast deletes\n> CREATE INDEX fti_id_idx ON fti_table(id);\n>\n> That seems to be a good solution to me - it allows\n> case-insensitivity, fast\n> deletion and fast joining.\n>\n> > > Also, the fti.pl that comes with the contrib seems to be using\n> > an outdated\n> > > version of CPAN's Pg.pm.\n> >\n> > It hasn't been touched in awhile, so feel free to update it. 
BTW,\n> > someone ought to look at bringing src/interfaces/perl5 into sync with\n> > the CPAN version, too. Or possibly we should stop distributing that\n> > altogether, if the CPAN copy is being maintained?\n>\n> I'll have a look someday maybe, but I'll try to get this\n> harder-than-expected ADD CONSTRAINT UNIQUE/PRIMARY patch off my\n> hands first.\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Fri, 31 Aug 2001 09:43:03 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Full Text Indexing "
}
] |
[
{
"msg_contents": "On Wed, 18 Jul 2001 02:44:24 GMT, alavoor <alavoor@yahoo.com> wrote:\n\n>\n>All computers in the world MUST sync with ATOMIC clock before 12:00 AM\n>21 July 2001!!!\n\nWHY???\n\n\n",
"msg_date": "Wed, 18 Jul 2001 07:03:55 GMT",
"msg_from": "rick@edg.nl (Rick Schippers)",
"msg_from_op": true,
"msg_subject": "Re: All computers in the world MUST sync with ATOMIC clock before\n\t12:00 AM 21 July 2001!!!"
},
{
"msg_contents": "alavoor wrote:\n> \n> You must sync your PC's date and time with Cesium Atomic clock.\n> \n> Use this very small and tiny program written in PHP.\n\nBetter yet, do it with a program that uses the NTP protocol, which is the\nstandard way of doing it.\n\nGorazd\n",
"msg_date": "Wed, 18 Jul 2001 10:02:37 +0200",
"msg_from": "Gorazd Bozic <gorazd.bozic@arnes.si>",
"msg_from_op": false,
"msg_subject": "Re: All computers in the world MUST sync with ATOMIC clock before\n\t12:00 AM 21 July 2001!!!"
}
] |
[
{
"msg_contents": "\nWe are seeing what seems to me to be very peculiar behaviour. We have a\nschema upgrade script that alters the schema of an existing production\ndatabase. One of the things we do is create two new indexes. The script\nthen immediately performs a vacuum analyze.\n\nThe problem is (or was) that this analyze didn't seem to work. Queries\nperformed thereafter would run slowly. Doing another vacuum analyze later\non would fix this, and queries would then perform well.\n\nWe have two approaches that fix this. The first was to just sleep for two\nseconds between creating the indexes and doing the vacuum analyze. The\nsecond was to perform an explicit checkpoint between index creation and\nvacuum analyze. The second approach seems the most sound, the sleep\napproach relies too much on coincidence. But both work in our tests so\nfar.\n\nHowever, why is this so? Can analyze not work properly unless the data\nfiles have all been fsynced to disk? Does the WAL really stop analyze from\nworking?\n\nEven stranger, it turns out that doing the checkpoint _after_ the vacuum\nanalyze also fixes this behaviour, ie queries perform well\nimmediately. This part is _so_ strange that I'm tempted to just not\nbelieve it ever happened... except that it seems it did.\n\nAny insights? Is this expected behaviour? Can anyone explain why this is\nhappening? We have a workaround (checkpoint), so we're not too concerned,\nbut would like to understand what's going on.\n\nPlatform is PG7.1.2 on Red Hat Linux 6.2, x86.\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen tim@proximity.com.au\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n",
"msg_date": "Wed, 18 Jul 2001 17:14:39 +1000 (EST)",
"msg_from": "Tim Allen <tim@proximity.com.au>",
"msg_from_op": true,
"msg_subject": "analyze strangeness"
},
{
"msg_contents": "Tim Allen <tim@proximity.com.au> writes:\n> The problem is (or was) that this analyze didn't seem to work. Queries\n> performed thereafter would run slowly. Doing another vacuum analyze later\n> on would fix this, and queries would then perform well.\n\nThis makes no sense to me, either. Can you put together a\nself-contained test case that demonstrates the problem?\n\nOne thing that would be useful is to compare the planner statistics\nproduced by the first and second vacuums. To see the stats, do\n\nselect relname,relpages,reltuples from pg_class where\nrelname in ('tablename', 'indexname', ...);\n\n(include each index on the table, as well as the table itself) and also\n\nselect attname,attdispersion,s.*\nfrom pg_statistic s, pg_attribute a, pg_class c\nwhere starelid = c.oid and attrelid = c.oid and staattnum = attnum\nand relname = 'tablename';\n\n\n> Even stranger, it turns out that doing the checkpoint _after_ the vacuum\n> analyze also fixes this behaviour, ie queries perform well\n> immediately.\n\nI don't really believe that checkpoint has anything to do with it.\nHowever, if the queries are being done in a different backend than the\none doing the vacuum, is it possible that the other backend is inside an\nopen transaction and does not see the catalog updates from the\nlater-starting vacuum transaction?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 12:04:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: analyze strangeness "
}
] |
[
{
"msg_contents": "\n> I just had an idea about how to avoid this cost:\n> why not recycle old log segments? At the point where the code\n> currently deletes a no-longer-needed segment, just rename it to\n> become the next created-in-advance segment.\n\nYes, since I already suggested this on Feb 26. I naturally think this \nis a good idea, iirc Vadim also stated similar ideas.\n\nhttp://fts.postgresql.org/db/mw/msg.html?mid=73076\n\nMaybe I did not make myself clear enough though, you clearly did better :-)\n\n> Another issue is whether the recycling logic should be \"always recycle\"\n> (hence number of extant WAL segments will never decrease), or should\n> it be more like \"recycle if there are fewer than WAL_FILES advance\n> segments, else delete\".\n\nYes, I think we should use the WAL_FILES parameter to state how many WAL files\nshould be kept around, or better yet only use it if it is not 0.\nThus the default would be to never decrease, but if the admin went to the \ntrouble of specifying a (good) value, that should imho be honored.\n\nAndreas\n",
"msg_date": "Wed, 18 Jul 2001 10:14:28 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Idea: recycle WAL segments, don't delete/recreate '\n\tem"
},
{
"msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Yes, since I already suggested this on Feb 26.\n\nSo you did. Darn, I thought it was original ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 11:09:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: Idea: recycle WAL segments, don't delete/recreate ' em "
}
] |
[
{
"msg_contents": "Eugene Faukin (elf@solvo.ru) reports a bug with a severity of 2\nThe lower the number the more severe it is.\n\nShort Description\nlibpgtcl doesn't use UTF encoding of TCL\n\nLong Description\nModern versions of the TCL (8.2 at least) use UTF encoding to internal\nstorage of the text. libpgtcl uses TCL functions to insert strings directly\ninto TCL internal structure without any conversion.\n\n\nSample Code\nI can suggest you next patch I use for myself:\n\ndiff -uNr postgresql-7.0.2.orig/src/interfaces/libpgtcl/pgtclCmds.c postgresql-7.0.2/src/interfaces/libpgtcl/pgtclCmds.c\n--- postgresql-7.0.2.orig/src/interfaces/libpgtcl/pgtclCmds.c Wed Apr 12 21:17:11 2000\n+++ postgresql-7.0.2/src/interfaces/libpgtcl/pgtclCmds.c Thu Nov 16 20:26:37 2000\n@@ -431,6 +431,7 @@\n Pg_ConnectionId *connid;\n PGconn *conn;\n PGresult *result;\n+ Tcl_DString putString;\n \n if (argc != 3)\n {\n@@ -449,7 +450,9 @@\n return TCL_ERROR;\n }\n \n- result = PQexec(conn, argv[2]);\n+ Tcl_UtfToExternalDString(NULL, argv[2], -1, &putString);\n+ result = PQexec(conn, Tcl_DStringValue(&putString));\n+ Tcl_DStringFree(&putString);\n \n /* Transfer any notify events from libpq to Tcl event queue. 
*/\n PgNotifyTransferEvents(connid);\n@@ -535,6 +538,7 @@\n char *arrVar;\n char nameBuffer[256];\n const char *appendstr;\n+ Tcl_DString retString;\n \n if (argc < 3 || argc > 5)\n {\n@@ -685,11 +689,24 @@\n }\n #ifdef TCL_ARRAYS\n for (i = 0; i < PQnfields(result); i++)\n- Tcl_AppendElement(interp, tcl_value(PQgetvalue(result, tupno, i)));\n+ {\n+ Tcl_ExternalToUtfDString(NULL,\n+ tcl_value(PQgetvalue(result,\n+ tupno, i)),\n+ -1, &retString);\n+ Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n+ }\n #else\n for (i = 0; i < PQnfields(result); i++)\n- Tcl_AppendElement(interp, PQgetvalue(result, tupno, i));\n+ {\n+ Tcl_ExternalToUtfDString(NULL,\n+ PQgetvalue(result, tupno, i),\n+ -1, &retString);\n+ \n+ Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n+ }\n #endif\n+ Tcl_DStringFree(&retString);\n return TCL_OK;\n }\n else if (strcmp(opt, \"-tupleArray\") == 0)\n@@ -707,21 +724,35 @@\n }\n for (i = 0; i < PQnfields(result); i++)\n {\n- if (Tcl_SetVar2(interp, argv[4], PQfname(result, i),\n+ Tcl_ExternalToUtfDString(NULL, \n #ifdef TCL_ARRAYS\n- tcl_value(PQgetvalue(result, tupno, i)),\n+ tcl_value(PQgetvalue(result,\n+ tupno, i)),\n #else\n- PQgetvalue(result, tupno, i),\n+ PQgetvalue(result, tupno, i),\n #endif\n- TCL_LEAVE_ERR_MSG) == NULL)\n- return TCL_ERROR;\n+ -1, &retString);\n+\n+ if (Tcl_SetVar2(interp, argv[4], PQfname(result, i),\n+ Tcl_DStringValue(&retString),\n+ TCL_LEAVE_ERR_MSG) == NULL)\n+ {\n+ Tcl_DStringFree(&retString);\n+ return TCL_ERROR;\n+ }\n }\n+ Tcl_DStringFree(&retString);\n return TCL_OK;\n }\n else if (strcmp(opt, \"-attributes\") == 0)\n {\n for (i = 0; i < PQnfields(result); i++)\n- Tcl_AppendElement(interp, PQfname(result, i));\n+ {\n+ Tcl_ExternalToUtfDString(NULL, PQfname(result, i),\n+ -1, &retString);\n+ Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n+ Tcl_DStringFree(&retString);\n+ }\n return TCL_OK;\n }\n else if (strcmp(opt, \"-lAttributes\") == 0)\n@@ -1274,6 +1305,8 @@\n column,\n ncols;\n 
Tcl_DString headers;\n+ Tcl_DString retString;\n+ Tcl_DString putString;\n char buffer[2048];\n struct info_s\n {\n@@ -1292,7 +1325,11 @@\n if (conn == (PGconn *) NULL)\n return TCL_ERROR;\n \n- if ((result = PQexec(conn, argv[2])) == 0)\n+ Tcl_UtfToExternalDString(NULL, argv[2], -1, &putString);\n+ result = PQexec(conn, Tcl_DStringValue(&putString));\n+ Tcl_DStringFree(&putString);\n+\n+ if (result == 0)\n {\n /* error occurred sending the query */\n Tcl_SetResult(interp, PQerrorMessage(conn), TCL_VOLATILE);\n@@ -1340,13 +1377,21 @@\n Tcl_SetVar2(interp, argv[3], \".tupno\", buffer, 0);\n \n for (column = 0; column < ncols; column++)\n- Tcl_SetVar2(interp, argv[3], info[column].cname,\n+ {\n+ Tcl_ExternalToUtfDString(NULL,\n #ifdef TCL_ARRAYS\n- tcl_value(PQgetvalue(result, tupno, column)),\n+ tcl_value(PQgetvalue(result,\n+ tupno,\n+ column)),\n #else\n- PQgetvalue(result, tupno, column),\n+ PQgetvalue(result, tupno, column),\n #endif\n- 0);\n+ -1, &retString);\n+\n+ Tcl_SetVar2(interp, argv[3], info[column].cname,\n+ Tcl_DStringValue(&retString), 0);\n+ Tcl_DStringFree(&retString);\n+ }\n \n Tcl_SetVar2(interp, argv[3], \".command\", \"update\", 0);\n \n\n\nNo file was uploaded with this report\n\n",
"msg_date": "Wed, 18 Jul 2001 04:18:23 -0400 (EDT)",
"msg_from": "pgsql-bugs@postgresql.org",
"msg_from_op": true,
"msg_subject": "libpgtcl doesn't use UTF encoding of TCL"
},
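The pattern in the patch above is "transcode at the library boundary": `Tcl_UtfToExternalDString` converts Tcl's internal UTF-8 to the system encoding before the string reaches `PQexec`, and `Tcl_ExternalToUtfDString` converts query results back. The same shape can be shown without the Tcl headers using POSIX `iconv` as a stand-in; the function name and the Latin-1 target encoding below are illustrative assumptions, not part of the patch.

```c
#include <iconv.h>
#include <string.h>

/* Stand-in for the conversion step in the patch: transcode a UTF-8
 * string to an external encoding (here ISO-8859-1) before handing it
 * to a library that does not speak UTF-8, just as the patch converts
 * argv[2] with Tcl_UtfToExternalDString before calling PQexec(). */
static int utf8_to_latin1(const char *in, char *out, size_t outsize)
{
    iconv_t cd = iconv_open("ISO-8859-1", "UTF-8");
    if (cd == (iconv_t) -1)
        return -1;

    char *inp = (char *) in;    /* iconv's prototype is not const-clean */
    size_t inleft = strlen(in);
    char *outp = out;
    size_t outleft = outsize - 1;

    size_t rc = iconv(cd, &inp, &inleft, &outp, &outleft);
    iconv_close(cd);
    if (rc == (size_t) -1)
        return -1;
    *outp = '\0';
    return 0;
}
```

The mirror-image function (external to UTF-8) has the same structure with the encoding arguments swapped, which is exactly the `UtfToExternal` / `ExternalToUtf` pairing the patch uses.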
{
"msg_contents": "\nDo you have any idea how this will work with earlier TCL versions? When\nwas Tcl_UtfToExternalDString added to TCL?\n\n> Eugene Faukin (elf@solvo.ru) reports a bug with a severity of 2\n> The lower the number the more severe it is.\n> \n> Short Description\n> libpgtcl doesn't use UTF encoding of TCL\n> \n> Long Description\n> Modern versions of the TCL (8.2 at least) use UTF encoding to internal\n> storage of the text. libpgtcl uses TCL functions to insert strings directly\n> into TCL internal structure without any conversion.\n> \n> \n> Sample Code\n> I can suggest you next patch I use for myself:\n> \n> diff -uNr postgresql-7.0.2.orig/src/interfaces/libpgtcl/pgtclCmds.c postgresql-7.0.2/src/interfaces/libpgtcl/pgtclCmds.c\n> --- postgresql-7.0.2.orig/src/interfaces/libpgtcl/pgtclCmds.c Wed Apr 12 21:17:11 2000\n> +++ postgresql-7.0.2/src/interfaces/libpgtcl/pgtclCmds.c Thu Nov 16 20:26:37 2000\n> @@ -431,6 +431,7 @@\n> Pg_ConnectionId *connid;\n> PGconn *conn;\n> PGresult *result;\n> + Tcl_DString putString;\n> \n> if (argc != 3)\n> {\n> @@ -449,7 +450,9 @@\n> return TCL_ERROR;\n> }\n> \n> - result = PQexec(conn, argv[2]);\n> + Tcl_UtfToExternalDString(NULL, argv[2], -1, &putString);\n> + result = PQexec(conn, Tcl_DStringValue(&putString));\n> + Tcl_DStringFree(&putString);\n> \n> /* Transfer any notify events from libpq to Tcl event queue. 
*/\n> PgNotifyTransferEvents(connid);\n> @@ -535,6 +538,7 @@\n> char *arrVar;\n> char nameBuffer[256];\n> const char *appendstr;\n> + Tcl_DString retString;\n> \n> if (argc < 3 || argc > 5)\n> {\n> @@ -685,11 +689,24 @@\n> }\n> #ifdef TCL_ARRAYS\n> for (i = 0; i < PQnfields(result); i++)\n> - Tcl_AppendElement(interp, tcl_value(PQgetvalue(result, tupno, i)));\n> + {\n> + Tcl_ExternalToUtfDString(NULL,\n> + tcl_value(PQgetvalue(result,\n> + tupno, i)),\n> + -1, &retString);\n> + Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n> + }\n> #else\n> for (i = 0; i < PQnfields(result); i++)\n> - Tcl_AppendElement(interp, PQgetvalue(result, tupno, i));\n> + {\n> + Tcl_ExternalToUtfDString(NULL,\n> + PQgetvalue(result, tupno, i),\n> + -1, &retString);\n> + \n> + Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n> + }\n> #endif\n> + Tcl_DStringFree(&retString);\n> return TCL_OK;\n> }\n> else if (strcmp(opt, \"-tupleArray\") == 0)\n> @@ -707,21 +724,35 @@\n> }\n> for (i = 0; i < PQnfields(result); i++)\n> {\n> - if (Tcl_SetVar2(interp, argv[4], PQfname(result, i),\n> + Tcl_ExternalToUtfDString(NULL, \n> #ifdef TCL_ARRAYS\n> - tcl_value(PQgetvalue(result, tupno, i)),\n> + tcl_value(PQgetvalue(result,\n> + tupno, i)),\n> #else\n> - PQgetvalue(result, tupno, i),\n> + PQgetvalue(result, tupno, i),\n> #endif\n> - TCL_LEAVE_ERR_MSG) == NULL)\n> - return TCL_ERROR;\n> + -1, &retString);\n> +\n> + if (Tcl_SetVar2(interp, argv[4], PQfname(result, i),\n> + Tcl_DStringValue(&retString),\n> + TCL_LEAVE_ERR_MSG) == NULL)\n> + {\n> + Tcl_DStringFree(&retString);\n> + return TCL_ERROR;\n> + }\n> }\n> + Tcl_DStringFree(&retString);\n> return TCL_OK;\n> }\n> else if (strcmp(opt, \"-attributes\") == 0)\n> {\n> for (i = 0; i < PQnfields(result); i++)\n> - Tcl_AppendElement(interp, PQfname(result, i));\n> + {\n> + Tcl_ExternalToUtfDString(NULL, PQfname(result, i),\n> + -1, &retString);\n> + Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n> + 
Tcl_DStringFree(&retString);\n> + }\n> return TCL_OK;\n> }\n> else if (strcmp(opt, \"-lAttributes\") == 0)\n> @@ -1274,6 +1305,8 @@\n> column,\n> ncols;\n> Tcl_DString headers;\n> + Tcl_DString retString;\n> + Tcl_DString putString;\n> char buffer[2048];\n> struct info_s\n> {\n> @@ -1292,7 +1325,11 @@\n> if (conn == (PGconn *) NULL)\n> return TCL_ERROR;\n> \n> - if ((result = PQexec(conn, argv[2])) == 0)\n> + Tcl_UtfToExternalDString(NULL, argv[2], -1, &putString);\n> + result = PQexec(conn, Tcl_DStringValue(&putString));\n> + Tcl_DStringFree(&putString);\n> +\n> + if (result == 0)\n> {\n> /* error occurred sending the query */\n> Tcl_SetResult(interp, PQerrorMessage(conn), TCL_VOLATILE);\n> @@ -1340,13 +1377,21 @@\n> Tcl_SetVar2(interp, argv[3], \".tupno\", buffer, 0);\n> \n> for (column = 0; column < ncols; column++)\n> - Tcl_SetVar2(interp, argv[3], info[column].cname,\n> + {\n> + Tcl_ExternalToUtfDString(NULL,\n> #ifdef TCL_ARRAYS\n> - tcl_value(PQgetvalue(result, tupno, column)),\n> + tcl_value(PQgetvalue(result,\n> + tupno,\n> + column)),\n> #else\n> - PQgetvalue(result, tupno, column),\n> + PQgetvalue(result, tupno, column),\n> #endif\n> - 0);\n> + -1, &retString);\n> +\n> + Tcl_SetVar2(interp, argv[3], info[column].cname,\n> + Tcl_DStringValue(&retString), 0);\n> + Tcl_DStringFree(&retString);\n> + }\n> \n> Tcl_SetVar2(interp, argv[3], \".command\", \"update\", 0);\n> \n> \n> \n> No file was uploaded with this report\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 11:24:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "On Wed, 18 Jul 2001, Bruce Momjian wrote:\n\n> Do you have any idea how this will work with earlier TCL versions?\n\nIt won't. If pgtcl is supposed to still be able to compile with older\nversions of Tcl, the changes have to be made a compile time option.\n\n> When was Tcl_UtfToExternalDString added to TCL?\n\nAccording to Tcl's changelog files, it was added in 1997 for Tcl 8.1\nwich was released in 1999.\n\ncu\n\tReinhard Max\n\nMaintainer of the Tcl/Tk and PostgreSQL packages for SuSE Linux\n\n",
"msg_date": "Wed, 18 Jul 2001 18:14:59 +0200 (CEST)",
"msg_from": "Reinhard Max <max@suse.de>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "> On Wed, 18 Jul 2001, Bruce Momjian wrote:\n> \n> > Do you have any idea how this will work with earlier TCL versions?\n> \n> It won't. If pgtcl is supposed to still be able to compile with older\n> versions of Tcl, the changes have to be made a compile time option.\n\nCan't we probe the TCL version in the code and handle it that way?\n\n> > When was Tcl_UtfToExternalDString added to TCL?\n> \n> According to Tcl's changelog files, it was added in 1997 for Tcl 8.1\n> wich was released in 1999.\n\nShame. I think we could have required Tcl 8.X but clearly we can't be\nrequring >= 8.1. I have 8.0 here and I am sure many do as well.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 12:22:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "Reinhard Max <max@suse.de> writes:\n> On Wed, 18 Jul 2001, Bruce Momjian wrote:\n>> Do you have any idea how this will work with earlier TCL versions?\n\n> It won't. If pgtcl is supposed to still be able to compile with older\n> versions of Tcl, the changes have to be made a compile time option.\n\nPlease do that and resubmit the patch. We really don't want to give up\nbackwards compatibility just yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 14:53:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL "
},
{
"msg_contents": "On Wed, Jul 18, 2001 at 02:53:22PM -0400, Tom Lane wrote:\n \n> > It won't. If pgtcl is supposed to still be able to compile with older\n> > versions of Tcl, the changes have to be made a compile time option.\n> \n> Please do that and resubmit the patch. We really don't want to give up\n> backwards compatibility just yet.\n> \n\nThank you, gentlemans :-)\nI will be interest with this result too.\n\n-- \nEugene Faukin\nSOLVO Ltd. Company\n",
"msg_date": "Thu, 19 Jul 2001 10:26:59 +0400",
"msg_from": "Eugene Fokin <elf@solvo.ru>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "> On Wed, Jul 18, 2001 at 02:53:22PM -0400, Tom Lane wrote:\n> \n> > > It won't. If pgtcl is supposed to still be able to compile with older\n> > > versions of Tcl, the changes have to be made a compile time option.\n> > \n> > Please do that and resubmit the patch. We really don't want to give up\n> > backwards compatibility just yet.\n> > \n> \n> Thank you, gentlemans :-)\n> I will be interest with this result too.\n\nI believe $tcl_version is what you want to use to test.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 19 Jul 2001 06:28:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "On Wed, 18 Jul 2001, Tom Lane wrote:\n\n> Reinhard Max <max@suse.de> writes:\n> > On Wed, 18 Jul 2001, Bruce Momjian wrote:\n> >> Do you have any idea how this will work with earlier TCL versions?\n>\n> > It won't. If pgtcl is supposed to still be able to compile with older\n> > versions of Tcl, the changes have to be made a compile time option.\n>\n> Please do that and resubmit the patch.\n\nOK, I'll pack the new stuff inside #ifdef TCL_UTF8 and define that if\nthe Tcl version is 8.1 or greater.\n\nCan anybody tell me, if that TCL_ARRAYS stuff is still good for\nsomething? If I could remove it, TCL_UTF8 would be less complex at\nsome places, because I'd only have to cover two instead of four cases.\n\nBTW, I think the proposed Patch doesn't go far enough as it assumes\nthe database (client) encoding is identical to the system encoding by\nusing NULL as the first argument for Tcl_UtfToExternalDString and\nTcl_ExternalToUtfDString. I think it should instead either use the\ndatabase's encoding for the conversion to be correct or set\nPostgreSQL's client encoding to UNICODE so that no conversion would be\nneeded. Unfortunately, I don't have the time to do that at the moment.\n\n> We really don't want to give up backwards compatibility just yet.\n\nHow far do you want it to be backwards compatible? If >= 8.0 is OK,\nI'll possibly overwork libpq later this year to use Tcl's object\ninterface. I expect at least some performance gain out of this.\n\ncu\n\tReinhard\n\n\n",
"msg_date": "Thu, 19 Jul 2001 12:44:30 +0200 (CEST)",
"msg_from": "Reinhard Max <max@suse.de>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL "
},
{
"msg_contents": "On Wed, 18 Jul 2001, Tom Lane wrote:\n\n> Reinhard Max <max@suse.de> writes:\n> > On Wed, 18 Jul 2001, Bruce Momjian wrote:\n> >> Do you have any idea how this will work with earlier TCL versions?\n>\n> > It won't. If pgtcl is supposed to still be able to compile with older\n> > versions of Tcl, the changes have to be made a compile time option.\n>\n> Please do that and resubmit the patch.\n\nHere it is, but I consider it still incomplete and I have not done\nexhaustive testing. Some more occurrences of PQexec and PQgetvalue\nneed to be wrapped up with UTF8 conversion, but I'll not have the time\nto do it for the next 1-2 weeks.\n\ncu\n\tReinhard",
"msg_date": "Thu, 19 Jul 2001 16:05:35 +0200 (CEST)",
"msg_from": "Reinhard Max <max@suse.de>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL "
},
{
"msg_contents": "Reinhard Max <max@suse.de> writes:\n>> [ concerning libpgtcl ]\n>> We really don't want to give up backwards compatibility just yet.\n\n> How far do you want it to be backwards compatible? If >= 8.0 is OK,\n> I'll possibly overwork libpq later this year to use Tcl's object\n> interface. I expect at least some performance gain out of this.\n\nThat's been on the to-do list for awhile, but we haven't really faced up\nto the question of whether Tcl 7.* compatibility is still important to\nretain. I can see three plausible paths:\n\n1. Drop 7.* compatibility, rework code to use 8.0 object interfaces.\n\n2. Rework code to use object interfaces #if TCL >= 8.0, else not.\n\n3. Build a separate, new implementation that's only for Tcl >= 8.0,\n but leave the old code available as a separate library.\n\nMy guess is that #2 would uglify the code to the point of\nunmaintainability --- but I haven't really looked to see how extensive\nthe changes might be; perhaps it'd be workable. #3 would create a\ndifferent sort of maintainability issue, namely updating two parallel\nsets of code when there were common bugs. Probably the old code would\nget dropped at some future time anyway, so that ultimately #3 becomes\n#1.\n\nI don't have a strong opinion about what to do. I've cc'd this to\npgsql-interfaces, maybe we can get some comments there. Does anyone\nstill use/care about Tcl 7.* ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 13:40:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] libpgtcl doesn't use UTF encoding of TCL "
},
{
"msg_contents": "Reinhard Max <max@suse.de> writes:\n> Can anybody tell me, if that TCL_ARRAYS stuff is still good for\n> something? If I could remove it, TCL_UTF8 would be less complex at\n> some places, because I'd only have to cover two instead of four cases.\n\nTCL_ARRAYS is a kluge in my humble opinion, but we haven't really\nsettled on something better --- in particular, the challenge is to\nfix it without breaking other things. See the mailing list archives\nfor prior discussions, notably the thread starting at\n\nhttp://fts.postgresql.org/db/mw/msg.html?mid=40589\n\nUnless you want to tackle the problem of providing a replacement\nfeature, I'd recommend just adding the extra code :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 14:58:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "libpgtcl and TCL_ARRAYS"
},
{
"msg_contents": "> My guess is that #2 would uglify the code to the point of\n> unmaintainability --- but I haven't really looked to see how extensive\n> the changes might be; perhaps it'd be workable. #3 would create a\n> different sort of maintainability issue, namely updating two parallel\n> sets of code when there were common bugs. Probably the old code would\n> get dropped at some future time anyway, so that ultimately #3 becomes\n> #1.\n> \n> I don't have a strong opinion about what to do. I've cc'd this to\n> pgsql-interfaces, maybe we can get some comments there. Does anyone\n> still use/care about Tcl 7.* ?\n\nBSD/OS still ships with 7.5 but I have requested 8.0 in the next\nrelease. Let's see what happens. BSD/OS is usually up on the newer\nstuff so it is strange they have held back. I think part of the problem\nis that everyone is not happy with how TCL went with 8.1+ so many have\nheld back on upgrading and are still on 7.5 or 8.0.5. My guess is\nconsidering the number of platforms we support that we will have to keep\n7.5 perhaps another year. Part of the problem is that we haven't had\nany complaints about the 7.5-supported functionality.\n\nI have 8.0.5 custom-installed here. I agree dropping 7.5 support is the\nonly reasonable option rather than ifdef or separate files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 10:08:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "> On Wed, 18 Jul 2001, Tom Lane wrote:\n> \n> > Reinhard Max <max@suse.de> writes:\n> > > On Wed, 18 Jul 2001, Bruce Momjian wrote:\n> > >> Do you have any idea how this will work with earlier TCL versions?\n> >\n> > > It won't. If pgtcl is supposed to still be able to compile with older\n> > > versions of Tcl, the changes have to be made a compile time option.\n> >\n> > Please do that and resubmit the patch.\n> \n> OK, I'll pack the new stuff inside #ifdef TCL_UTF8 and define that if\n> the Tcl version is 8.1 or greater.\n\nIs the TCL_UTF8 some variable that gets set at tcl runtime? I hope so. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 10:10:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "Reinhard Max writes:\n\n> OK, I'll pack the new stuff inside #ifdef TCL_UTF8 and define that if\n> the Tcl version is 8.1 or greater.\n\nNo, please add a configure check for Tcl_UtfToExternalDString or some\nother function representative of this interface..\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Fri, 20 Jul 2001 16:22:08 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL "
},
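Peter's alternative, a feature test rather than a version test, would look roughly like this in the autoconf 2.13-era `configure.in`; the symbol name `HAVE_TCL_UTF` is an assumption for illustration:

```
AC_CHECK_LIB(tcl, Tcl_UtfToExternalDString,
             [AC_DEFINE(HAVE_TCL_UTF)])
```

The code would then test `#ifdef HAVE_TCL_UTF` instead of comparing `TCL_MAJOR_VERSION`/`TCL_MINOR_VERSION`.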
{
"msg_contents": "On Fri, 20 Jul 2001, Peter Eisentraut wrote:\n\n> Reinhard Max writes:\n>\n> > OK, I'll pack the new stuff inside #ifdef TCL_UTF8 and define that if\n> > the Tcl version is 8.1 or greater.\n>\n> No, please add a configure check for Tcl_UtfToExternalDString or\n> some other function representative of this interface..\n\nWhy make simple things complicated?\nTcl changed it's internal string representation starting with release\n8.1 . It is not an interface one can decide whether to use it or not.\nEvery extension that imports or exports strings and gets compiled for\nTcl >= 8.1 has to make sure that they are UTF8 regardless, if it uses\nthe Tcl_*Utf*DString functions or something else. So I consider it\nsufficient to define TCL_UTF8 if Tcl's Version is >= 8.1 as I did in\nthe patch that was attached to my last mail.\n\ncu\n\tReinhard\n\n",
"msg_date": "Fri, 20 Jul 2001 17:09:03 +0200 (CEST)",
"msg_from": "Reinhard Max <max@suse.de>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL "
},
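Reinhard's compile-time gate can be expressed entirely with the version macros that `tcl.h` already provides. The sketch below supplies stand-in values for those macros so it compiles on its own; in real code they would come from `<tcl.h>`, and the helper function is only an illustration of branching on the one symbol.

```c
/* Stand-ins for the macros normally defined by <tcl.h>; present here
 * only so this sketch is self-contained. */
#ifndef TCL_MAJOR_VERSION
#define TCL_MAJOR_VERSION 8
#define TCL_MINOR_VERSION 1
#endif

/* Define TCL_UTF8 exactly when building against Tcl 8.1 or newer,
 * the first release whose internal string representation is UTF-8. */
#if TCL_MAJOR_VERSION > 8 || \
    (TCL_MAJOR_VERSION == 8 && TCL_MINOR_VERSION >= 1)
#define TCL_UTF8
#endif

/* Conversion sites can then branch on the one symbol instead of
 * repeating the version arithmetic everywhere. */
int tcl_needs_utf_conversion(void)
{
#ifdef TCL_UTF8
    return 1;
#else
    return 0;
#endif
}
```

This is the same shape the later pltcl.c patch in this thread uses, where `TCL_UTF8` (there spelled via `ENABLE_PLTCL_UTF`) selects between real `Tcl_UtfToExternalDString` calls and identity macros.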
{
"msg_contents": "> > On Wed, 18 Jul 2001, Tom Lane wrote:\n> > \n> > > Reinhard Max <max@suse.de> writes:\n> > > > On Wed, 18 Jul 2001, Bruce Momjian wrote:\n> > > >> Do you have any idea how this will work with earlier TCL versions?\n> > >\n> > > > It won't. If pgtcl is supposed to still be able to compile with older\n> > > > versions of Tcl, the changes have to be made a compile time option.\n> > >\n> > > Please do that and resubmit the patch.\n> > \n> > OK, I'll pack the new stuff inside #ifdef TCL_UTF8 and define that if\n> > the Tcl version is 8.1 or greater.\n> \n> Is the TCL_UTF8 some variable that gets set at tcl runtime? I hope so. \n\nI now realize we can't have this configure at runtime. It has to read\nthe tcl include file for the version it is about to be linked to.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 11:16:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "> On Fri, 20 Jul 2001, Peter Eisentraut wrote:\n> \n> > Reinhard Max writes:\n> >\n> > > OK, I'll pack the new stuff inside #ifdef TCL_UTF8 and define that if\n> > > the Tcl version is 8.1 or greater.\n> >\n> > No, please add a configure check for Tcl_UtfToExternalDString or\n> > some other function representative of this interface..\n> \n> Why make simple things complicated?\n> Tcl changed it's internal string representation starting with release\n> 8.1 . It is not an interface one can decide whether to use it or not.\n> Every extension that imports or exports strings and gets compiled for\n> Tcl >= 8.1 has to make sure that they are UTF8 regardless, if it uses\n> the Tcl_*Utf*DString functions or something else. So I consider it\n> sufficient to define TCL_UTF8 if Tcl's Version is >= 8.1 as I did in\n> the patch that was attached to my last mail.\n\nI think he is OK checking the TCL version. It is pretty common to check\nthe TCL include file for symbols and handle things that way. Do we test\nany other TCL include defines from configure?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 11:17:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "Hum. Why don't you enable --enable-multibyte and\n--enable-unicode-conversion and set client_encoding to UNICODE? That\nwould do a conversion from/to UTF-8 for Tcl 8.x (x > 9) clients?\n--\nTatsuo Ishii\n\n> Eugene Faukin (elf@solvo.ru) reports a bug with a severity of 2\n> The lower the number the more severe it is.\n> \n> Short Description\n> libpgtcl doesn't use UTF encoding of TCL\n> \n> Long Description\n> Modern versions of the TCL (8.2 at least) use UTF encoding to internal\n> storage of the text. libpgtcl uses TCL functions to insert strings directly\n> into TCL internal structure without any conversion.\n> \n> \n> Sample Code\n> I can suggest you next patch I use for myself:\n> \n> diff -uNr postgresql-7.0.2.orig/src/interfaces/libpgtcl/pgtclCmds.c postgresql-7.0.2/src/interfaces/libpgtcl/pgtclCmds.c\n> --- postgresql-7.0.2.orig/src/interfaces/libpgtcl/pgtclCmds.c Wed Apr 12 21:17:11 2000\n> +++ postgresql-7.0.2/src/interfaces/libpgtcl/pgtclCmds.c Thu Nov 16 20:26:37 2000\n> @@ -431,6 +431,7 @@\n> Pg_ConnectionId *connid;\n> PGconn *conn;\n> PGresult *result;\n> + Tcl_DString putString;\n> \n> if (argc != 3)\n> {\n> @@ -449,7 +450,9 @@\n> return TCL_ERROR;\n> }\n> \n> - result = PQexec(conn, argv[2]);\n> + Tcl_UtfToExternalDString(NULL, argv[2], -1, &putString);\n> + result = PQexec(conn, Tcl_DStringValue(&putString));\n> + Tcl_DStringFree(&putString);\n> \n> /* Transfer any notify events from libpq to Tcl event queue. 
*/\n> PgNotifyTransferEvents(connid);\n> @@ -535,6 +538,7 @@\n> char *arrVar;\n> char nameBuffer[256];\n> const char *appendstr;\n> + Tcl_DString retString;\n> \n> if (argc < 3 || argc > 5)\n> {\n> @@ -685,11 +689,24 @@\n> }\n> #ifdef TCL_ARRAYS\n> for (i = 0; i < PQnfields(result); i++)\n> - Tcl_AppendElement(interp, tcl_value(PQgetvalue(result, tupno, i)));\n> + {\n> + Tcl_ExternalToUtfDString(NULL,\n> + tcl_value(PQgetvalue(result,\n> + tupno, i)),\n> + -1, &retString);\n> + Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n> + }\n> #else\n> for (i = 0; i < PQnfields(result); i++)\n> - Tcl_AppendElement(interp, PQgetvalue(result, tupno, i));\n> + {\n> + Tcl_ExternalToUtfDString(NULL,\n> + PQgetvalue(result, tupno, i),\n> + -1, &retString);\n> + \n> + Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n> + }\n> #endif\n> + Tcl_DStringFree(&retString);\n> return TCL_OK;\n> }\n> else if (strcmp(opt, \"-tupleArray\") == 0)\n> @@ -707,21 +724,35 @@\n> }\n> for (i = 0; i < PQnfields(result); i++)\n> {\n> - if (Tcl_SetVar2(interp, argv[4], PQfname(result, i),\n> + Tcl_ExternalToUtfDString(NULL, \n> #ifdef TCL_ARRAYS\n> - tcl_value(PQgetvalue(result, tupno, i)),\n> + tcl_value(PQgetvalue(result,\n> + tupno, i)),\n> #else\n> - PQgetvalue(result, tupno, i),\n> + PQgetvalue(result, tupno, i),\n> #endif\n> - TCL_LEAVE_ERR_MSG) == NULL)\n> - return TCL_ERROR;\n> + -1, &retString);\n> +\n> + if (Tcl_SetVar2(interp, argv[4], PQfname(result, i),\n> + Tcl_DStringValue(&retString),\n> + TCL_LEAVE_ERR_MSG) == NULL)\n> + {\n> + Tcl_DStringFree(&retString);\n> + return TCL_ERROR;\n> + }\n> }\n> + Tcl_DStringFree(&retString);\n> return TCL_OK;\n> }\n> else if (strcmp(opt, \"-attributes\") == 0)\n> {\n> for (i = 0; i < PQnfields(result); i++)\n> - Tcl_AppendElement(interp, PQfname(result, i));\n> + {\n> + Tcl_ExternalToUtfDString(NULL, PQfname(result, i),\n> + -1, &retString);\n> + Tcl_AppendElement(interp, Tcl_DStringValue(&retString));\n> + 
Tcl_DStringFree(&retString);\n> + }\n> return TCL_OK;\n> }\n> else if (strcmp(opt, \"-lAttributes\") == 0)\n> @@ -1274,6 +1305,8 @@\n> column,\n> ncols;\n> Tcl_DString headers;\n> + Tcl_DString retString;\n> + Tcl_DString putString;\n> char buffer[2048];\n> struct info_s\n> {\n> @@ -1292,7 +1325,11 @@\n> if (conn == (PGconn *) NULL)\n> return TCL_ERROR;\n> \n> - if ((result = PQexec(conn, argv[2])) == 0)\n> + Tcl_UtfToExternalDString(NULL, argv[2], -1, &putString);\n> + result = PQexec(conn, Tcl_DStringValue(&putString));\n> + Tcl_DStringFree(&putString);\n> +\n> + if (result == 0)\n> {\n> /* error occurred sending the query */\n> Tcl_SetResult(interp, PQerrorMessage(conn), TCL_VOLATILE);\n> @@ -1340,13 +1377,21 @@\n> Tcl_SetVar2(interp, argv[3], \".tupno\", buffer, 0);\n> \n> for (column = 0; column < ncols; column++)\n> - Tcl_SetVar2(interp, argv[3], info[column].cname,\n> + {\n> + Tcl_ExternalToUtfDString(NULL,\n> #ifdef TCL_ARRAYS\n> - tcl_value(PQgetvalue(result, tupno, column)),\n> + tcl_value(PQgetvalue(result,\n> + tupno,\n> + column)),\n> #else\n> - PQgetvalue(result, tupno, column),\n> + PQgetvalue(result, tupno, column),\n> #endif\n> - 0);\n> + -1, &retString);\n> +\n> + Tcl_SetVar2(interp, argv[3], info[column].cname,\n> + Tcl_DStringValue(&retString), 0);\n> + Tcl_DStringFree(&retString);\n> + }\n> \n> Tcl_SetVar2(interp, argv[3], \".command\", \"update\", 0);\n> \n> \n> \n> No file was uploaded with this report\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n",
"msg_date": "Sun, 22 Jul 2001 20:10:32 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
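Tatsuo's suggestion sidesteps the conversion code in libpgtcl entirely: if the backend is built with multibyte support, libpq can deliver UTF-8 directly, so Tcl's internal representation needs no transcoding. A sketch of what that setup looks like, with the flag spellings as used in the 7.x build system of this thread:

```
-- server built with:  ./configure --enable-multibyte --enable-unicode-conversion
-- then, per connection (or via the PGCLIENTENCODING environment variable):
SET CLIENT_ENCODING TO 'UNICODE';
```

As Eugene notes in the reply, this is a deployment-side workaround; the libpgtcl patch is still useful where the client encoding cannot be forced to UTF-8.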
{
"msg_contents": "On Sun, Jul 22, 2001 at 08:10:32PM +0900, Tatsuo Ishii wrote:\n> Hum. Why don't you enable --enable-multibyte and\n> --enable-unicode-conversion and set client_encoding to UNICODE? That\n> would do a conversion from/to UTF-8 for Tcl 8.x (x > 9) clients?\n\nYou're right. Probably, this way correct enough too :-)\nThank you for suggest.\nBut, I think, patching the libpgtcl has not to be superfluous.\n\n-- \nEugene Faukin\nSOLVO Ltd. Company\n",
"msg_date": "Mon, 23 Jul 2001 11:23:50 +0400",
"msg_from": "Eugene Fokin <elf@solvo.ru>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "> On Wed, 18 Jul 2001, Tom Lane wrote:\n> \n> > Reinhard Max <max@suse.de> writes:\n> > > On Wed, 18 Jul 2001, Bruce Momjian wrote:\n> > >> Do you have any idea how this will work with earlier TCL versions?\n> >\n> > > It won't. If pgtcl is supposed to still be able to compile with older\n> > > versions of Tcl, the changes have to be made a compile time option.\n> >\n> > Please do that and resubmit the patch.\n> \n> Here it is, but I consider it still incomplete and I have not done\n> exhaustive testing. Some more occurrences of PQexec and PQgetvalue\n> need to be wrapped up with UTF8 conversion, but I'll not have the time\n> to do it for the next 1-2 weeks.\n> \n> cu\n> \tReinhard\n\nI have a patch here that handles all the TCL/UTF issues. Would you let\nme know if it is OK?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /junk/pgsql/repo/pgsql/configure.in,v\nretrieving revision 1.132\ndiff -u -r1.132 configure.in\n--- configure.in\t2001/08/01 23:52:50\t1.132\n+++ configure.in\t2001/08/23 15:18:30\n@@ -411,6 +411,21 @@\n \n \n #\n+# If Tcl is enabled (above) then check for pltcl_utf\n+#\n+AC_MSG_CHECKING([whether to build with PL/Tcl with UTF support])\n+if test \"$with_tcl\" = yes; then\n+ PGAC_ARG_BOOL(enable, pltcl-utf, no,\n+ [ --enable-pltcl-utf build PL/Tcl UTF support (if Tcl is enabled)],\n+ [AC_DEFINE([ENABLE_PLTCL_UTF])])\n+else\n+ enable_pltcl_utf=no\n+fi\n+AC_MSG_RESULT([$enable_pltcl_utf])\n+AC_SUBST([enable_pltcl_utf])\n+\n+\n+#\n # Optionally build Perl modules (Pg.pm and PL/Perl)\n #\n AC_MSG_CHECKING([whether to build Perl modules])\nIndex: doc/src/sgml/installation.sgml\n===================================================================\nRCS file: 
/junk/pgsql/repo/pgsql/doc/src/sgml/installation.sgml,v\nretrieving revision 1.50\ndiff -u -r1.50 installation.sgml\n--- doc/src/sgml/installation.sgml\t2001/06/02 18:25:16\t1.50\n+++ doc/src/sgml/installation.sgml\t2001/08/24 12:39:53\n@@ -674,6 +674,17 @@\n </varlistentry>\n \n <varlistentry>\n+ <term>--enable-pltcl-utf</term>\n+ <listitem>\n+ <para>\n+ Enables enables PL/Tcl Tcl_UtfToExternal and Tcl_ExternalToUtf\n+ conversion support. These functions needed for Tcl versions 8.1\n+ and above for proper handling of 8-bit characters.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry>\n <term>--enable-odbc</term>\n <listitem>\n <para>\nIndex: src/include/config.h.in\n===================================================================\nRCS file: /junk/pgsql/repo/pgsql/src/include/config.h.in,v\nretrieving revision 1.170\ndiff -u -r1.170 config.h.in\n--- src/include/config.h.in\t2001/08/01 23:52:50\t1.170\n+++ src/include/config.h.in\t2001/08/23 15:01:41\n@@ -84,6 +84,9 @@\n /* --enable-pltcl-unknown */\n #undef ENABLE_PLTCL_UNKNOWN\n \n+/* --enable-pltcl-utf */\n+#undef ENABLE_PLTCL_UTF\n+\n /* --enable-nls */\n #undef ENABLE_NLS\n \nIndex: src/pl/tcl/pltcl.c\n===================================================================\nRCS file: /junk/pgsql/repo/pgsql/src/pl/tcl/pltcl.c,v\nretrieving revision 1.38\ndiff -u -r1.38 pltcl.c\n--- src/pl/tcl/pltcl.c\t2001/08/02 15:45:55\t1.38\n+++ src/pl/tcl/pltcl.c\t2001/08/24 12:43:06\n@@ -59,6 +59,18 @@\n #include \"catalog/pg_language.h\"\n #include \"catalog/pg_type.h\"\n \n+#if defined(ENABLE_PLTCL_UTF) && TCL_MAJOR_VERSION == 8 \\\n+\t&& TCL_MINOR_VERSION > 0\n+#\tdefine UTF_BEGIN\tdo { Tcl_DString _pltcl_ds_tmp\n+# define UTF_END\t\tTcl_DStringFree(&_pltcl_ds_tmp); } while (0)\n+# define UTF_U2E(x)\t(Tcl_UtfToExternalDString(NULL,(x),-1,&_pltcl_ds_tmp))\n+#\tdefine UTF_E2U(x)\t(Tcl_ExternalToUtfDString(NULL,(x),-1,&_pltcl_ds_tmp))\n+#else /* ENABLE_PLTCL_UTF 
*/\n+#\tdefine\tUTF_BEGIN\n+#\tdefine\tUTF_END\n+#\tdefine\tUTF_U2E(x)\t(x)\n+#\tdefine\tUTF_E2U(x)\t(x)\n+#endif /* ENABLE_PLTCL_UTF */\n \n /**********************************************************************\n * The information we cache about loaded procedures\n@@ -333,7 +345,9 @@\n \t\t\t\t\t\t\tSPI_tuptable->tupdesc, fno);\n \t\tif (part != NULL)\n \t\t{\n-\t\t\tTcl_DStringAppend(&unknown_src, part, -1);\n+\t\t\tUTF_BEGIN;\n+\t\t\tTcl_DStringAppend(&unknown_src, UTF_E2U(part), -1);\n+\t\t\tUTF_END;\n \t\t\tpfree(part);\n \t\t}\n \t}\n@@ -613,7 +627,9 @@\n \t\t}\n \t\tproc_source = DatumGetCString(DirectFunctionCall1(textout,\n \t\t\t\t\t\t\t\t PointerGetDatum(&procStruct->prosrc)));\n-\t\tTcl_DStringAppend(&proc_internal_body, proc_source, -1);\n+\t\tUTF_BEGIN;\n+\t\tTcl_DStringAppend(&proc_internal_body, UTF_E2U(proc_source), -1);\n+\t\tUTF_END;\n \t\tpfree(proc_source);\n \t\tTcl_DStringAppendElement(&proc_internal_def,\n \t\t\t\t\t\t\t\t Tcl_DStringValue(&proc_internal_body));\n@@ -715,7 +731,9 @@\n \t\t\t\t\t\t\t\t\t\t\t\t\tfcinfo->arg[i],\n \t\t\t\t\t\t\t ObjectIdGetDatum(prodesc->arg_out_elem[i]),\n \t\t\t\t\t\t\t\tInt32GetDatum(prodesc->arg_out_len[i])));\n-\t\t\t\tTcl_DStringAppendElement(&tcl_cmd, tmp);\n+\t\t\t\tUTF_BEGIN;\n+\t\t\t\tTcl_DStringAppendElement(&tcl_cmd, UTF_E2U(tmp));\n+\t\t\t\tUTF_END;\n \t\t\t\tpfree(tmp);\n \t\t\t}\n \t\t}\n@@ -777,13 +795,15 @@\n \tif (SPI_finish() != SPI_OK_FINISH)\n \t\telog(ERROR, \"pltcl: SPI_finish() failed\");\n \n+\tUTF_BEGIN;\n \tif (fcinfo->isnull)\n \t\tretval = (Datum) 0;\n \telse\n \t\tretval = FunctionCall3(&prodesc->result_in_func,\n-\t\t\t\t\t\t\t PointerGetDatum(interp->result),\n+\t\t\t\t\t\t\t PointerGetDatum(UTF_U2E(interp->result)),\n \t\t\t\t\t\t\t ObjectIdGetDatum(prodesc->result_in_elem),\n \t\t\t\t\t\t\t Int32GetDatum(-1));\n+\tUTF_END;\n \n \t/************************************************************\n \t * Finally we may restore normal error handling.\n@@ -929,7 +949,9 @@\n \n 
\t\tproc_source = DatumGetCString(DirectFunctionCall1(textout,\n \t\t\t\t\t\t\t\t PointerGetDatum(&procStruct->prosrc)));\n-\t\tTcl_DStringAppend(&proc_internal_body, proc_source, -1);\n+\t\tUTF_BEGIN;\n+\t\tTcl_DStringAppend(&proc_internal_body, UTF_E2U(proc_source), -1);\n+\t\tUTF_END;\n \t\tpfree(proc_source);\n \t\tTcl_DStringAppendElement(&proc_internal_def,\n \t\t\t\t\t\t\t\t Tcl_DStringValue(&proc_internal_body));\n@@ -1230,11 +1252,13 @@\n \t\t ************************************************************/\n \t\tmodnulls[attnum - 1] = ' ';\n \t\tfmgr_info(typinput, &finfo);\n+\t\tUTF_BEGIN;\n \t\tmodvalues[attnum - 1] =\n \t\t\tFunctionCall3(&finfo,\n-\t\t\t\t\t\t CStringGetDatum(ret_values[i++]),\n+\t\t\t\t\t\t CStringGetDatum(UTF_U2E(ret_values[i++])),\n \t\t\t\t\t\t ObjectIdGetDatum(typelem),\n \t\t\t\t Int32GetDatum(tupdesc->attrs[attnum - 1]->atttypmod));\n+\t\tUTF_END;\n \t}\n \n \trettup = SPI_modifytuple(trigdata->tg_relation, rettup, tupdesc->natts,\n@@ -1558,7 +1582,9 @@\n \t/************************************************************\n \t * Execute the query and handle return codes\n \t ************************************************************/\n-\tspi_rc = SPI_exec(argv[query_idx], count);\n+\tUTF_BEGIN;\n+\tspi_rc = SPI_exec(UTF_U2E(argv[query_idx]), count);\n+\tUTF_END;\n \tmemcpy(&Warn_restart, &save_restart, sizeof(Warn_restart));\n \n \tswitch (spi_rc)\n@@ -1794,7 +1820,9 @@\n \t/************************************************************\n \t * Prepare the plan and check for errors\n \t ************************************************************/\n-\tplan = SPI_prepare(argv[1], nargs, qdesc->argtypes);\n+\tUTF_BEGIN;\n+\tplan = SPI_prepare(UTF_U2E(argv[1]), nargs, qdesc->argtypes);\n+\tUTF_END;\n \n \tif (plan == NULL)\n \t{\n@@ -2078,11 +2106,13 @@\n \t\t ************************************************************/\n \t\tfor (j = 0; j < callnargs; j++)\n \t\t{\n+\t\t\tUTF_BEGIN;\n \t\t\tqdesc->argvalues[j] =\n 
\t\t\t\tFunctionCall3(&qdesc->arginfuncs[j],\n-\t\t\t\t\t\t\t CStringGetDatum(callargs[j]),\n+\t\t\t\t\t\t\t CStringGetDatum(UTF_U2E(callargs[j])),\n \t\t\t\t\t\t\t ObjectIdGetDatum(qdesc->argtypelems[j]),\n \t\t\t\t\t\t\t Int32GetDatum(qdesc->arglen[j]));\n+\t\t\tUTF_END;\n \t\t}\n \n \t\t/************************************************************\n@@ -2377,7 +2407,9 @@\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t attr,\n \t\t\t\t\t\t\t\t\t\t\t ObjectIdGetDatum(typelem),\n \t\t\t\t\t\t\t Int32GetDatum(tupdesc->attrs[i]->attlen)));\n-\t\t\tTcl_SetVar2(interp, *arrptr, *nameptr, outputstr, 0);\n+\t\t\tUTF_BEGIN;\n+\t\t\tTcl_SetVar2(interp, *arrptr, *nameptr, UTF_E2U(outputstr), 0);\n+\t\t\tUTF_END;\n \t\t\tpfree(outputstr);\n \t\t}\n \t\telse\n@@ -2448,7 +2480,9 @@\n \t\t\t\t\t\t\t\t\t\t\t ObjectIdGetDatum(typelem),\n \t\t\t\t\t\t\t Int32GetDatum(tupdesc->attrs[i]->attlen)));\n \t\t\tTcl_DStringAppendElement(retval, attname);\n-\t\t\tTcl_DStringAppendElement(retval, outputstr);\n+\t\t\tUTF_BEGIN;\n+\t\t\tTcl_DStringAppendElement(retval, UTF_E2U(outputstr));\n+\t\t\tUTF_END;\n \t\t\tpfree(outputstr);\n \t\t}\n \t}",
"msg_date": "Wed, 5 Sep 2001 12:52:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
},
{
"msg_contents": "Hi Bruce,\n\nOn Wed, 5 Sep 2001, Bruce Momjian wrote:\n\n> I have a patch here that handles all the TCL/UTF issues.\n> Would you let me know if it is OK?\n\nI think, it isn't really a clean fix. It only works, if your\ndatabase's encoding and Tcl's system encoding are identical. If the\ndatabase uses a different encoding than Tcl, you still end up with\nwrong characters. Also, the configure switch (if needed at all)\nshould IMHO be a disable switch, because the conversion is mandatory\nfor Tcl >= 8.1 unless someone really knows that he won't have any\n8-Bit characters in his database. So less people would get bitten, if\nUTF conversion was enabled by default for Tcl >= 8.1 .\n\nBesides these flaws, I think the patch could be simpler and avoid the\nUTF_BEGIN and UTF_END macros if UTF_U2E and UTF_E2U were (maybe\ninlined) functions and defined like this (untested):\n\nchar* UTF_U2E(CONST char * source)\n{\n\tstatic Tcl_DString *destPtr = NULL;\n\n\tif (destPtr == NULL) {\n\t\tdestPtr = (Tcl_DString *) malloc(sizeof(Tcl_DString));\n\t} else {\n\t\tTcl_DStringFree(destPtr);\n\t}\n\treturn Tcl_UtfToExternalDString(NULL, source, -1, destPtr);\n}\n\n\nSee also the mail, I sent to pgsql-patches last Tuesday on the same\ntopic.\n\nIn addition to my suggestion there to require the database to be\nUNICODE for Tcl >= 8.1, I just had another Idea how it could be solved\ncleanly:\n\nWhat about making --enable-unicode-conversion mandatory when\nPostgreSQL gets compiled with Tcl support and changing PgTcl and\nPL/Tcl to set their client encoding to UNICODE at _runtime_ when they\nfind themselfes running with a Tcl interpreter that needs UTF-8 (i.e.\nTcl >= 8.1)?\n\nGoing this way, we could even retain binary compatibility for PgTcl\nand PL/Tcl with Tcl versions prior and after Tcl's move to UTF-8.\n\nOne Question remains here: Do --enable-multibyte and\n--enable-unicode-conversion have any downsides (besides a larger\nexecutable), if they are compiled in, but not 
used?\n\ncu\n\tReinhard\n\n",
"msg_date": "Thu, 6 Sep 2001 11:43:12 +0200 (CEST)",
"msg_from": "Reinhard Max <max@suse.de>",
"msg_from_op": false,
"msg_subject": "Re: libpgtcl doesn't use UTF encoding of TCL"
}
] |
[
{
"msg_contents": "Hi,\nI use the libraries of function of Postgres in a\nprogram.\nIn this script, I keep connected with the postmaster\nand I submit him a lot of queries without\ndisconnecting each time. At the end of each queries, I\nuse PQclear to clean memory but I notice that the\nmemory used by the process postgres is always\nincreasing until I disconnect.\nAny idea ?\nThanks for your help\n\n=====\n\n\n___________________________________________________________\nDo You Yahoo!? -- Vos albums photos en ligne, \nYahoo! Photos : http://fr.photos.yahoo.com\n",
"msg_date": "Wed, 18 Jul 2001 10:25:34 +0200 (CEST)",
"msg_from": "=?iso-8859-1?q?jerome=20crouigneau?= <jerome_crouigneau@yahoo.fr>",
"msg_from_op": true,
"msg_subject": "Memory management"
}
] |
[
{
"msg_contents": "Hello,\n\nI have a problem with one sql request. I got this error message :\n\nWarning: PostgreSQL query failed: ERROR: SELECT DISTINCT ON expressions\nmust match initial ORDER BY expressions in\n/export/castor-b7/local-home/kelbertj/Prog/web/lumiere/admin/recherche_realisateurs.php\non line 85 ERROR: SELECT DISTINCT ON expressions must match initial\nORDER BY expressions SELECT DISTINCT ON (people_id)\npeople_id,people_lastname,people_firstname from people where\nlower(people_firstname) ~* (SELECT text_accents('\\\\\\\"Luc\\\\$')) order by\npeople_lastname ASC limit 40 offset 0\n\nI didn't find any solution to this problem ! If you have any idea I'll\nbe most grateful if you could answer !\n\nThanks\n\n--\nJean-Michel KELBERT\n",
"msg_date": "Wed, 18 Jul 2001 10:40:57 +0200",
"msg_from": "Kelbert <jean-michel@club-internet.fr>",
"msg_from_op": true,
"msg_subject": "ERROR: SELECT DISTINCT ON with postgresql v 7.1.2"
},
{
"msg_contents": "On Wed, 18 Jul 2001, Kelbert wrote:\n\n> Hello,\n> \n> I have a problem white one sql request. I got this error message :\n> \n> Warning: PostgreSQL query failed: ERROR: SELECT DISTINCT ON expressions\n> must match initial ORDER BY expressions in\n> /export/castor-b7/local-home/kelbertj/Prog/web/lumiere/admin/recherche_realisateurs.php\n> on line 85 ERROR: SELECT DISTINCT ON expressions must match initial\n> ORDER BY expressions SELECT DISTINCT ON (people_id)\n> people_id,people_lastname,people_firstname from people where\n> lower(people_firstname) ~* (SELECT text_accents('\\\\\\\"Luc\\\\$')) order by\n> people_lastname ASC limit 40 offset 0\n> \n> I didn't find any solution to this problem ! If you have any idea I'll\n> be most gratefull If you could answer !\n\nFirst a warning. The query you've written is potential non-deterministic\nif you have a people_id that has multiple rows with different last names\nthat meet the where clause. This is why the query was rejected in the\nfirst place. The ordering that the rows got chosen (semi-random) would\ndetermine which last name was used and could change the output.\n\nIf you *really* want to do this, you can probably put the select distinct\non in a subquery (basically untested, so there might be some syntax\nerrors)...\nselect people_id, people_lastname, people_firstname from \n ( select distinct on (people_id) people_id, people_lastname, \n people_firstname from people where lower(people_firstname) ~*\n (Select text_accents('\\\\\\\"Luc\\\\$')) ) as peop\n order by people_lastname asc limit 40 offset 0;\n\n",
"msg_date": "Wed, 18 Jul 2001 11:06:56 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: SELECT DISTINCT ON with postgresql v 7.1.2"
}
] |
[
{
"msg_contents": "\n> > > : Most Unix filesystems will not allocate disk blocks until you write in\n> > > : them. [...]\n> > >\n> > > Yes, I understand that, but how is it a problem for postgresql?\n> >\n> > Uh, I thought we did that so we were not allocating file system blocks\n> > during WAL writes. Performance is bad when we do that.\n> \n> Performance isn't the question.\n\niirc, at the time, performance was also a question, at least on some of the \nplatforms that were tested.\n\n> The problem is when you get a\n> \"disk full\" just in the middle of the need to write important\n> WAL information. While preallocation of a new WAL file, it's\n> OK and controlled, but there are more delicate portions of\n> the code.\n\nOf course there should not be, since the write to the WAL is the first IO \n:-) Imho all modifying activity could be blocked, until disk space is made \navailable by the admin. Could you enlighten us on what the delicate portions \nare (other than when running in no fsync mode) ?\n\nAndreas\n",
"msg_date": "Wed, 18 Jul 2001 10:42:04 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: Re: Idea: recycle WAL segments, don't delete/recrea\n\tte 'emm"
}
] |
[
{
"msg_contents": "Hello all,\n\n\n Writing my interface application, which use the PQexec library, I\ncame across the PQexec() queries 8191 bytes limit.\n What useful are 4Gb text fields if I have this limit ?\n I mean, if a user make an update to this field, with a large value\n(let's say, 4Mb), do I have to call PQexec multiple (more then 500) times,\nconcatenating the strings each time I call it ??? Can't this be better\nimplemented ? This is too slow, and generates much more traffic then I ever\nwish.\n This problem also plagues the large objects API, since they're only\na wrapper to the built-in large objects API.\n Does anyone have a better way of doing this ?\n\nBest Regards,\nSteve Howe\nhttp://www.vitavoom.com\n\n\n",
"msg_date": "Wed, 18 Jul 2001 05:51:09 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "PQexec() 8191 bytes limit and text fields"
},
{
"msg_contents": " First, are you using the latest PG? I was under the impression that all\nthe hard-coded limitations on size had been eliminated in the latest\nreleases. I know for an absolute fact that I can insert multi-megabyte sized\ntext chunks in PG 7.1.2 as I've done just that before...\n\n Good luck!\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Steve Howe\" <howe@carcass.dhs.org>\nTo: <pgsql-hackers@postgresql.org>\nSent: Wednesday, July 18, 2001 4:51 AM\nSubject: [HACKERS] PQexec() 8191 bytes limit and text fields\n\n\n> Hello all,\n>\n>\n> Writing my interface application, which use the PQexec library, I\n> came across the PQexec() queries 8191 bytes limit.\n> What useful are 4Gb text fields if I have this limit ?\n> I mean, if a user make an update to this field, with a large value\n> (let's say, 4Mb), do I have to call PQexec multiple (more then 500) times,\n> concatenating the strings each time I call it ??? Can't this be better\n> implemented ? This is too slow, and generates much more traffic then I\never\n> wish.\n> This problem also plagues the large objects API, since they're\nonly\n> a wrapper to the built-in large objects API.\n> Does anyone have a better way of doing this ?\n>\n> Best Regards,\n> Steve Howe\n> http://www.vitavoom.com\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://www.postgresql.org/search.mpl\n>\n\n",
"msg_date": "Wed, 18 Jul 2001 11:06:12 -0400",
"msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>",
"msg_from_op": false,
"msg_subject": "Re: PQexec() 8191 bytes limit and text fields"
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n> Writing my interface application, which use the PQexec library, I\n> came across the PQexec() queries 8191 bytes limit.\n\nYou must have a very out-of-date library. Time to update.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 12:14:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PQexec() 8191 bytes limit and text fields "
},
{
"msg_contents": "Hi...\n\n The problem is, I compiled it myself from original PostgreSQL\nversion 7.12 C sources using Microsoft's Visual C++ 6.0. I had to compile it\nbecause I add a function to free the handlers returned from PQnotifies(), or\nI would have a memory leak.\n The resulting libpq.dll seems ok in everything but this issue...\n I guess I'll do it again, after checking the sources :)\n Other people reported me they send large queries with no problems,\nso I guess it should really be a problem of mine...\n\nBest Regards,\nSteve Howe\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Steve Howe\" <howe@carcass.dhs.org>\nCc: <pgsql-hackers@postgresql.org>\nSent: Wednesday, July 18, 2001 1:14 PM\nSubject: Re: [HACKERS] PQexec() 8191 bytes limit and text fields\n\n\n> \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > Writing my interface application, which use the PQexec library,\nI\n> > came across the PQexec() queries 8191 bytes limit.\n>\n> You must have a very out-of-date library. Time to update.\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Wed, 18 Jul 2001 14:30:56 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: PQexec() 8191 bytes limit and text fields "
},
{
"msg_contents": "Hi Steve, lets approach this from the other angle...\n\nI don't see anywhere in your email where you say what makes you think that\nyou can only pass a query > 8191 bytes in size to PG. What exactly makes you\nthink that there is some hard coded limit? This limit is not in 7.1.2 so\neither you have outdated source code or the problem is somewhere else..\n\nGood luck!\n\n-Mitch\n\n\n----- Original Message -----\nFrom: \"Steve Howe\" <howe@carcass.dhs.org>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: <pgsql-hackers@postgresql.org>\nSent: Wednesday, July 18, 2001 1:30 PM\nSubject: Re: [HACKERS] PQexec() 8191 bytes limit and text fields\n\n\n> Hi...\n>\n> The problem is, I compiled it myself from original PostgreSQL\n> version 7.12 C sources using Microsoft's Visual C++ 6.0. I had to compile\nit\n> because I add a function to free the handlers returned from PQnotifies(),\nor\n> I would have a memory leak.\n> The resulting libpq.dll seems ok in everything but this issue...\n> I guess I'll do it again, after checking the sources :)\n> Other people reported me they send large queries with no problems,\n> so I guess it should really be a problem of mine...\n>\n> Best Regards,\n> Steve Howe\n>\n> ----- Original Message -----\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> To: \"Steve Howe\" <howe@carcass.dhs.org>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Wednesday, July 18, 2001 1:14 PM\n> Subject: Re: [HACKERS] PQexec() 8191 bytes limit and text fields\n>\n>\n> > \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > > Writing my interface application, which use the PQexec\nlibrary,\n> I\n> > > came across the PQexec() queries 8191 bytes limit.\n> >\n> > You must have a very out-of-date library. Time to update.\n> >\n> > regards, tom lane\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Wed, 18 Jul 2001 17:32:41 -0400",
"msg_from": "\"Mitch Vincent\" <mvincent@cablespeed.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS - GENERAL] PQexec() 8191 bytes limit and text fields "
}
] |
[
{
"msg_contents": "\nHow do I define a function as taking a variable number of parameters. The\ndocumentation seems to indicate (...) but, no such luck.\n\nmarkw=# create function concat( ... )\nmarkw-# returns varchar\nmarkw-# as '/usr/local/lib/pgcontains.so', 'concat'\nmarkw-# language 'c' with (iscachable);\nERROR: parser: parse error at or near \".\"\n",
"msg_date": "Wed, 18 Jul 2001 08:52:54 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "C functions, variable number of params?"
}
] |
[
{
"msg_contents": "pg_dump/pg_dumpall fail with the following messages (verbose output\nselected):\n\n...\n-- dumping out user-defined procedural languages\n-- dumping out user-defined functions\nfailed sanity check, type with oid 59770787 was not found\n\nOS: Linux kernel 2.2.16\nPostgreSQL v. 7.0.3\n\nI've seen similar posts and suggested resolutions to the problems (which\nI haven't been able to use), and I think this is a bug that's been put\non a TODO list.\n\nSo, two questions:\n\n1) Is a bugfix for pg_dump/pg_dumpall available, perhaps in the 7.1\nversion?\n2) If the bugfix is available in 7.1, can I safely migrate data from\n7.0.3 to 7.1, if i use the COPY command to export and import tables.\n\nThanx.\n",
"msg_date": "Wed, 18 Jul 2001 13:43:56 -0400",
"msg_from": "Mark R DeLong <mdelong@cgt.mc.duke.edu>",
"msg_from_op": true,
"msg_subject": "pg_dump(all) fails with \"failed sanity check, type with oid ...\""
}
] |
[
{
"msg_contents": "\nFor the record:\n\n http://www.lineone.net/dictionaryof/englishusage/d0081889.html\n\ndependent or dependant\n\n \"Dependent is the adjective, used for a person or thing that depends\n on someone or something: Admission to college is dependent on A-level\n results. Dependant is the noun, and is a person who relies on someone\n for financial support: Do you have any dependants?\"\n\nThis is not for mailing-list pedantry, but just to make sure \nthat the right spelling gets into the code. (The page mentioned \nabove was found by entering \"dependent dependant\" into Google.)\n\nNathan Myers\nncm@zembu.com",
"msg_date": "Wed, 18 Jul 2001 12:09:04 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": true,
"msg_subject": "dependent dependants"
},
{
"msg_contents": "[ way off topic, but I can't resist ]\n\nncm@zembu.com (Nathan Myers) writes:\n> For the record:\n> http://www.lineone.net/dictionaryof/englishusage/d0081889.html\n\n> dependent or dependant\n\n> \"Dependent is the adjective, used for a person or thing that depends\n> on someone or something: Admission to college is dependent on A-level\n> results. Dependant is the noun, and is a person who relies on someone\n> for financial support: Do you have any dependants?\"\n\nIn order of increasing heft, my dictionaries have:\n\nWebster's New Collegiate: no entry for \"dependant\" at all.\n\nRandom House: \"dependant\" is defined with a one-word entry: \"dependent\",\nfor both noun and adjective.\n\nOED: entries for both \"dependant\" and \"dependent\", but it says \"now\nusually spelt [dependent]\". Apparently the spellings were once more-\nor-less interchangeable.\n\nNot being an eighteenth-century person, to me \"dependant\" looks just\nplain wrong. I'd never spell it that way, for either noun or adjective.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 18:06:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: dependent dependants "
}
] |
[
{
"msg_contents": "> If you want to make oids optional on user tables,\n> we can vote on that.\n\nLet's vote. I'm proposing optional oids for 2-3 years,\nso you know how I'll vote -:)\n\n> However, OID's keep our system tables together.\n\nHow?! If we want to find function with oid X we query\npg_proc, if we want to find table with oid Y we query\npg_class - we always use oids in context of \"class\"\nto what an object belongs. This means that two tuples\nfrom different system tables could have same oid values\nand everything would work perfectly.\n\nThere is no magic around OIDs.\n\nVadim\n",
"msg_date": "Wed, 18 Jul 2001 14:06:15 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "> > If you want to make oids optional on user tables,\n> > we can vote on that.\n> \n> Let's vote. I'm proposing optional oids for 2-3 years,\n> so you know how I'll vote -:)\n\nOK, we need to vote on whether Oid's are optional, and whether we can\nhave them not created by default.\n\n> \n> > However, OID's keep our system tables together.\n> \n> How?! If we want to find function with oid X we query\n> pg_proc, if we want to find table with oid Y we query\n> pg_class - we always use oids in context of \"class\"\n> to what an object belongs. This means that two tuples\n> from different system tables could have same oid values\n> and everything would work perfectly.\n\nI meant we use them in many cases to link entries, and in pg_description\nfor descriptions and lots of other things that may use them in the\nfuture for system table use.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 17:09:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "[trimmed cc:list]\nOn Wednesday 18 July 2001 17:09, Bruce Momjian wrote:\n> OK, we need to vote on whether Oid's are optional, and whether we can\n> have them not created by default.\n\n[All the below IMHO]\n\nOID's should be optional.\n\nSystem tables that absolutely have to have OIDs may keep them.\n\nNo new OID usage, period. Use some other unique primary key.\n\nDefault user tables to no OIDs. \n\nDocument other means by which rows that are otherwise identical can be made \nunique, for the purpose of expunging duplicates (ctids or whatever is \nappropriate).\n\nAllow a SET DEFAULT CREATE OIDS style option for those who just _must_ have \nOIDS -- and integrate with GUC. Document that OID wrap can occur, and that \nit can cause Bad Things to happen.\n\nAllow a CREATE TABLE WITH OIDS to supplement the above option setting.\n\nNow for a question: OID creation seems to be a low-overhead task. Is the \ncreation of SERIAL PRIMARY KEY values as efficient? Or will we be shooting \nourselves in the performance foot if frequently-accessed system tables go \nfrom OID usage to SERIAL PRIMARY KEY usage?\n\n> I meant we use them in many cases to link entries, and in pg_description\n> for descriptions and lots of other things that may use them in the\n> future for system table use.\n\nIf I may be so bold: we discourage users from using OIDs as a SERIAL PRIMARY \nKEY, yet the system does it en masse.\n\nI say all that knowing full well that I am using OIDs in my own \napplications.... :-) I guess I'll just need to switch to proper SERIALs and \nPRIMARY KEYs. Of course, if I wanted to be stubborn, I'd just use the GUC \noption to enable OIDs system-wide by default....\n\nHowever, the utility of INSERT returning a unique identifier to the inserted \nrow needs to be addressed -- I would prefer it return the defined PRIMARY KEY \nvalue for the tuple just inserted, if a PRIMARY KEY is defined. 
If no \nPRIMARY KEY is defined, return a unique identifier (even a temporary one like \nthe ctid) so that I have that information for use later in the application. \nThe utility of that feature should not be underestimated.\n\nSuch a return value would of course have to be returned as a tuple with all \nthe necessary metadata to process the return value -- this is probably not a \ntrivial change.\n\nOf course, I may be missing some essential usage of OID's.... and I reserve \nthe right to be wrong.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 18 Jul 2001 18:10:36 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > If you want to make oids optional on user tables,\n> > > we can vote on that.\n> >\n> > Let's vote. I'm proposing optional oids for 2-3 years,\n> > so you know how I'll vote -:)\n> \n> OK, we need to vote on whether Oid's are optional, and whether we can\n> have them not created by default.\n> \n\nI don't love current OIDs. However they have lived in PostgreSQL's\nworld too long and few people have pointed out that there's no magic\naround OIDs. I agree to change OIDs to be per class but strongly\nobject to let OIDs optional.\n\nIt's a big pain for generic applications to lose OIDs.\nIn fact I'm implementing updatable cursors in ODBC using\nOIDs and Tids.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 19 Jul 2001 08:48:20 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Now for a question: OID creation seems to be a low-overhead task. Is the \n> creation of SERIAL PRIMARY KEY values as efficient? Or will we be shooting \n> ourselves in the performance foot if frequently-accessed system tables go \n> from OID usage to SERIAL PRIMARY KEY usage?\n\nYes, nowhere near, and yes. Sequence objects require disk I/O to\nupdate; the OID counter essentially lives in shared memory, and can\nbe bumped for the price of a spinlock access.\n\nI don't think we should discourage use of OIDs quite as vigorously\nas you propose ;-). All I want is to not expend OIDs on things that\nhave no need for one. That, together with clarifying exactly how\nunique OIDs should be expected to be, seems to me that it will solve\n99% of the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 19:49:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I don't love current OIDs. However they have lived in PostgreSQL's\n> world too long and few people have pointed out that there's no magic\n> around OIDs. I agree to change OIDs to be per class but strongly\n> object to let OIDs optional.\n\nUh ... what? I don't follow what you are proposing here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 20:14:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I don't love current OIDs. However they have lived in PostgreSQL's\n> > world too long and few people have pointed out that there's no magic\n> > around OIDs. I agree to change OIDs to be per class but strongly\n> > object to let OIDs optional.\n> \n> Uh ... what? I don't follow what you are proposing here.\n> \n\nI couldn't think of the cases that we need database-wide\nuniqueness. So the uniqueness of OIDs could be only within\na table. But I object to the option that tables could have\nno OIDs.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 19 Jul 2001 09:28:04 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "At 06:10 PM 18-07-2001 -0400, Lamar Owen wrote:\n>applications.... :-) I guess I'll just need to switch to proper SERIALs and \n>PRIMARY KEYs. Of course, if I wanted to be stubborn, I'd just use the GUC \n>option to enable OIDs system-wide by default....\n\nThe default 32 bit serial primary key isn't immune to roll overs either.\n\nI doubt it'll affect my stuff, but it'll affect others.\n\nOnce you talk about storing petabytes or terabytes of data, 32 bits might\nnot be enough.\n\nCheerio,\nLink.\n\n",
"msg_date": "Thu, 19 Jul 2001 11:23:06 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "I wrote:\n> \n> Tom Lane wrote:\n> >\n> > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > > I don't love current OIDs. However they have lived in PostgreSQL's\n> > > world too long and few people have pointed out that there's no magic\n> > > around OIDs. I agree to change OIDs to be per class but strongly\n> > > object to let OIDs optional.\n> >\n> > Uh ... what? I don't follow what you are proposing here.\n> >\n> \n> I couldn't think of the cases that we need database-wide\n> uniqueness. So the uniqueness of OIDs could be only within\n> a table. But I object to the option that tables could have\n> no OIDs.\n> \n\nIt seems that I'm the only one who objects to optional OIDs\nas usual:-).\nIMHO OIDs are not for system but for users.\nOIDs have lived in PostgreSQL world from the first(???).\nIsn't it sufficiently long for users to believe that OIDs\nare unique (at least per table) ?\nAs I mentioned already I'm implementing updatable cursors\nin ODBC and have half done it. If OIDs would be optional\nmy trial loses its validity but I would never try another\nimplementation.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 19 Jul 2001 12:54:30 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "On Thursday 19 July 2001 06:08, you wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> I think it should be off on user tables by default, but kept on system\n> tables just for completeness. It could be added at table creation time\n> or from ALTER TABLE ADD. It seems we just use them too much for system\n> stuff. pg_description is just one example.\n\nand what difference should it make, to have a few extra hundred or thousand \nOIDs used by system tables, when I insert daily some ten thousand records \neach using an OID for itself?\n\nWhy not make OIDs 64 bit? Might slow down a little on legacy hardware, but in \na couple of years we'll all run 64 bit hardware anyway.\n\nI believe that just using 64 bit would require the least changes to Postgres. \nNow, why would that look that obvious to me and yet I saw no mentioning of \nthis in the recent postings. Surely it has been discussed before, so which is \nthe point I miss or don't understand?\n\nI would need 64 bit sequences anyway, as it is predictable that our table for \npathology results will run out of unique IDs in a couple of years.\n\nHorst \n",
"msg_date": "Thu, 19 Jul 2001 13:55:53 +1000",
"msg_from": "Horst Herb <hherb@malleenet.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> However, the utility of INSERT returning a unique identifier to the\n> inserted row needs to be addressed -- I would prefer it return the\n> defined PRIMARY KEY value for the tuple just inserted, if a PRIMARY\n> KEY is defined. If no PRIMARY KEY is defined, return a unique\n> identifier (even a temporary one like the ctid) so that I have that\n> information for use later in the application. The utility of that\n> feature should not be underestimated.\n\nThat's something that needs to be thought about, all right. I kinda\nlike the idea of returning the ctid, because it is (a) very low\noverhead, which is nice for something that the client may not actually\nneed, and (b) the tuple can be retrieved *very* quickly given a tid,\nmuch more so than was possible with OID. OTOH, if you want to use a\ntid you'd best use it right away, before someone else can update the\nrow...\n\nThe major problem with any change away from returning OID is that it'll\nbreak client libraries and apps. How much pain do we want to cause\nourselves in that line?\n\nCertainly, to return anything besides/instead of OID we'd have to change\nthe FE/BE protocol. IIRC, there are a number of other things pending\nthat require protocol changes, so gathering them all together and\nupdating the protocol isn't necessarily a bad thing. But I don't think\nwe have time for it in the 7.2 cycle, unless we slip the schedule past\nthe beta-by-end-of-August that I believe we're shooting for.\n\nAnother possibility, given that any app using a feature like this is\nnonportable anyway, is to extend the INSERT statement along the lines\nthat someone (maybe Larry R? I forget now) proposed before:\n\n\tINSERT INTO foo ... RETURNING x,y,z,...\n\nwhere x,y,z, etc are expressions in the variables of the inserted\ntuple(s). This could be made to look like a SELECT at the protocol\nlevel, which would mean that it wouldn't break client libraries or\nrequire a protocol bump, and it's *way* more flexible than any\nhardwired decision about what columns to return. It wouldn't have\nany problem with multiple tuples inserted by an INSERT ... SELECT,\neither.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 00:00:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> As I mentioned already I'm implementing updatable cursors\n> in ODBC and have half done it. If OIDs would be optional\n> my trial loses its validity but I would never try another\n> implementation.\n\nCould you use CTID instead of OID?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 00:08:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "On Thursday 19 July 2001 12:00 am, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > However, the utility of INSERT returning a unique identifier to the\n> > inserted row needs to be addressed -- I would prefer it return the\n\n> Another possibility, given that any app using a feature like this is\n> nonportable anyway, is to extend the INSERT statement along the lines\n> that someone (maybe Larry R? I forget now) proposed before:\n\n> \tINSERT INTO foo ... RETURNING x,y,z,...\n\n> where x,y,z, etc are expressions in the variables of the inserted\n\nI like this one.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 19 Jul 2001 00:09:02 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "On Wednesday 18 July 2001 07:49 pm, Tom Lane wrote:\n> I don't think we should discourage use of OIDs quite as vigorously\n> as you propose ;-).\n\nJust playing devil's advocate. As I said, I am one who is using OID's in a \nclient now.... but who is willing to forgo that feature for large-system \nstability.\n\n> All I want is to not expend OIDs on things that\n> have no need for one. That, together with clarifying exactly how\n> unique OIDs should be expected to be, seems to me that it will solve\n> 99% of the problem.\n\n99% solved for 1% effort... The other 1% would take alot more effort.\n\nI think you're barking up the right tree, as usual, Tom.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 19 Jul 2001 00:11:54 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Tom Lane wrote:\n\n >Lamar Owen <lamar.owen@wgcr.org> writes:\n >\n >><snip>\n >>\n >\n ><snip>\n >\n >Another possibility, given that any app using a feature like this is\n >nonportable anyway, is to extend the INSERT statement along the lines\n >that someone (maybe Larry R? I forget now) proposed before:\n >\n >\tINSERT INTO foo ... RETURNING x,y,z,...\n >\n >where x,y,z, etc are expressions in the variables of the inserted\n >tuple(s). This could be made to look like a SELECT at the protocol\n >level, which would mean that it wouldn't break client libraries or\n >require a protocol bump, and it's *way* more flexible than any\n >hardwired decision about what columns to return. It wouldn't have\n >any problem with multiple tuples inserted by an INSERT ... SELECT,\n >either.\n >\n\nThis would be a good thing (tm). I use Oracle quite extensively as well\nas PG and Oracle's method of \"RETURNING :avalue\" is very good for\nreturning values from newly inserted rows.\n\nThere was some talk a while back about [not?] implementing variable\nbinding. This seems to become very closely related to that. It would \nseem to solve the problem of having a unique identifier returned for \ninserts. I'm sure it would please quite a few people in the process, \nespecially ones moving across from Oracle. (kill two birds with one stone)\n\n >\n > \n\t\tregards, tom lane\n >\n\nAshley Cambrell\n\n\n\n",
"msg_date": "Thu, 19 Jul 2001 15:48:43 +1000",
"msg_from": "Ashley Cambrell <ash@freaky-namuh.com>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "At 00:00 19/07/01 -0400, Tom Lane wrote:\n>that someone (maybe Larry R? I forget now) proposed before:\n>\n>\tINSERT INTO foo ... RETURNING x,y,z,...\n>\n\nThat would have been me; at the time we also talked about\nUPDATE...RETURNING and Jan proposed allowing UPDATE...RETURNING\n{[Old.|New.]Attr,...}\n\nNeedless to say, I'd love to see it implemented.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 19 Jul 2001 16:20:45 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > As I mentioned already I'm implementing updatable cursors\n> > in ODBC and have half done it. If OIDs would be optional\n> > my trial loses its validity but I would never try another\n> > implementation.\n> \n> Could you use CTID instead of OID?\n> \n\nI am using both.\nTIDs for fast access and OIDs for identification.\nUnfortunately TIDs are transient and they aren't\nthat reliable as for identification. But the\ntransience of TIDs are useful for row-versioning\nfortunately. The combination of OID and TID has\nbeen my plan since I introduced Tid scan.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 19 Jul 2001 15:36:20 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Could you use CTID instead of OID?\n\n> I am using both.\n> TIDs for fast access and OIDs for identification.\n> Unfortunately TIDs are transient and they aren't\n> that reliable as for identification.\n\nHmm ... within a transaction I think they'd be reliable enough,\nbut for long-term ID I agree they're not. What behavior do you\nneed exactly; do you need to be able to find the updated version\nof a row you originally inserted? What would it take to use a\nuser-defined primary key instead of OID?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 10:16:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 00:00 19/07/01 -0400, Tom Lane wrote:\n>> INSERT INTO foo ... RETURNING x,y,z,...\n\n> That would have been me; at the time we also talked about\n> UPDATE...RETURNING and Jan proposed allowing UPDATE...RETURNING\n> {[Old.|New.]Attr,...}\n\nHm. I'm less excited about UPDATE ... RETURNING since it would seem\nthat SELECT FOR UPDATE followed by UPDATE would get that job done\nin a somewhat-less-nonstandard manner. But anyway ---\n\nThinking about this some more, it seems that it's straightforward enough\nfor a plain INSERT, but I don't understand what's supposed to happen if\nthe INSERT is replaced by an ON INSERT DO INSTEAD rule. The rule might\nnot contain an INSERT at all, or it might contain several INSERTs into\nvarious tables with no simple relationship to the original. What then?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 13:19:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Could you use CTID instead of OID?\n> \n> > I am using both.\n> > TIDs for fast access and OIDs for identification.\n> > Unfortunately TIDs are transient and they aren't\n> > that reliable as for identification.\n> \n> Hmm ... within a transaction I think they'd be reliable enough,\n> but for long-term ID I agree they're not. What behavior do you\n> need exactly;do you need to be able to find the updated version\n> of a row you originally inserted? \n\nWhat I was about to do in the case e.g. UPDATE is the following.\n\n1) UPDATE .. set .. where CTID = saved_ctid and OID = saved_oid;\n If one row was updated it's OK and return.\n2) Otherwise something has changed and the update operation would\n fail. However the driver has to try to find the updated\n version of the row in case of keyset-driven cursors by the query\n SELECT CTID, .. from .. where CTID = \n currtid2(table_name, saved_ctid) and OID = saved_oid;\n If a row was found, the content of cursors' buffer is \n replaced and return.\n3) If no row was found, the row may be deleted. Or we could\n issue another query\n SELECT CTID, .. from .. where OID = saved_oid;\n though the performance is doubtful.\n\nThe OIDs are (mainly) to prevent updating the wrong records.\n\n> What would it take to use a\n> user-defined primary key instead of OID?\n\nYes it could be. In fact M$ provides the ODBC cursor library\nin that way and we have used it(indirectly) for a long time.\nIt's the reason why ODBC users don't complain about the non-existence\nof updatable cursors that often. Must I repeat the implementation ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 20 Jul 2001 09:24:09 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "lamar.owen@wgcr.org (Lamar Owen) wrote in message news:<01071818103609.00973@lowen.wgcr.org>...\n\n> [trimmed cc:list]\n> On Wednesday 18 July 2001 17:09, Bruce Momjian wrote:\n> > OK, we need to vote on whether Oid's are optional, and whether we can\n> > have them not created by default.\n>\n> [All the below IMHO]\n>\n> OID's should be optional.\n\nyep. we don't depend upon oids > 32 bits. that's pretty standard\npractice for serious db apps. however, tx limit is a real problem.\n\nmy vote is for solving the tx limit before changing the oid problem.\n",
"msg_date": "19 Jul 2001 20:34:03 -0700",
"msg_from": "jmscott@yahoo.com (jmscott@REMOVEMEyahoo.com)",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
}
] |
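The TID+OID cursor-update scheme Hiroshi describes in this thread (try the fast path on the saved CTID, then re-find the moved row by OID) can be sketched with an in-memory model. This is a hypothetical Python illustration only, not the ODBC driver code; the `Table` class, `cursor_update`, and the oid/ctid values are all made up for the sketch:

```python
# Hypothetical in-memory model of the TID+OID scheme described above:
# a TID (here "ctid") is a transient row location that changes on every
# UPDATE, while the OID identifies the row for its lifetime.

class Table:
    def __init__(self):
        self.rows = {}        # ctid -> (oid, value)
        self.next_ctid = 0

    def insert(self, oid, value):
        ctid = self.next_ctid
        self.next_ctid += 1
        self.rows[ctid] = (oid, value)
        return ctid

    def update(self, ctid, value):
        # An UPDATE gives the row a new ctid; the oid is preserved.
        oid, _ = self.rows.pop(ctid)
        return self.insert(oid, value)

def cursor_update(table, saved_ctid, saved_oid, new_value):
    # Step 1: fast path -- the row is still where the cursor left it,
    # and the oid check guards against updating the wrong record.
    row = table.rows.get(saved_ctid)
    if row is not None and row[0] == saved_oid:
        return table.update(saved_ctid, new_value)
    # Steps 2/3: the row moved (or is gone); re-find it by oid.
    for ctid, (oid, _) in list(table.rows.items()):
        if oid == saved_oid:
            return table.update(ctid, new_value)
    return None  # the row was deleted

t = Table()
ctid = t.insert(oid=500, value="a")
t.update(ctid, "b")                   # concurrent update: the ctid changes
assert cursor_update(t, ctid, 500, "c") is not None  # slow path re-finds it
```

The oid test in step 1 is what makes the stale ctid safe to try: a recycled slot with a different row in it fails the check and falls through to the re-find path.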
[
{
"msg_contents": "> OK, we need to vote on whether Oid's are optional,\n> and whether we can have them not created by default.\n\nOptional OIDs: YES\nNo OIDs by default: YES\n\n> > > However, OID's keep our system tables together.\n> > \n> > How?! If we want to find function with oid X we query\n> > pg_proc, if we want to find table with oid Y we query\n> > pg_class - we always use oids in context of \"class\"\n> > to what an object belongs. This means that two tuples\n> > from different system tables could have same oid values\n> > and everything would work perfectly.\n> \n> I meant we use them in many cases to link entries, and in\n> pg_description for descriptions and lots of other things\n> that may use them in the future for system table use.\n\nSo, add class' ID (uniq id from pg_class) when linking.\n\nVadim\n",
"msg_date": "Wed, 18 Jul 2001 14:35:06 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": ">> I meant we use them in many cases to link entries, and in\n>> pg_description for descriptions and lots of other things\n>> that may use them in the future for system table use.\n\npg_description is a point I hadn't thought about --- it uses OIDs\nto refer to pg_attribute entries. However, pg_description is pretty\nbroken in its assumptions about OIDs anyway. I'm inclined to change\nit to be indexed by\n\n\t(object type ID, object OID, attributenumber)\n\nthe same way that Philip proposed indexing pg_depend. Among other\nthings, that'd make it much cheaper to drop comments during a DROP\nTABLE. You could just scan on (object type ID, object OID), and get\nboth the table and all its columns in a single indexscan search,\nnot one per column as happens now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 18:26:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "> >> I meant we use them in many cases to link entries, and in\n> >> pg_description for descriptions and lots of other things\n> >> that may use them in the future for system table use.\n> \n> pg_description is a point I hadn't thought about --- it uses OIDs\n> to refer to pg_attribute entries. However, pg_description is pretty\n> broken in its assumptions about OIDs anyway. I'm inclined to change\n> it to be indexed by\n> \n> \t(object type ID, object OID, attributenumber)\n> \n> the same way that Philip proposed indexing pg_depend. Among other\n> things, that'd make it much cheaper to drop comments during a DROP\n> TABLE. You could just scan on (object type ID, object OID), and get\n> both the table and all its columns in a single indexscan search,\n> not one per column as happens now.\n\nRemember most pg_description comments are not on column but on functions\nand stuff. That attributenumber is not going to apply there.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 19:47:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Remember most pg_description comments are not on column but on functions\n> and stuff. That attributenumber is not going to apply there.\n\nSure, it'd just be zero for non-column items.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 19:55:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Remember most pg_description comments are not on column but on functions\n> > and stuff. That attributenumber is not going to apply there.\n> \n> Sure, it'd just be zero for non-column items.\n\nWhat do we do with other columns that need descriptions and don't have\noid column. Make the attribute column mean something else? I just\ndon't see a huge gain here and lots of confusion. User tables are a\ndifferent story.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 20:00:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What do we do with other columns that need descriptions and don't have\n> oid column.\n\nLike what?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 20:43:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What do we do with other columns that need descriptions and don't have\n> > oid column.\n> \n> Like what?\n\nDepends what other system tables you are intending to remove oid's for?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 20:53:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What do we do with other columns that need descriptions and don't have\n> oid column.\n>> \n>> Like what?\n\n> Depends what other system tables you are intending to remove oid's for?\n\nNothing that requires a description ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2001 20:58:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > What do we do with other columns that need descriptions and don't have\n> > oid column.\n> >> \n> >> Like what?\n> \n> > Depends what other system tables you are intending to remove oid's for?\n> \n> Nothing that requires a description ;-)\n\nYou are a sly one. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 20:59:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Tom mentioned what should be stored in the OID system column if no oid's\nare in the table. He also mentioned that he doesn't want a\nvariable-length tuple header so will always have an oid system column.\n\nWhat about moving the oid column out of the tuple header. This saves 4\nbytes in the header in cases where there is no oid on the table.\n\nIf they ask for an OID in a table, make it the first column of a table. \nAlso, if they have asked for oid's on the table, odds are they want\nSELECT * to show it.\n\nAlso, how about a GUC option that controls whether tables are created\nwith OID's by default.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 10:20:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What about moving the oid column out of the tuple header. This saves 4\n> bytes in the header in cases where there is no oid on the table.\n\nNo it doesn't --- at least not on machines where MAXALIGN is eight\nbytes.\n\nI don't think this is worth the trouble...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 11:29:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: OID wraparound (was Re: pg_depend) "
}
] |
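The (object type ID, object OID, attribute number) keying Tom proposes for pg_description can be illustrated with a toy comment store. This is a hypothetical Python sketch, not the catalog code; the names and OID values are invented. It shows the payoff mentioned above: dropping a table removes its own comment and all of its column comments in one prefix scan:

```python
# Toy model of a description catalog keyed by
# (classoid, objoid, objsubid): subid 0 is the object itself,
# positive subids are column numbers. (Hypothetical sketch.)

descriptions = {}

def comment_on(classoid, objoid, objsubid, text):
    descriptions[(classoid, objoid, objsubid)] = text

def drop_object_comments(classoid, objoid):
    # One scan on the (classoid, objoid) prefix drops the table's own
    # comment and every column comment together, instead of one
    # lookup per column.
    for key in [k for k in descriptions if k[:2] == (classoid, objoid)]:
        del descriptions[key]

RELATION = 1259   # using pg_class's OID as the "object type" tag

comment_on(RELATION, 5000, 0, "my table")
comment_on(RELATION, 5000, 1, "comment on the first column")
comment_on(RELATION, 5000, 2, "comment on the second column")
comment_on(RELATION, 6000, 0, "another table")

drop_object_comments(RELATION, 5000)
assert descriptions == {(RELATION, 6000, 0): "another table"}
```

The classoid component also answers Bruce's concern: functions, operators, and other non-column objects simply use their own catalog's ID with objsubid zero.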
[
{
"msg_contents": "FYI, I will be visiting Red Hat engineers in Toronto tomorrow\n(Thursday). I will be back online Friday.\n\nI should also mention that Jan, Tom, and I will be at the O'Reilly\nconference all next week.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 18 Jul 2001 21:11:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Visit to Red Hat Canada"
}
] |
[
{
"msg_contents": "\nJ-P wrote:\n> > I need to create a new system table like pg_log to\n> > implement a replication scheme. The big problem is\n> how\n> > I could get an OID for it, a unique OID that is\n> > reserved for that table???\n\nHiroshi Inoue wrote:\n>\n> \n> Do you need the following ?\n> \n> visco=# select oid from pg_class where relname =\n> 'pg_log';\n> oid\n> ------\n> 1269\n> (1 row)\n> \n> I'm afraid of misunderstanding.\n\nSorry my question was wrongly asked.\nWhat I need is a unique OID for my new system table\nthat is reserved for that table?\nA new Id that is not used by anything else, and that\nwill never be used.\n(The reference to pg_log was just to show the\nsimilarity of what I need).\n\nN.B. I can't just \n#select oid from pg_class \nand take one that is not there, since I don't know if\nthe oid I choose will be used by something else in the\nsystem??\n\nThanks for your help,\nJ-P \n\n\n\n_______________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.ca address at http://mail.yahoo.ca\n",
"msg_date": "Thu, 19 Jul 2001 11:19:11 -0400 (EDT)",
"msg_from": "J-P Guy <grizzlouca@yahoo.ca>",
"msg_from_op": true,
"msg_subject": "Re: OID wraparound (was Re: pg_depend)"
}
] |
[
{
"msg_contents": "> Yes, nowhere near, and yes. Sequence objects require disk I/O to\n> update; the OID counter essentially lives in shared memory, and can\n> be bumped for the price of a spinlock access.\n\nSequences also cache values (32 afair) - ie one log record is required\nfor 32 nextval-s. Sequence' data file is updated at checkpoint time,\nso - not so much IO. I really think that using sequences for system\ntables IDs would be good.\n\nVadim\n",
"msg_date": "Thu, 19 Jul 2001 08:54:24 -0700",
"msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>",
"msg_from_op": true,
"msg_subject": "RE: OID wraparound (was Re: pg_depend) "
}
] |
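Vadim's point — one log record per cache-full of values, so nextval is cheap in the common case — can be sketched as a block-reserving allocator. This is a hypothetical Python illustration of the caching idea only; the real sequence code is different and the cache size of 32 is taken from the message above:

```python
# Hypothetical sketch of a sequence that "logs" (persists) only once
# per CACHE values: each log record reserves a whole block, so a crash
# restarts past logged_up_to, possibly skipping unused values.

CACHE = 32

class Sequence:
    def __init__(self):
        self.current = 0        # last value handed out
        self.logged_up_to = 0   # highest value covered by the log
        self.log_records = 0    # how many log writes were needed

    def nextval(self):
        self.current += 1
        if self.current > self.logged_up_to:
            self.logged_up_to = self.current + CACHE - 1
            self.log_records += 1   # the only I/O-like step
        return self.current

seq = Sequence()
values = [seq.nextval() for _ in range(100)]
assert values == list(range(1, 101))
assert seq.log_records == 4   # one record per 32 values, not per call
```

The trade-off is visible in the model: values reserved by the last log record but never handed out are lost after a restart, which is why cached sequences can leave gaps.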
[
{
"msg_contents": "I need to programmatically relax a constraint during db synchronization. I\ntried setting tgenabled to false in the pg_trigger table, but it didn't seem\nto make a difference.\n\nThx,\n\nHowie\n\n\n\n",
"msg_date": "Thu, 19 Jul 2001 17:08:31 GMT",
"msg_from": "\"Howard Williams\" <howieshouse@home.com>",
"msg_from_op": true,
"msg_subject": "RELAX! - or more to the point,\n\thow do I temporarily relax a trigger/constraint?"
}
] |
[
{
"msg_contents": "\tI've followed the threads about needing to run vacuum nightly, and even\ngot an answer to a slow updating database regarding needing to vacuum. But I\nhaven't seen this question asked:\n\nis it possible to completely turn off the revision tracking feature so that\nvacuum does not need to be run at all?\n\nThanks for any pointers,\nMike\n\n",
"msg_date": "Thu, 19 Jul 2001 10:20:47 -0700",
"msg_from": "Mike Cianflone <mcianflone@littlefeet-inc.com>",
"msg_from_op": true,
"msg_subject": "Turning off revision tracking so vacuum never needs to be run"
},
{
"msg_contents": "Mike Cianflone <mcianflone@littlefeet-inc.com> writes:\n> is it possible to completely turn off the revision tracking feature so that\n> vacuum does not need to be run at all?\n\nNo.\n\nOf course, if you never update or delete any rows, you don't need VACUUM\n... but I suspect that's not what you had in mind.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 14:40:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Turning off revision tracking so vacuum never needs to be run "
}
] |
[
{
"msg_contents": "/home/projects/pgsql partition at hub.org is down to zero free space...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2001 23:18:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "hub.org out of disk space"
},
{
"msg_contents": "On Thu, 19 Jul 2001, Tom Lane wrote:\n\nHi Tom,\n\n I removed an ISO that Corey had made for me, that should free up some\nspace.\n\n> /home/projects/pgsql partition at hub.org is down to zero free space...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n Chris Bowlby,\n -----------------------------------------------------\n Web Developer @ Hub.org.\n excalibur@hub.org\n www.hub.org\n 1-902-542-3657\n -----------------------------------------------------\n\n",
"msg_date": "Fri, 20 Jul 2001 07:33:34 -0400 (EDT)",
"msg_from": "Chris Bowlby <excalibur@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: hub.org out of disk space"
},
{
"msg_contents": "On Fri, 20 Jul 2001, Chris Bowlby wrote:\n\n> On Thu, 19 Jul 2001, Tom Lane wrote:\n>\n> Hi Tom,\n>\n> I removed an ISO that Corey had made for me, that should free up some\n> space.\n\nAnd I removed some stuff.\n\nVince.\n\n>\n> > /home/projects/pgsql partition at hub.org is down to zero free space...\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> Chris Bowlby,\n> -----------------------------------------------------\n> Web Developer @ Hub.org.\n> excalibur@hub.org\n> www.hub.org\n> 1-902-542-3657\n> -----------------------------------------------------\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 20 Jul 2001 07:53:00 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: hub.org out of disk space"
},
{
"msg_contents": "\nthere, and I just cleared out about 500Meg+ of old garbage ... 1.2gig free\nagain ...\n\nOn Fri, 20 Jul 2001, Chris Bowlby wrote:\n\n> On Thu, 19 Jul 2001, Tom Lane wrote:\n>\n> Hi Tom,\n>\n> I removed an ISO that Corey had made for me, that should free up some\n> space.\n>\n> > /home/projects/pgsql partition at hub.org is down to zero free space...\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> Chris Bowlby,\n> -----------------------------------------------------\n> Web Developer @ Hub.org.\n> excalibur@hub.org\n> www.hub.org\n> 1-902-542-3657\n> -----------------------------------------------------\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Fri, 20 Jul 2001 09:09:22 -0300 (ADT)",
"msg_from": "The Hermit Hacker <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: hub.org out of disk space"
},
{
"msg_contents": "On Friday 20 July 2001 08:09, The Hermit Hacker wrote:\n> there, and I just cleared out about 500Meg+ of old garbage ... 1.2gig free\n> again ...\n\nUnless I get protests to the contrary, I'm going to remove all but the last \nsupported RPM versions in /pub/binary for each major version. IE, all but \nthe last 7.0.3 RPM will be removed, etc. A full RPMset across three or more \nplatforms plus source takes a bit of space.... :-) ftp/pub/binary is only up \nto 353356 blocks, and /home/projects/pgsql is now down to 446308 free.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 20 Jul 2001 11:17:29 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: hub.org out of disk space"
}
] |
[
{
"msg_contents": "Hello all,\n\n I've tried again sending large queries using libpq on Windows\nenvironment, without success.\n I downloaded the PostgreSQL v7.12 sources, compiled libpq.dll using\nMicrosoft's Visual C++ 6.0, and tried sending a large query.\n The problem is, when the query is > 8192 large, a NULL pointer is\nreturned from PQexec().\n I have tried using ZDE (http://www.zeoslib.org), which I helped\ndevelop, and pgAccess. ZDE is based on the Zeos Database Objects library,\nwhich provides full access to PostgreSQL to Borland Delphi and Borland C++\nBuilder compilers.\n Could anyone please try this query:\nftp://carcass.dhs.org/pub/test.zip on windows (using libpq) and confirm it\nsuceed ? This archive contains a test.sql source, which will create a dumb\ntable with a text filed and then try to insert in it a large data (>8192\nbytes) on it, and the libpq.dll I just compiled, for who want a fresh libpq\n(it's virus free, don't worry... ). All my current PostgreSQL driver\nimplementation is depending on this. I'm sure the libpq will fail, unless\nsomething very weird is happening in here... :)\n Other friends have confirmed this behaviour.\n I tried to look at the libpq sources to find out where's the error,\nbut I think it will take much less time to who develops it...\n\nBest Regards,\nSteve Howe\n\n\n",
"msg_date": "Fri, 20 Jul 2001 03:34:00 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Large queries - again..."
},
{
"msg_contents": "Well, I tested the query you sent, and I got these results accessing the\ndata:\n\n1) libpq from Windows (freshly compiled from 7.1.2 sources): Error:\npqReadData() -- read() failed: errno=0\nNo error\n2) ODBC from Windows: It works ok.\n\n\n\n\nSteve Howe <howe@carcass.dhs.org> escreveu nas not�cias de\nmensagem:9j8jce$ddo$1@news.tht.net...\n> Hello all,\n>\n> I've tried again sending large queries using libpq on Windows\n> environment, without success.\n> I downloaded the PostgreSQL v7.12 sources, compiled libpq.dll\nusing\n> Microsoft's Visual C++ 6.0, and tried sending a large query.\n> The problem is, when the query is > 8192 large, a NULL pointer is\n> returned from PQexec().\n> I have tried using ZDE (http://www.zeoslib.org), which I helped\n> develop, and pgAccess. ZDE is based on the Zeos Database Objects library,\n> which provides full access to PostgreSQL to Borland Delphi and Borland C++\n> Builder compilers.\n> Could anyone please try this query:\n> ftp://carcass.dhs.org/pub/test.zip on windows (using libpq) and confirm it\n> suceed ? This archive contains a test.sql source, which will create a dumb\n> table with a text filed and then try to insert in it a large data (>8192\n> bytes) on it, and the libpq.dll I just compiled, for who want a fresh\nlibpq\n> (it's virus free, don't worry... ). All my current PostgreSQL driver\n> implementation is depending on this. I'm sure the libpq will fail, unless\n> something very weird is happening in here... :)\n> Other friends have confirmed this behaviour.\n> I tried to look at the libpq sources to find out where's the\nerror,\n> but I think it will take much less time to who develops it...\n>\n> Best Regards,\n> Steve Howe\n>\n>\n\n\n",
"msg_date": "Fri, 20 Jul 2001 11:34:41 -0300",
"msg_from": "\"Eduardo Stern\" <eduardo@stern.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again..."
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n> I downloaded the PostgreSQL v7.12 sources, compiled libpq.dll using\n> Microsoft's Visual C++ 6.0, and tried sending a large query.\n> The problem is, when the query is > 8192 large, a NULL pointer is\n> returned from PQexec().\n\nIt sure sounds to me like you are invoking an old (6.5 or before) libpq.\nPerhaps you should check around to see if there are multiple libpq.dll\nfiles on your system ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 11:35:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "Hello Tom,\n\n Nope, I'm 100% sure that the libpq.dll used is the one I just\ncompiled. And I never installed an older libpq.dll on this system.\n My application loads specifically the libpq.dll I compiled (I use\nthe full library path on the call to LoadLibrary() call, so there is no way\nit is an older library.\n Eduardo Stern from dbExperts (http://www.dbexperts.com.br), a\nPostgreSQL specialized consulting company, got the same results.\n The ODBC driver, however, do not suffer from the same problem, once\nit does not use libpq.\n Maybe I should write a C sample that proves libpq under windows has\nthis bug ???\n\nBest Regards,\nSteve Howe\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Steve Howe\" <howe@carcass.dhs.org>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, July 20, 2001 12:35 PM\nSubject: Re: [HACKERS] Large queries - again...\n\n\n> \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > I downloaded the PostgreSQL v7.12 sources, compiled libpq.dll\nusing\n> > Microsoft's Visual C++ 6.0, and tried sending a large query.\n> > The problem is, when the query is > 8192 large, a NULL pointer\nis\n> > returned from PQexec().\n>\n> It sure sounds to me like you are invoking an old (6.5 or before) libpq.\n> Perhaps you should check around to see if there are multiple libpq.dll\n> files on your system ...\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Fri, 20 Jul 2001 14:35:05 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n> Nope, I'm 100% sure that the libpq.dll used is the one I just\n> compiled. And I never installed an older libpq.dll on this system.\n\nHmph. So what is left in PQerrorMessage() after the failure?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 13:42:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "Hello Tom,\n\n It returns \"Error: pqReadData() -- read() failed: errno=0 No error\n\" as expected when a nil pointer is returned.\n\nBest Regards,\nSteve Howe\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Steve Howe\" <howe@carcass.dhs.org>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, July 20, 2001 2:42 PM\nSubject: Re: [HACKERS] Large queries - again...\n\n\n> \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > Nope, I'm 100% sure that the libpq.dll used is the one I just\n> > compiled. And I never installed an older libpq.dll on this system.\n>\n> Hmph. So what is left in PQerrorMessage() after the failure?\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Fri, 20 Jul 2001 15:21:10 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n> It returns \"Error: pqReadData() -- read() failed: errno=0 No error\n> \" as expected when a nil pointer is returned.\n\n\"As expected\"? That's not what I'd expect, especially not for a\nbehavior that's dependent on the size of an *outgoing* message.\n\n(Thinks for awhile...) You're not using PQsetnonblocking() are you,\nby any chance?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 16:17:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Steve Howe\" <howe@carcass.dhs.org>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, July 20, 2001 5:17 PM\nSubject: Re: [HACKERS] Large queries - again...\n\n\n> \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > It returns \"Error: pqReadData() -- read() failed: errno=0 No\nerror\n> > \" as expected when a nil pointer is returned.\n>\n> \"As expected\"? That's not what I'd expect, especially not for a\n> behavior that's dependent on the size of an *outgoing* message.\nIt is expected, because it's the default message when a PQexec() query\nreturns NULL: pqReadData() will return nothing yet no error is signed.\nOf course, the \"really expected\" would be a sucessfull exec :-)\n\n> (Thinks for awhile...) You're not using PQsetnonblocking() are you,\n> by any chance?\nNo, I'm not. Asynchronous libpq connections on Windows are still not\nrealiable (althought I read someone submitted a patch recently), so I'm\nkeeping synchronous queries for a while. I'm not also using any non-standard\nfunctions; just plain PQconnectdb() and PQexec()...\n\nBest Regards,\nSteve\n\n",
"msg_date": "Fri, 20 Jul 2001 18:46:26 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "> > \"As expected\"? That's not what I'd expect, especially not for a\n> > behavior that's dependent on the size of an *outgoing* message.\n> It is expected, because it's the default message when a PQexec() query\n> returns NULL: pqReadData() will return nothing yet no error is signed.\n> Of course, the \"really expected\" would be a sucessfull exec :-)\n> \n> > (Thinks for awhile...) You're not using PQsetnonblocking() are you,\n> > by any chance?\n> No, I'm not. Asynchronous libpq connections on Windows are still not\n> realiable (althought I read someone submitted a patch recently), so I'm\n> keeping synchronous queries for a while. I'm not also using any non-standard\n> functions; just plain PQconnectdb() and PQexec()...\n\nYes, just applied. I will have another one next week.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 18:53:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again..."
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n>> (Thinks for awhile...) You're not using PQsetnonblocking() are you,\n>> by any chance?\n\n> No, I'm not.\n\nDrat, another perfectly good theory down the drain :-(.\n\nWell, we're not going to find out anymore until we discover what the\nerror code actually is --- the \"errno=0\" bogosity isn't helping.\nAs Bruce mentioned, we did just commit a patch that #defines errno\nas WSAGetLastError() on WIN32, so that you can get at least something\nuseful about socket errors. I'd suggest pulling the current CVS sources\n(or a nightly snapshot tarball dated after today) and building libpq\nfrom that. Then maybe we can learn more.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 19:08:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "> \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> >> (Thinks for awhile...) You're not using PQsetnonblocking() are you,\n> >> by any chance?\n>\n> > No, I'm not.\n>\n> Drat, another perfectly good theory down the drain :-(.\n>\n> Well, we're not going to find out anymore until we discover what the\n> error code actually is --- the \"errno=0\" bogosity isn't helping.\n> As Bruce mentioned, we did just commit a patch that #defines errno\n> as WSAGetLastError() on WIN32, so that you can get at least something\n> useful about socket errors. I'd suggest pulling the current CVS sources\n> (or a nightly snapshot tarball dated after today) and building libpq\n> from that. Then maybe we can learn more.\nUnhappyly, there are unresolved externals and it won't build...\nI'll try to fix it.\nThe log follows right below...\n\nBest regards,\nSteve Howe\n\n----------------------------------------------------------------------------\n----------\nMicrosoft (R) Program Maintenance Utility Version 6.00.8168.0\nCopyright (C) Microsoft Corp 1988-1998. All rights reserved.\n\n cd include\n if not exist config.h copy config.h.win32 config.h\n cd ..\n cd interfaces\\libpq\n nmake /f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 6.00.8168.0\nCopyright (C) Microsoft Corp 1988-1998. 
All rights reserved.\n\n link.exe @C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp\\nma01588.\n Creating library .\\Release\\libpqdll.lib and object .\\Release\\libpqdll.exp\nlibpq.lib(fe-exec.obj) : error LNK2001: unresolved external symbol _snprintf\nlibpq.lib(fe-misc.obj) : error LNK2001: unresolved external symbol _snprintf\nlibpq.lib(fe-auth.obj) : error LNK2001: unresolved external symbol _snprintf\nlibpq.lib(dllist.obj) : error LNK2001: unresolved external symbol _elog\n.\\Release\\libpq.dll : fatal error LNK1120: 2 unresolved externals\nNMAKE : fatal error U1077: 'link.exe' : return code '0x460'\nStop.\nNMAKE : fatal error U1077: '\"C:\\Program Files\\Microsoft Visual\nStudio\\VC98\\bin\\NMAKE.EXE\"' : return\ncode '0x2'\nStop.\n\n\n\n",
"msg_date": "Sat, 21 Jul 2001 00:47:51 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "\nOK, I just applied a patch to add the final fixes to Win32 libpq. \nPlease try the CVS or later snapshot to see how it works. The patch\nsuggested adding \n\n\t#define snprintf _snprintf\n\nto win32.h and I have done that. There was already one there for\nvsnprintf. I am quite confused about the elog() mention. I don't see\nwhere we added a call to elog() in the past day. I only see two\nmentions of elog in the code, both it dllist.c. They don't use elog()\nif you define FRONTEND. Please do -DFRONTEND on the compile line. I\nthink this will give you a good library binary.\n\nLet us know how the new code works. The most recent patch I just\napplied was tested by a user and it worked well for him. Nice to have\nthis resolved. I can mark this TODO item as done:\n\n\t* -Fix libpq to properly handle socket failures under native MS\n\t Win32 [libpq]\n\n\n\n> > \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > >> (Thinks for awhile...) You're not using PQsetnonblocking() are you,\n> > >> by any chance?\n> >\n> > > No, I'm not.\n> >\n> > Drat, another perfectly good theory down the drain :-(.\n> >\n> > Well, we're not going to find out anymore until we discover what the\n> > error code actually is --- the \"errno=0\" bogosity isn't helping.\n> > As Bruce mentioned, we did just commit a patch that #defines errno\n> > as WSAGetLastError() on WIN32, so that you can get at least something\n> > useful about socket errors. I'd suggest pulling the current CVS sources\n> > (or a nightly snapshot tarball dated after today) and building libpq\n> > from that. Then maybe we can learn more.\n> Unhappyly, there are unresolved externals and it won't build...\n> I'll try to fix it.\n> The log follows right below...\n> \n> Best regards,\n> Steve Howe\n> \n> ----------------------------------------------------------------------------\n> ----------\n> Microsoft (R) Program Maintenance Utility Version 6.00.8168.0\n> Copyright (C) Microsoft Corp 1988-1998. 
All rights reserved.\n> \n> cd include\n> if not exist config.h copy config.h.win32 config.h\n> cd ..\n> cd interfaces\\libpq\n> nmake /f win32.mak\n> \n> Microsoft (R) Program Maintenance Utility Version 6.00.8168.0\n> Copyright (C) Microsoft Corp 1988-1998. All rights reserved.\n> \n> link.exe @C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp\\nma01588.\n> Creating library .\\Release\\libpqdll.lib and object .\\Release\\libpqdll.exp\n> libpq.lib(fe-exec.obj) : error LNK2001: unresolved external symbol _snprintf\n> libpq.lib(fe-misc.obj) : error LNK2001: unresolved external symbol _snprintf\n> libpq.lib(fe-auth.obj) : error LNK2001: unresolved external symbol _snprintf\n> libpq.lib(dllist.obj) : error LNK2001: unresolved external symbol _elog\n> .\\Release\\libpq.dll : fatal error LNK1120: 2 unresolved externals\n> NMAKE : fatal error U1077: 'link.exe' : return code '0x460'\n> Stop.\n> NMAKE : fatal error U1077: '\"C:\\Program Files\\Microsoft Visual\n> Studio\\VC98\\bin\\NMAKE.EXE\"' : return\n> code '0x2'\n> Stop.\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Jul 2001 00:39:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again..."
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Steve Howe\" <howe@carcass.dhs.org>\nCc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; <pgsql-hackers@postgresql.org>\nSent: Saturday, July 21, 2001 1:39 AM\nSubject: Re: [HACKERS] Large queries - again...\n\n\n>\n> OK, I just applied a patch to add the final fixes to Win32 libpq.\n> Please try the CVS or later snapshot to see how it works. The patch\n> suggested adding\n>\n> #define snprintf _snprintf\n>\n> to win32.h and I have done that. There was already one there for\n> vsnprintf. I am quite confused about the elog() mention. I don't see\n> where we added a call to elog() in the past day. I only see two\n> mentions of elog in the code, both it dllist.c. They don't use elog()\n> if you define FRONTEND. Please do -DFRONTEND on the compile line. I\n> think this will give you a good library binary.\n\nI did it, but that brings other dependency problems (see below). I think\nit's better to properly fix the elog issue... :-)\n----------------------------------------------------------------------------\n--------------------------------------------\nC:\\ttt\\src>nmake -f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 6.00.8168.0\nCopyright (C) Microsoft Corp 1988-1998. All rights reserved.\n\n cd include\n if not exist config.h copy config.h.win32 config.h\n cd ..\n cd interfaces\\libpq\n nmake /f win32.mak\n\nMicrosoft (R) Program Maintenance Utility Version 6.00.8168.0\nCopyright (C) Microsoft Corp 1988-1998. All rights reserved.\n\n cl.exe @C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp\\nma01700.\ndllist.c\n..\\..\\backend\\lib\\dllist.c(20) : fatal error C1083: Cannot open include\nfile: 'sysexits.h': No such\nfile or directory\n----------------------------------------------------------------------------\n--------------------------------------------\n> Let us know how the new code works. 
The most recent patch I just\n> applied was tested by a user and it worked well for him. Nice to have\n> this resolved. I can mark this TODO item as done:\n>\n> * -Fix libpq to properly handle socket failures under native MS\n> Win32 [libpq]\nI want this fixed more then anybody else i the world, believe me :-)\n\nBest Regards,\nSteve Howe\n\n",
"msg_date": "Sat, 21 Jul 2001 02:21:56 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Large queries - again..."
},
{
"msg_contents": "\"Steve Howe\" <howe@carcass.dhs.org> writes:\n> ..\\..\\backend\\lib\\dllist.c(20) : fatal error C1083: Cannot open include\n> file: 'sysexits.h': No such file or directory\n\nJan added that recently. I was wondering if it was portable or not ...\nlooks like now we know :-(.\n\nFor the moment, just take out the include --- you may also need to\nreplace \"exit(EX_UNAVAILABLE)\" by plain \"exit(1)\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Jul 2001 11:06:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again... "
},
{
"msg_contents": "> > OK, I just applied a patch to add the final fixes to Win32 libpq.\n> > Please try the CVS or later snapshot to see how it works. The patch\n> > suggested adding\n> >\n> > #define snprintf _snprintf\n> >\n> > to win32.h and I have done that. There was already one there for\n> > vsnprintf. I am quite confused about the elog() mention. I don't see\n> > where we added a call to elog() in the past day. I only see two\n> > mentions of elog in the code, both it dllist.c. They don't use elog()\n> > if you define FRONTEND. Please do -DFRONTEND on the compile line. I\n> > think this will give you a good library binary.\n> \n> I did it, but that brings other dependency problems (see below). I think\n> it's better to properly fix the elog issue... :-)\n\nShouldn't we be defining FRONTEND in the win32.mak file?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Jul 2001 14:30:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again..."
},
{
"msg_contents": "Tom Lane wrote:\n> \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > ..\\..\\backend\\lib\\dllist.c(20) : fatal error C1083: Cannot open include\n> > file: 'sysexits.h': No such file or directory\n>\n> Jan added that recently. I was wondering if it was portable or not ...\n> looks like now we know :-(.\n\n Grmbl - tell me why I don't like Windows...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 23 Jul 2001 07:50:43 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Large queries - again..."
},
{
"msg_contents": "> Tom Lane wrote:\n> > \"Steve Howe\" <howe@carcass.dhs.org> writes:\n> > > ..\\..\\backend\\lib\\dllist.c(20) : fatal error C1083: Cannot open\ninclude\n> > > file: 'sysexits.h': No such file or directory\n> >\n> > Jan added that recently. I was wondering if it was portable or not ...\n> > looks like now we know :-(.\n>\n> Grmbl - tell me why I don't like Windows...\nPlease notify me when it's fixed so that I can test it.\nI'll also test the error messages returned from libpq on Windows, as\nrequested on another thread.\n\nBest Regards,\nSteve Howe\n\n",
"msg_date": "Mon, 23 Jul 2001 14:25:47 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": true,
"msg_subject": "Re: Large queries - again..."
}
] |
[
{
"msg_contents": "Hi !\nMy system is i686/Linux Mandrake 7.0/Postgresql v-7.0.2.\nI found a bug in the sql command ALTER TABLE ADD CONSTRAINT..., when I tried to add a composite foreign key constraint \n(a FK with more than one attribute). The problem is in the file identified by \n$Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v 1.71 2000/04/12 17:14:57 momjian Exp $ \nin the code lines #1139 to #1150, when the function AlterTableAddConstraint() tries to construct the vector of the trigger�s tgargs.\n>From the position 4 and forward, it must collect the pairs of fk_attrs and pk_attrs (interleaved), but the current code put first all \nfk_attrs and then all the pk_attrs, leading to an error.\nI fixed the bug and tested the update and now it works well. I send you a \"diff -c command.c command.fixed.c\" (with the diff : \nGNU diffutils version 2.7) and the output is:\n\n*** command.c\t\tSun May 6 21:13:06 2001\n--- command.fixed.c\tMon Jul 9 19:58:21 2001\n***************\n*** 19,24 ****\n--- 19,25 ----\n *\t manipulating code in the commands/ directory, should go\n *\t someplace closer to the lib/catalog code.\n *\n+ *\n *-------------------------------------------------------------------------\n */\n #include \"postgres.h\"\n***************\n*** 1138,1152 ****\n \t\t\t\t{\n \t\t\t\t\tIdent\t *fk_at = lfirst(list);\n \n! \t\t\t\t\ttrig.tgargs[count++] = fk_at->name;\n \t\t\t\t}\n \t\t\t\tforeach(list, fkconstraint->pk_attrs)\n \t\t\t\t{\n \t\t\t\t\tIdent\t *pk_at = lfirst(list);\n \n! \t\t\t\t\ttrig.tgargs[count++] = pk_at->name;\n \t\t\t\t}\n! \t\t\t\ttrig.tgnargs = count;\n \n \t\t\t\tscan = heap_beginscan(rel, false, SnapshotNow, 0, NULL);\n \t\t\t\tAssertState(scan != NULL);\n--- 1139,1156 ----\n \t\t\t\t{\n \t\t\t\t\tIdent\t *fk_at = lfirst(list);\n \n! \t\t\t\t\ttrig.tgargs[count] = fk_at->name;\n! 
\t\t\t\t\tcount+=2;\n \t\t\t\t}\n+ \t\t\t\tcount = 5;\n \t\t\t\tforeach(list, fkconstraint->pk_attrs)\n \t\t\t\t{\n \t\t\t\t\tIdent\t *pk_at = lfirst(list);\n \n! \t\t\t\t\ttrig.tgargs[count] = pk_at->name;\n! \t\t\t\t\tcount+=2;\n \t\t\t\t}\n! \t\t\t\ttrig.tgnargs = (count-1);\n \n \t\t\t\tscan = heap_beginscan(rel, false, SnapshotNow, 0, NULL);\n \t\t\t\tAssertState(scan != NULL);\n***************\n*** 1220,1223 ****\n \tLockRelation(rel, lockstmt->mode);\n \n \theap_close(rel, NoLock);\t/* close rel, keep lock */\n! }\n--- 1224,1227 ----\n \tLockRelation(rel, lockstmt->mode);\n \n \theap_close(rel, NoLock);\t/* close rel, keep lock */\n! } \n\n\nI wish it would help you. If it�s necessary, drop me a line. Regards\n Jose Luis Ozzano.\n\n(P.D.: I attached the messaje in a file edited in LINUX. Maybe you have problems to read the original text)\n>From pgsql-hackers-owner@postgresql.org Fri Jul 20 13:55:40 2001\nReceived: from postgresql.org.org (webmail.postgresql.org [216.126.85.28])\n\tby postgresql.org (8.11.3/8.11.1) with SMTP id f6KGs6a95467\n\tfor <pgsql-hackers@postgresql.org>; Fri, 20 Jul 2001 12:54:06 -0400 (EDT)\n\t(envelope-from pgsql-hackers-owner@postgresql.org)\nReceived: from news.tht.net (news.hub.org [216.126.91.242])\n\tby postgresql.org (8.11.3/8.11.1) with ESMTP id f6IDVAa53796;\n\tWed, 18 Jul 2001 09:31:10 -0400 (EDT)\n\t(envelope-from news@news.tht.net)\nReceived: (from news@localhost)\n\tby news.tht.net (8.11.4/8.11.4) id f6IDMm486418;\n\tWed, 18 Jul 2001 09:22:48 -0400 (EDT)\n\t(envelope-from news)\nMessage-ID: <3B558D0C.E046E8A6@wafishermn.com>\nFrom: Justin Koivisto <justink@wafishermn.com>\nX-Mailer: Mozilla 4.73 [en] (X11; U; Linux 2.4.5 i686)\nX-Accept-Language: en\nMIME-Version: 1.0\nX-Newsgroups: php.general,alt.php,ba.php,de.comp.lang.php,fr.comp.infosystemes.www.auteurs.php,comp.lang.perl.tk,comp.databases.postgresql.general,comp.databases.postgresql.hackers,comp.databases.postgresql.questions,mailing.database.mysql\nSubject: Re: 
All computers in the world MUST sync with ATOMIC clock before 12:00 \n AM 21 July 2001!!!\nReferences: <3B54F80B.C5DC2979@yahoo.com>\nContent-Type: text/plain; charset=us-ascii\nContent-Transfer-Encoding: 7bit\nLines: 1\nDate: Wed, 18 Jul 2001 13:22:47 GMT\nX-Complaints-To: abuse@onvoy.com\nOrganization: Onvoy\nTo: pgsql-hackers@postgresql.org.pgsql-general@postgresql.org\nX-Archive-Number: 200107/577\nX-Sequence-Number: 11396\n\nJust another way to get your name on the top 10 lists, eh?\n",
"msg_date": "Fri, 20 Jul 2001 12:23:16 +0300 (GMT+03:00)",
"msg_from": "jozzano <jozzano@exa.unicen.edu.ar>",
"msg_from_op": true,
"msg_subject": "BUG (fixed) in CREATE TABLE ADD CONSTRAINT...(v-7.0.2)"
}
] |
[
{
"msg_contents": "Hi !\nMy system is i686/Linux Mandrake 7.0/Postgresql v-7.0.2.\nI found a bug in the sql command ALTER TABLE ADD CONSTRAINT..., when I tried to add a composite foreign key constraint \n(a FK with more than one attribute). The problem is in the file identified by \n$Header: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/command.c,v 1.71 2000/04/12 17:14:57 momjian Exp $ \nin the code lines #1139 to #1150, when the function AlterTableAddConstraint() tries to construct the vector of the trigger�s tgargs.\n>From the position 4 and forward, it must collect the pairs of fk_attrs and pk_attrs (interleaved), but the current code put first all \nfk_attrs and then all the pk_attrs, leading to an error.\nI fixed the bug and tested the update and now it works well. I send you a \"diff -c command.c command.fixed.c\" (with the diff : \nGNU diffutils version 2.7) and the output is:\n\n*** command.c\t\tSun May 6 21:13:06 2001\n--- command.fixed.c\tMon Jul 9 19:58:21 2001\n***************\n*** 19,24 ****\n--- 19,25 ----\n *\t manipulating code in the commands/ directory, should go\n *\t someplace closer to the lib/catalog code.\n *\n+ *\n *-------------------------------------------------------------------------\n */\n #include \"postgres.h\"\n***************\n*** 1138,1152 ****\n \t\t\t\t{\n \t\t\t\t\tIdent\t *fk_at = lfirst(list);\n \n! \t\t\t\t\ttrig.tgargs[count++] = fk_at->name;\n \t\t\t\t}\n \t\t\t\tforeach(list, fkconstraint->pk_attrs)\n \t\t\t\t{\n \t\t\t\t\tIdent\t *pk_at = lfirst(list);\n \n! \t\t\t\t\ttrig.tgargs[count++] = pk_at->name;\n \t\t\t\t}\n! \t\t\t\ttrig.tgnargs = count;\n \n \t\t\t\tscan = heap_beginscan(rel, false, SnapshotNow, 0, NULL);\n \t\t\t\tAssertState(scan != NULL);\n--- 1139,1156 ----\n \t\t\t\t{\n \t\t\t\t\tIdent\t *fk_at = lfirst(list);\n \n! \t\t\t\t\ttrig.tgargs[count] = fk_at->name;\n! 
\t\t\t\t\tcount+=2;\n \t\t\t\t}\n+ \t\t\t\tcount = 5;\n \t\t\t\tforeach(list, fkconstraint->pk_attrs)\n \t\t\t\t{\n \t\t\t\t\tIdent\t *pk_at = lfirst(list);\n \n! \t\t\t\t\ttrig.tgargs[count] = pk_at->name;\n! \t\t\t\t\tcount+=2;\n \t\t\t\t}\n! \t\t\t\ttrig.tgnargs = (count-1);\n \n \t\t\t\tscan = heap_beginscan(rel, false, SnapshotNow, 0, NULL);\n \t\t\t\tAssertState(scan != NULL);\n***************\n*** 1220,1223 ****\n \tLockRelation(rel, lockstmt->mode);\n \n \theap_close(rel, NoLock);\t/* close rel, keep lock */\n! }\n--- 1224,1227 ----\n \tLockRelation(rel, lockstmt->mode);\n \n \theap_close(rel, NoLock);\t/* close rel, keep lock */\n! } \n\n\nI wish it would help you. If it�s necessary, drop me a line. Regards\n Jose Luis Ozzano.\n\n(P.D.: I attached this same messaje, edited in LINUX, because yo may have trouble reading it)",
"msg_date": "Fri, 20 Jul 2001 12:27:19 +0300 (GMT+03:00)",
"msg_from": "jozzano <jozzano@exa.unicen.edu.ar>",
"msg_from_op": true,
"msg_subject": "BUG (fixed) in CREATE TABLE ADD CONSTRAINT...(v-7.0.2)"
},
{
"msg_contents": "jozzano <jozzano@exa.unicen.edu.ar> writes:\n> My system is i686/Linux Mandrake 7.0/Postgresql v-7.0.2.\n> I found a bug in the sql command ALTER TABLE ADD CONSTRAINT...,\n\nThis bug seems to be already fixed in release 7.1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 13:53:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG (fixed) in CREATE TABLE ADD CONSTRAINT...(v-7.0.2) "
}
] |
[
{
"msg_contents": "I'm pretty new to postgresql.. I'm using a fresh compile/install of postgresql 7.1.2 without any special options.. but here's my problem:\n\nsemantic=# create temp table ttmptable(lookup_id int, rating int);\nCREATE\nsemantic=# SELECT doEverythingTemp(20706,2507);\n doeverythingtemp \n------------------\n 1\n(1 row)\nsemantic=# DROP table ttmptable;\nDROP\nsemantic=# create temp table ttmptable(lookup_id int, rating int);\nCREATE\nsemantic=# SELECT doEverythingTemp(20706,2507);\nERROR: Relation 4348389 does not exist\n\n--- schema --\n\nCREATE FUNCTION doEverythingTemp(int,int) RETURNS int AS '\nDECLARE\n rrec RECORD;\n userid int;\n lookupid int;\n rrating int;\n ruser int;\nBEGIN\n userid := $1;\n lookupid := $2;\n FOR rrec IN SELECT webuser_id,rating FROM rating WHERE webuser_id!=userid AND lookup_id=lookupid;\n rrating:=rrec.rating;\n ruser:=rrec.webuser_id;\n INSERT INTO ttmptable SELECT lookup_id,rrating*rating FROM rating WHERE webuser_id=ruser AND lookup_id!=lookupid;\n END LOOP;\n RETURN 1;\nEND;' LANGUAGE 'plpgsql'\n\nTable \"rating\"\n Attribute | Type | Modifier \n-------------+---------+----------------------------------------------------------\nwebuser_id | integer | not null default '0'\ncategory_id | integer | not null default '0'\nlookup_id | integer | not null default '0'\nrating | integer | not null default '0'\nrating_id | integer | not null default nextval('\"rating_rating_id_seq\"'::text)\nIndices: rating_category_id_idx,\n rating_lookup_id_idx,\n rating_rating_id_key,\n rating_webuser_id_idx\n\n\nI've tried regular tables, creating the table from within the function, and a few other things.. no luck. Does anyone have ANY idea how I can either redesign this query or make the create/drop thing work properly?\n\nThanks,\n(::) Bob Ippolito\n",
"msg_date": "Fri, 20 Jul 2001 06:27:28 -0400",
"msg_from": "\"\\(::\\) Bob Ippolito\" <bob@redivi.com>",
"msg_from_op": true,
"msg_subject": "problem with creating/dropping tables and plpgsql ?"
},
{
"msg_contents": "\"\\(::\\) Bob Ippolito\" <bob@redivi.com> writes:\n> semantic=# DROP table ttmptable;\n> DROP\n> semantic=# create temp table ttmptable(lookup_id int, rating int);\n> CREATE\n> semantic=# SELECT doEverythingTemp(20706,2507);\n> ERROR: Relation 4348389 does not exist\n\nYeah, temp tables and plpgsql functions don't coexist very well yet.\n(plpgsql tries to cache query plans, and at the moment there's no\nmechanism to let it flush obsolete plans when a table is deleted.)\n\nWhat you'll need to do is create a temp table that lasts for the whole\nsession and is re-used by each successive call of the plpgsql function.\nYou don't need to worry about dropping the temp table at session exit;\nthat's what temp tables are for, after all, to go away automatically.\nSo, just delete all its contents at entry or exit of the function,\nand you can re-use it each time through.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 11:53:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with creating/dropping tables and plpgsql ? "
}
] |
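A minimal sketch of the session-long temp table workaround Tom describes above. The table and function bodies here are simplified, hypothetical versions of Bob's originals; the point is that the table is cleared on each call rather than dropped and recreated, so the plan plpgsql caches keeps pointing at the same relation for the whole session:

```sql
-- Create the temp table once per session, before the first call.
-- It goes away automatically at session exit.
CREATE TEMP TABLE ttmptable(lookup_id int, rating int);

CREATE FUNCTION doEverythingTemp(int, int) RETURNS int AS '
BEGIN
    -- Clear leftovers from any previous call; the table itself is
    -- never dropped, so the cached query plan stays valid.
    DELETE FROM ttmptable;
    INSERT INTO ttmptable
        SELECT lookup_id, rating FROM rating WHERE webuser_id = $1;
    RETURN 1;
END;' LANGUAGE 'plpgsql';
```

Successive calls of the function in one session then reuse the same relation, avoiding the "Relation NNN does not exist" error from the stale cached plan.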
[
{
"msg_contents": "Does anyone know if it is possible to define a Postgres C function as taking a\nvariable number of parameters? The fmgr code will pass it, but I don't see any\nway to use \"create function\" to register it.\n\nDoes one have to issue a create function for each additional parameter?\n\nI am trying to port some mysql stuff to postgres, and mysql has a function\n\"concat\" which will concatenate a number of fields. I have this coded for\nPostgres, but I can't get it registered. I used a few \"create function\"\nstatements to cover the number of parameters I need, but this is really ugly.\n\n(Also, this will help with \"decode(),\" an oraclesque function I use.)\n",
"msg_date": "Fri, 20 Jul 2001 08:49:43 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "C functions"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Does anyone know if it is possible to define a Postgres C function as\n> taking a variable number of parameters? The fmgr code will pass it,\n> but I don't see any way to use \"create function\" to register it.\n\nNo, it's not. There is some (purely speculative) support for the idea\nin the fmgr code, but none anywhere else, as yet.\n\n> Does one have to issue a create function for each additional parameter?\n\nYup, you could make multiple pg_proc entries all pointing at the same\nC function. Kinda grotty, but...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2001 12:02:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: C functions "
}
] |
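Tom's "multiple pg_proc entries" suggestion would look roughly like this. The function name and library path are invented for illustration; each declaration registers a separate catalog entry that points at the same C symbol, and the C code can inspect fcinfo->nargs to see how many arguments it actually received:

```sql
-- One CREATE FUNCTION per arity, all resolving to the same C function.
CREATE FUNCTION concat_text(text, text) RETURNS text
    AS '/usr/local/pgsql/lib/myconcat.so', 'concat_text'
    LANGUAGE 'C';

CREATE FUNCTION concat_text(text, text, text) RETURNS text
    AS '/usr/local/pgsql/lib/myconcat.so', 'concat_text'
    LANGUAGE 'C';

CREATE FUNCTION concat_text(text, text, text, text) RETURNS text
    AS '/usr/local/pgsql/lib/myconcat.so', 'concat_text'
    LANGUAGE 'C';
```

Grotty, as Tom says, but it covers each argument count you need without touching the fmgr code.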
[
{
"msg_contents": "Someone on IRC just mentioned that mere mortals can create tables in\ntemplate1. If the user restricts template1 access to users via\npg_hba.conf, certain commands will not work that use template1\nconnection. \n\nAny solutions? I think we need table creation permissions even if we\ndon't overhaul the permission system for 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 09:40:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Ability to create tables"
}
] |

[
{
"msg_contents": "Straight out of Allied peace talks, we've got this article up at mysql.com\nhttp://www.mysql.com/news/article-76.html\n\nOne wonders what happened to the postal or email systems that this couldn't\nhave been delivered privately.\n\nIn all honesty, it appears mysql.org was overdue, the level of rhetoric\ncoming from MySQL AB is incredible.\n\nPerhaps Postgresql folks could start thinking of peace talk sites as well? I\nrecommand the tropics. Then all that's left is to find something to fight\nabout to justify a flight down to paradise.\n\nAZ\n\n\n\n\n\n",
"msg_date": "Fri, 20 Jul 2001 10:16:00 -0400",
"msg_from": "\"August Zajonc\" <junk-pgsql@aontic.com>",
"msg_from_op": true,
"msg_subject": "Neutral Soil (OT)"
},
{
"msg_contents": "August Zajonc wrote:\n\n> Perhaps Postgresql folks could start thinking of peace talk sites as well? I\n> recommand the tropics. Then all that's left is to find something to fight\n> about to justify a flight down to paradise.\n\nYou are all welcome here in Cyprus. Monty too, he will find a lot of\nSwedish fellows here to share a beer. :-)\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Mon, 23 Jul 2001 10:47:51 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: Neutral Soil (OT)"
}
] |
[
{
"msg_contents": "Reported by Tatsuo with 1000 backends all waking up at the same time:\n\n\t* Create spinlock sleepers queue so everyone doesn't wake up at once \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 20 Jul 2001 10:27:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Added TODO item"
}
] |
[
{
"msg_contents": "\n> As I mentioned already I'm implementing updatable cursors\n> in ODBC and have half done it. If OIDs would be optional\n> my trial loses its validity but I would never try another\n> implementation.\n\nBut how can you do that ? The oid index is only created by \nthe dba for specific tables, thus your update would do an update\nwith a where restriction, that is not indexed. \nThis would be darn slow, no ?\n\nHow about instead selecting the primary key and one of the tid's \n(I never remember which, was it ctid ?) instead, so you can validate\nwhen a row changed between the select and the update ? \n\nAndreas\n",
"msg_date": "Fri, 20 Jul 2001 16:45:19 +0200",
"msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>",
"msg_from_op": true,
"msg_subject": "AW: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "> -----Original Message-----\n> Zeugswetter Andreas SB\n> \n> > As I mentioned already I'm implementing updatable cursors\n> > in ODBC and have half done it. If OIDs would be optional\n> > my trial loses its validity but I would never try another\n> > implementation.\n> \n> But how can you do that ? The oid index is only created by \n> the dba for specific tables, thus your update would do an update\n> with a where restriction, that is not indexed. \n> This would be darn slow, no ?\n> \n\nPlease look at my another(previous ?) posting to pgsql-hackers.\nI would use both TIDs and OIDs, TIDs for fast access, OIDs\nfor identification.\n\n> How about instead selecting the primary key and one of the tid's \n> (I never remember which, was it ctid ?) instead, so you can validate\n> when a row changed between the select and the update ? \n> \n\nXmin is also available for row-versioning. But now I'm wondering\nif TID/xmin are guranteed to keep such characteriscs.\nEven Object IDentifier is about to lose the existence. \nProbably all-purpose application mustn't use system columns\nat all though I've never heard of it in other dbms-s.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Sat, 21 Jul 2001 15:31:13 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: OID wraparound (was Re: pg_depend)"
}
] |
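A rough sketch of the TID-plus-row-version scheme Andreas and Hiroshi are discussing, using a hypothetical accounts table. As Hiroshi notes, there is no guarantee that these system columns keep their current semantics across releases, so this is illustration only:

```sql
-- When the cursor reads the row, also fetch its physical address
-- (ctid) and the id of the transaction that created it (xmin):
SELECT ctid, xmin, * FROM accounts WHERE id = 42;

-- Later, update through the saved ctid (fast, no index needed), but
-- only if xmin still matches, i.e. no other transaction has rewritten
-- the row between the SELECT and the UPDATE:
UPDATE accounts
   SET balance = balance - 100
 WHERE ctid = '(0,1)'     -- value saved from the SELECT
   AND xmin = 12345;      -- value saved from the SELECT
```

Zero rows updated then signals that the row moved or changed, and the application must re-fetch it by its logical key.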
[
{
"msg_contents": "uvscan doesn't extract out MIME attachments but amavis does. You have to\nhave a whole lot of un archivers on the system for that reason.\n\nDave\nOn Tue, 2002-07-30 at 22:13, Christopher Kings-Lynne wrote:\n> Hmmm - I'm pretty sure that uvscan won't automatically extract out MIME\n> attachements. You need to scan normal files.\n> \n> We use inflex on our mail servers to extract all our emails before\n> scanning...\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: Marc G. Fournier [mailto:scrappy@hub.org]\n> > Sent: Tuesday, 30 July 2002 10:47 PM\n> > To: Larry Rosenman\n> > Cc: Christopher Kings-Lynne; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] Virus Emails\n> >\n> >\n> >\n> >\n> > Okay, am playing with this on one of my 'none-critical' servers right now\n> > ... tried to use uvscan from teh command line, and it didn't appear to\n> > pick up any of the Klez stuff, even though I know I have a few in my\n> > mailbox ...\n> >\n> > What options should I be running as? I'm using the following:\n> >\n> > uvscan --analyse --recursive --mime --summary --program /var/spool/mail\n> >\n> > On 28 Jul 2002, Larry Rosenman wrote:\n> >\n> > > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > > >\n> > > > God, I go through 200+ of those almost daily as moderator ...\n> > imagine if\n> > > > we had the lists open? :)\n> > > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > > people, and have amavisd-milter running to prevent them from even\n> > > getting in the door.\n> > >\n> > > Mayhaps pgsql.org should do the same?\n> > >\n> > >\n> > > >\n> > > >\n> > > > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> > > >\n> > > > > Hi guys,\n> > > > >\n> > > > > I seem to be getting virus emails that pretend to be one of\n> > your guys. eg.\n> > > > > I get them from T.Ishii and N.Conway, etc. Anyone out\n> > there on the list who\n> > > > > should perhaps scan their computer? 
:)\n> > > > >\n> > > > > Chris\n> > > > >\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > >\n> > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > > message can get through to the mailing list cleanly\n> > > >\n> > > --\n> > > Larry Rosenman http://www.lerctr.org/~ler\n> > > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > >\n> > >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n\n\n",
"msg_date": "20 Jul 2001 22:33:48 -0400",
"msg_from": "Dave Cramer <dave@fastcrypt.com>",
"msg_from_op": true,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "Hi guys,\n\nI seem to be getting virus emails that pretend to be one of your guys. eg.\nI get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\nshould perhaps scan their computer? :)\n\nChris\n\n\n",
"msg_date": "Sat, 27 Jul 2002 10:26:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Virus Emails"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I seem to be getting virus emails that pretend to be one of your guys. eg.\n> I get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\n> should perhaps scan their computer? :)\n\nOne of the nastier aspects of the Klez virus is that it searches\naccessible files and webpages for email addresses. It doesn't just spam\nall the addresses it can find --- it spams each address with a false\n\"From:\" that's a found-nearby address. So mail-list archives are a\ngold mine for it: it can spam you with a false \"From:\" that you will\nprobably recognize.\n\nHowever, even a trivial look at the detail mail headers (Received: etc)\nwill convince you that the spam did not originate from the claimed\n\"From:\" address. If you care to post a few sets of complete headers,\nwe can probably triangulate pretty quickly on the virus-infected loser\nwho's originating these messages.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Jul 2002 02:35:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails "
},
{
"msg_contents": "On Sat, 27 Jul 2002, Tom Lane wrote:\n\n> One of the nastier aspects of the Klez virus....\n>\n> However, even a trivial look at the detail mail headers (Received: etc)\n> will convince you that the spam did not originate from the claimed\n> \"From:\" address. If you care to post a few sets of complete headers,\n> we can probably triangulate pretty quickly on the virus-infected loser\n> who's originating these messages.\n\nIt appears to me that the envelope sender is not forged by Klez.H,\nassuming that that's the virus I'm getting all the time. So you\ncould check for the \"Return-Path:\" header, or maybe \"From \" (note:\nno colon) if you're using a Berkeley-mailbox style system, and find\nout the e-mail address of the real sender.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n",
"msg_date": "Sun, 28 Jul 2002 18:06:12 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails "
},
{
"msg_contents": "That may be true with some variants.\nHowever my mail server has rejected the relay of several mails sent pretending \nto be from me (envelope sender) to other parties and I think these could be \nklez variants or another such virus. Since my server rejected them I cannot \nbe sure of the contents.\n\nOn Sunday 28 July 2002 04:06 am, Curt Sampson wrote:\n> On Sat, 27 Jul 2002, Tom Lane wrote:\n> > One of the nastier aspects of the Klez virus....\n> >\n> > However, even a trivial look at the detail mail headers (Received: etc)\n> > will convince you that the spam did not originate from the claimed\n> > \"From:\" address. If you care to post a few sets of complete headers,\n> > we can probably triangulate pretty quickly on the virus-infected loser\n> > who's originating these messages.\n>\n> It appears to me that the envelope sender is not forged by Klez.H,\n> assuming that that's the virus I'm getting all the time. So you\n> could check for the \"Return-Path:\" header, or maybe \"From \" (note:\n> no colon) if you're using a Berkeley-mailbox style system, and find\n> out the e-mail address of the real sender.\n>\n> cjs\n\n",
"msg_date": "Sun, 28 Jul 2002 13:57:30 -0500",
"msg_from": "David Walker <pgsql@grax.com>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "\nGod, I go through 200+ of those almost daily as moderator ... imagine if\nwe had the lists open? :)\n\n\nOn Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n\n> Hi guys,\n>\n> I seem to be getting virus emails that pretend to be one of your guys. eg.\n> I get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\n> should perhaps scan their computer? :)\n>\n> Chris\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Sun, 28 Jul 2002 22:10:42 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> \n> God, I go through 200+ of those almost daily as moderator ... imagine if\n> we had the lists open? :)\nI picked up a copy of McAfee's vscan for FreeBSD from one of my contract\npeople, and have amavisd-milter running to prevent them from even\ngetting in the door. \n\nMayhaps pgsql.org should do the same? \n\n\n> \n> \n> On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> \n> > Hi guys,\n> >\n> > I seem to be getting virus emails that pretend to be one of your guys. eg.\n> > I get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\n> > should perhaps scan their computer? :)\n> >\n> > Chris\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "28 Jul 2002 20:14:13 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> \n> God, I go through 200+ of those almost daily as moderator ... imagine if\n> we had the lists open? :)\n> \n\nHow do you prevent virus emails from coming in that look like they are\nfrom the intended person? Does the filter check only the envelope from\nand not the From: line?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 28 Jul 2002 21:45:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "On 28 Jul 2002, Larry Rosenman wrote:\n\n> On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> >\n> > God, I go through 200+ of those almost daily as moderator ... imagine if\n> > we had the lists open? :)\n> I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> people, and have amavisd-milter running to prevent them from even\n> getting in the door.\n>\n> Mayhaps pgsql.org should do the same?\n\nOne of the many things on my list to do ... how do you find the vscan\nstuff? do you find it slows down email noticeably?\n\n\n>\n>\n> >\n> >\n> > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> >\n> > > Hi guys,\n> > >\n> > > I seem to be getting virus emails that pretend to be one of your guys. eg.\n> > > I get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\n> > > should perhaps scan their computer? :)\n> > >\n> > > Chris\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n>\n\n",
"msg_date": "Sun, 28 Jul 2002 23:44:50 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "On Sun, 2002-07-28 at 21:44, Marc G. Fournier wrote:\n> On 28 Jul 2002, Larry Rosenman wrote:\n> \n> > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > >\n> > > God, I go through 200+ of those almost daily as moderator ... imagine if\n> > > we had the lists open? :)\n> > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > people, and have amavisd-milter running to prevent them from even\n> > getting in the door.\n> >\n> > Mayhaps pgsql.org should do the same?\n> \n> One of the many things on my list to do ... how do you find the vscan\n> stuff? do you find it slows down email noticeably?\nSpamAssassin slows it down much more. The vscan stuff is FAST, and\nrunning as a Milter prevents it from even getting in the door. \n\nSince most of my large mail is generally klez and friends, this speeds\nup the SpamAssassin stuff by design. \n\n\nI like the vscan stuff, and McAfee updates the DAT files at least\nweekly, although I have the update_dat script from ports try every day\nto get new ones. \n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "28 Jul 2002 21:47:40 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "> > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > people, and have amavisd-milter running to prevent them from even\n> > getting in the door.\n> >\n> > Mayhaps pgsql.org should do the same?\n>\n> One of the many things on my list to do ... how do you find the vscan\n> stuff? do you find it slows down email noticeably?\n\nWe actually use the McAfee vscan at work, but it's not blocking these\nviruses. If you're on a freebsd box, install /usr/ports/security/vscan and\n/usr/ports/security/uvscan-dat. it will then automatically update your DAT\nfiles.\n\nChris\n\n",
"msg_date": "Mon, 29 Jul 2002 10:55:55 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "On Sun, 2002-07-28 at 21:55, Christopher Kings-Lynne wrote:\n> > > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > > people, and have amavisd-milter running to prevent them from even\n> > > getting in the door.\n> > >\n> > > Mayhaps pgsql.org should do the same?\n> >\n> > One of the many things on my list to do ... how do you find the vscan\n> > stuff? do you find it slows down email noticeably?\n> \n> We actually use the McAfee vscan at work, but it's not blocking these\n> viruses. \nWhy isn't it ? Klez is caught by it... \n\n\n> \n> Chris\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "28 Jul 2002 21:58:46 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "On Sun, 28 Jul 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> >\n> > God, I go through 200+ of those almost daily as moderator ... imagine if\n> > we had the lists open? :)\n> >\n>\n> How do you prevent virus emails from coming in that look like they are\n> from the intended person? Does the filter check only the envelope from\n> and not the From: line?\n\nDon't filter, scan for viruses. McAfee finds it just fine.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 29 Jul 2002 06:01:19 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "On Sun, 28 Jul 2002, Marc G. Fournier wrote:\n\n> On 28 Jul 2002, Larry Rosenman wrote:\n>\n> > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > >\n> > > God, I go through 200+ of those almost daily as moderator ... imagine if\n> > > we had the lists open? :)\n> > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > people, and have amavisd-milter running to prevent them from even\n> > getting in the door.\n> >\n> > Mayhaps pgsql.org should do the same?\n>\n> One of the many things on my list to do ... how do you find the vscan\n> stuff? do you find it slows down email noticeably?\n\npop4 doesn't even break a sweat and a ton of mail goes thru there every\nday.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 29 Jul 2002 06:10:17 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "\n\nOkay, am playing with this on one of my 'none-critical' servers right now\n... tried to use uvscan from teh command line, and it didn't appear to\npick up any of the Klez stuff, even though I know I have a few in my\nmailbox ...\n\nWhat options should I be running as? I'm using the following:\n\nuvscan --analyse --recursive --mime --summary --program /var/spool/mail\n\nOn 28 Jul 2002, Larry Rosenman wrote:\n\n> On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> >\n> > God, I go through 200+ of those almost daily as moderator ... imagine if\n> > we had the lists open? :)\n> I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> people, and have amavisd-milter running to prevent them from even\n> getting in the door.\n>\n> Mayhaps pgsql.org should do the same?\n>\n>\n> >\n> >\n> > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> >\n> > > Hi guys,\n> > >\n> > > I seem to be getting virus emails that pretend to be one of your guys. eg.\n> > > I get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\n> > > should perhaps scan their computer? :)\n> > >\n> > > Chris\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n>\n\n",
"msg_date": "Tue, 30 Jul 2002 11:47:28 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "\nfigured it out ... uvscan isn't looking where ports installed the newer\n.dat files ... fixed that and it finds 63 virii infected files instead of\njust 5 :)\n\none step closer ...\n\nOn Tue, 30 Jul 2002, Marc G. Fournier wrote:\n\n>\n>\n> Okay, am playing with this on one of my 'none-critical' servers right now\n> ... tried to use uvscan from teh command line, and it didn't appear to\n> pick up any of the Klez stuff, even though I know I have a few in my\n> mailbox ...\n>\n> What options should I be running as? I'm using the following:\n>\n> uvscan --analyse --recursive --mime --summary --program /var/spool/mail\n>\n> On 28 Jul 2002, Larry Rosenman wrote:\n>\n> > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > >\n> > > God, I go through 200+ of those almost daily as moderator ... imagine if\n> > > we had the lists open? :)\n> > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > people, and have amavisd-milter running to prevent them from even\n> > getting in the door.\n> >\n> > Mayhaps pgsql.org should do the same?\n> >\n> >\n> > >\n> > >\n> > > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> > >\n> > > > Hi guys,\n> > > >\n> > > > I seem to be getting virus emails that pretend to be one of your guys. eg.\n> > > > I get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\n> > > > should perhaps scan their computer? 
:)\n> > > >\n> > > > Chris\n> > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> > --\n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Tue, 30 Jul 2002 13:26:46 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "\nOkay, this is sweet ... but can someone tell me where I 'Buy' a copy of\nuvscan? I've searched McAfee, but can't seem to find it in their eStore\nanywhere ...\n\n\nOn 28 Jul 2002, Larry Rosenman wrote:\n\n> On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> >\n> > God, I go through 200+ of those almost daily as moderator ... imagine if\n> > we had the lists open? :)\n> I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> people, and have amavisd-milter running to prevent them from even\n> getting in the door.\n>\n> Mayhaps pgsql.org should do the same?\n>\n>\n> >\n> >\n> > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> >\n> > > Hi guys,\n> > >\n> > > I seem to be getting virus emails that pretend to be one of your guys. eg.\n> > > I get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\n> > > should perhaps scan their computer? :)\n> > >\n> > > Chris\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n>\n\n",
"msg_date": "Tue, 30 Jul 2002 15:20:11 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "Try their corporate sales droids....\n\nIf you can't find one, I'll ask my contract client....\n\nLER\n\nOn Tue, 2002-07-30 at 13:20, Marc G. Fournier wrote:\n> \n> Okay, this is sweet ... but can someone tell me where I 'Buy' a copy of\n> uvscan? I've searched McAfee, but can't seem to find it in their eStore\n> anywhere ...\n> \n> \n> On 28 Jul 2002, Larry Rosenman wrote:\n> \n> > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > >\n> > > God, I go through 200+ of those almost daily as moderator ... imagine if\n> > > we had the lists open? :)\n> > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > people, and have amavisd-milter running to prevent them from even\n> > getting in the door.\n> >\n> > Mayhaps pgsql.org should do the same?\n> >\n> >\n> > >\n> > >\n> > > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> > >\n> > > > Hi guys,\n> > > >\n> > > > I seem to be getting virus emails that pretend to be one of your guys. eg.\n> > > > I get them from T.Ishii and N.Conway, etc. Anyone out there on the list who\n> > > > should perhaps scan their computer? :)\n> > > >\n> > > > Chris\n> > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> > --\n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >\n> >\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "30 Jul 2002 13:25:37 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "Hmmm - I'm pretty sure that uvscan won't automatically extract out MIME\nattachements. You need to scan normal files.\n\nWe use inflex on our mail servers to extract all our emails before\nscanning...\n\nChris\n\n> -----Original Message-----\n> From: Marc G. Fournier [mailto:scrappy@hub.org]\n> Sent: Tuesday, 30 July 2002 10:47 PM\n> To: Larry Rosenman\n> Cc: Christopher Kings-Lynne; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Virus Emails\n>\n>\n>\n>\n> Okay, am playing with this on one of my 'none-critical' servers right now\n> ... tried to use uvscan from teh command line, and it didn't appear to\n> pick up any of the Klez stuff, even though I know I have a few in my\n> mailbox ...\n>\n> What options should I be running as? I'm using the following:\n>\n> uvscan --analyse --recursive --mime --summary --program /var/spool/mail\n>\n> On 28 Jul 2002, Larry Rosenman wrote:\n>\n> > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > >\n> > > God, I go through 200+ of those almost daily as moderator ...\n> imagine if\n> > > we had the lists open? :)\n> > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > people, and have amavisd-milter running to prevent them from even\n> > getting in the door.\n> >\n> > Mayhaps pgsql.org should do the same?\n> >\n> >\n> > >\n> > >\n> > > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> > >\n> > > > Hi guys,\n> > > >\n> > > > I seem to be getting virus emails that pretend to be one of\n> your guys. eg.\n> > > > I get them from T.Ishii and N.Conway, etc. Anyone out\n> there on the list who\n> > > > should perhaps scan their computer? :)\n> > > >\n> > > > Chris\n> > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> > --\n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >\n> >\n>\n\n",
"msg_date": "Wed, 31 Jul 2002 10:13:00 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "I would also like to know this! They don't mention it anywhere on their\nsite!\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> Sent: Wednesday, 31 July 2002 2:20 AM\n> To: Larry Rosenman\n> Cc: Christopher Kings-Lynne; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Virus Emails\n>\n>\n>\n> Okay, this is sweet ... but can someone tell me where I 'Buy' a copy of\n> uvscan? I've searched McAfee, but can't seem to find it in their eStore\n> anywhere ...\n>\n>\n> On 28 Jul 2002, Larry Rosenman wrote:\n>\n> > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > >\n> > > God, I go through 200+ of those almost daily as moderator ...\n> imagine if\n> > > we had the lists open? :)\n> > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > people, and have amavisd-milter running to prevent them from even\n> > getting in the door.\n> >\n> > Mayhaps pgsql.org should do the same?\n> >\n> >\n> > >\n> > >\n> > > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> > >\n> > > > Hi guys,\n> > > >\n> > > > I seem to be getting virus emails that pretend to be one of\n> your guys. eg.\n> > > > I get them from T.Ishii and N.Conway, etc. Anyone out\n> there on the list who\n> > > > should perhaps scan their computer? :)\n> > > >\n> > > > Chris\n> > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> > --\n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Wed, 31 Jul 2002 10:24:16 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "\nthe only thing I've found so far (I've email'd their sales guy, but\nhaven't heard back yet) on their site is a 'calculator' that depends on\nnumber of users ... for the University I work out, I believe the cost came\nout to something like $99kUS, and I went low on my figures for # of users\n:)\n\nThank god there is more then just McAfee out there .. unless those #'s are\nwrong, am definitely going to be looking at alternatives ...\n\nOn Wed, 31 Jul 2002, Christopher Kings-Lynne wrote:\n\n> I would also like to know this! They don't mention it anywhere on their\n> site!\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> > Sent: Wednesday, 31 July 2002 2:20 AM\n> > To: Larry Rosenman\n> > Cc: Christopher Kings-Lynne; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] Virus Emails\n> >\n> >\n> >\n> > Okay, this is sweet ... but can someone tell me where I 'Buy' a copy of\n> > uvscan? I've searched McAfee, but can't seem to find it in their eStore\n> > anywhere ...\n> >\n> >\n> > On 28 Jul 2002, Larry Rosenman wrote:\n> >\n> > > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > > >\n> > > > God, I go through 200+ of those almost daily as moderator ...\n> > imagine if\n> > > > we had the lists open? :)\n> > > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > > people, and have amavisd-milter running to prevent them from even\n> > > getting in the door.\n> > >\n> > > Mayhaps pgsql.org should do the same?\n> > >\n> > >\n> > > >\n> > > >\n> > > > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> > > >\n> > > > > Hi guys,\n> > > > >\n> > > > > I seem to be getting virus emails that pretend to be one of\n> > your guys. eg.\n> > > > > I get them from T.Ishii and N.Conway, etc. Anyone out\n> > there on the list who\n> > > > > should perhaps scan their computer? :)\n> > > > >\n> > > > > Chris\n> > > > >\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > >\n> > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > > message can get through to the mailing list cleanly\n> > > >\n> > > --\n> > > Larry Rosenman http://www.lerctr.org/~ler\n> > > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > >\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n\n",
"msg_date": "Wed, 31 Jul 2002 02:00:56 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "I'll ask my contract what they paid....\n\n\n\nOn Wed, 2002-07-31 at 00:00, Marc G. Fournier wrote:\n> \n> the only thing I've found so far (I've email'd their sales guy, but\n> haven't heard back yet) on their site is a 'calculator' that depends on\n> number of users ... for the University I work out, I believe the cost came\n> out to something like $99kUS, and I went low on my figures for # of users\n> :)\n> \n> Thank god there is more then just McAfee out there .. unless those #'s are\n> wrong, am definitely going to be looking at alternatives ...\n> \n> On Wed, 31 Jul 2002, Christopher Kings-Lynne wrote:\n> \n> > I would also like to know this! They don't mention it anywhere on their\n> > site!\n> >\n> > Chris\n> >\n> > > -----Original Message-----\n> > > From: pgsql-hackers-owner@postgresql.org\n> > > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> > > Sent: Wednesday, 31 July 2002 2:20 AM\n> > > To: Larry Rosenman\n> > > Cc: Christopher Kings-Lynne; pgsql-hackers@postgresql.org\n> > > Subject: Re: [HACKERS] Virus Emails\n> > >\n> > >\n> > >\n> > > Okay, this is sweet ... but can someone tell me where I 'Buy' a copy of\n> > > uvscan? I've searched McAfee, but can't seem to find it in their eStore\n> > > anywhere ...\n> > >\n> > >\n> > > On 28 Jul 2002, Larry Rosenman wrote:\n> > >\n> > > > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > > > >\n> > > > > God, I go through 200+ of those almost daily as moderator ...\n> > > imagine if\n> > > > > we had the lists open? :)\n> > > > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > > > people, and have amavisd-milter running to prevent them from even\n> > > > getting in the door.\n> > > >\n> > > > Mayhaps pgsql.org should do the same?\n> > > >\n> > > >\n> > > > >\n> > > > >\n> > > > > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> > > > >\n> > > > > > Hi guys,\n> > > > > >\n> > > > > > I seem to be getting virus emails that pretend to be one of\n> > > your guys. eg.\n> > > > > > I get them from T.Ishii and N.Conway, etc. Anyone out\n> > > there on the list who\n> > > > > > should perhaps scan their computer? :)\n> > > > > >\n> > > > > > Chris\n> > > > > >\n> > > > > >\n> > > > > >\n> > > > > > ---------------------------(end of\n> > > broadcast)---------------------------\n> > > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > > >\n> > > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > > >\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > > broadcast)---------------------------\n> > > > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > > > message can get through to the mailing list cleanly\n> > > > >\n> > > > --\n> > > > Larry Rosenman http://www.lerctr.org/~ler\n> > > > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > > > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > > >\n> > > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> >\n> >\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "31 Jul 2002 00:05:07 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Christopher Kings-Lynne wrote:\n\n> I would also like to know this! They don't mention it anywhere on their\n> site!\n\nThe FreeBSD command line version comes on the CD along with the windoze\nversions.\n\n\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marc G. Fournier\n> > Sent: Wednesday, 31 July 2002 2:20 AM\n> > To: Larry Rosenman\n> > Cc: Christopher Kings-Lynne; pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] Virus Emails\n> >\n> >\n> >\n> > Okay, this is sweet ... but can someone tell me where I 'Buy' a copy of\n> > uvscan? I've searched McAfee, but can't seem to find it in their eStore\n> > anywhere ...\n> >\n> >\n> > On 28 Jul 2002, Larry Rosenman wrote:\n> >\n> > > On Sun, 2002-07-28 at 20:10, Marc G. Fournier wrote:\n> > > >\n> > > > God, I go through 200+ of those almost daily as moderator ...\n> > imagine if\n> > > > we had the lists open? :)\n> > > I picked up a copy of McAfee's vscan for FreeBSD from one of my contract\n> > > people, and have amavisd-milter running to prevent them from even\n> > > getting in the door.\n> > >\n> > > Mayhaps pgsql.org should do the same?\n> > >\n> > >\n> > > >\n> > > >\n> > > > On Sat, 27 Jul 2002, Christopher Kings-Lynne wrote:\n> > > >\n> > > > > Hi guys,\n> > > > >\n> > > > > I seem to be getting virus emails that pretend to be one of\n> > your guys. eg.\n> > > > > I get them from T.Ishii and N.Conway, etc. Anyone out\n> > there on the list who\n> > > > > should perhaps scan their computer? :)\n> > > > >\n> > > > > Chris\n> > > > >\n> > > > >\n> > > > >\n> > > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > > TIP 5: Have you checked our extensive FAQ?\n> > > > >\n> > > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > > >\n> > > >\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > > message can get through to the mailing list cleanly\n> > > >\n> > > --\n> > > Larry Rosenman http://www.lerctr.org/~ler\n> > > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > >\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 31 Jul 2002 06:06:27 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Virus Emails"
}
] |
[
{
"msg_contents": "Has anyone here thought about using the spread libraries for WAL\nreplication amongst mutliple hosts? With this library I think it'd be\npossible to have a multi-master replication system...\n\nhttp://www.spread.org/\n\n I'm not familiar enough with the guts of postgres to be able to\nimpliment this (yet), but thought that it might be something for someone\nelse to look into. At the very least, check out the libs, they're very\nimpressive and I've had good luck with them in the past. If push comes \nto shove, in a few months I may write a few ruby+spread utils to do \nthis, but we'll see... -sc\n\n-- \nSean Chittenden",
"msg_date": "Fri, 20 Jul 2001 20:32:43 -0700",
"msg_from": "Sean Chittenden <sean-pgsql-hackers@chittenden.org>",
"msg_from_op": true,
"msg_subject": "IDEA: Multi-master replication possible through spread (or even\n\tmaster-slave)..."
},
{
"msg_contents": "Sean Chittenden wrote:\n\n> Has anyone here thought about using the spread libraries for WAL\n> replication amongst mutliple hosts? With this library I think it'd be\n> possible to have a multi-master replication system..\n\nYes, there is some work being done to use Spread as the group \ncommunication system\nfor Postgres-R, but we are just getting started with this software. \nUsing a group\ncommunication system to establish total order messages is one of the \nbasic principles for\nsynchronous multi-master replication with Postgres-R. Currently \nEnsemble (form Cornell\nUniversity) is used, but Spread looks to be more robust and it appears \nto be supported\non most if not all of the PostgreSQL supported platforms. \n\nIt's very cool to see positive testimony for Spread, and I hope I will \nfeel the same way\nas I become more familiar with it.\n\nDarren\n\n> \n\n",
"msg_date": "Sat, 21 Jul 2001 01:01:58 -0400",
"msg_from": "Darren Johnson <djohnson@greatbridge.org>",
"msg_from_op": false,
"msg_subject": "Re: IDEA: Multi-master replication possible through spread (or even\n\tmaster-slave)..."
},
{
"msg_contents": "Howdy. Darren, I'd reply in person, but there are issues with\nyour mail account. ;~) At anyrate, is there a mailing list that the\nPostgres-R development is happening on so that I could drop in and\neither listen/contribute? Thanks. -sc\n\n<djohnson@greatbridge.org>:\n63.136.234.38 does not like recipient.\nRemote host said: 550 <djohnson@greatbridge.org>... User unknown\nGiving up on 63.136.234.38.\n \n--- Below this line is a copy of the message.\n \nReturn-Path: <sean@mailhost.tgd.net>\nReceived: (qmail 9478 invoked by uid 1001); 21 Jul 2001 05:09:21 -0000\nDate: Fri, 20 Jul 2001 22:09:21 -0700\nFrom: Sean Chittenden <sean@chittenden.org>\nTo: Darren Johnson <djohnson@greatbridge.org>\nSubject: Re: [HACKERS] IDEA: Multi-master replication possible through \nspread \n+(or even master-slave)...\nMessage-ID: <20010720220921.L5160@rand.tgd.net>\nReferences: <20010720203243.J5160@rand.tgd.net> \n+<3B590CC6.5040506@greatbridge.org>\nMime-Version: 1.0\nContent-Type: multipart/signed; micalg=pgp-sha1;\n protocol=\"application/pgp-signature\"; \nboundary=\"1ppIqr1kl39GnwQx\"\nContent-Disposition: inline\nIn-Reply-To: <3B590CC6.5040506@greatbridge.org>; from \n\"djohnson@greatbridge.org\"\n+on Sat, Jul 21, 2001 at = 01:01:58AM\n\n\nOn Sat, Jul 21, 2001 at 01:01:58AM -0400, Darren Johnson wrote:\n> Delivered-To: chittenden.org-sean-pgsql-hackers@chittenden.org\n> Date: Sat, 21 Jul 2001 01:01:58 -0400\n> From: Darren Johnson <djohnson@greatbridge.org>\n> User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; m18) Gecko/20010131 Netscape6/6.01\n> X-Accept-Language: en\n> To: Sean Chittenden <sean-pgsql-hackers@chittenden.org>\n> CC: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] IDEA: Multi-master replication possible through spread (or even master-slave)...\n> \n> Sean Chittenden wrote:\n> \n> > Has anyone here thought about using the spread libraries for WAL\n> > replication amongst mutliple hosts? With this library I think it'd be\n> > possible to have a multi-master replication system..\n> \n> Yes, there is some work being done to use Spread as the group \n> communication system\n> for Postgres-R, but we are just getting started with this software. \n> Using a group\n> communication system to establish total order messages is one of the \n> basic principles for\n> synchronous multi-master replication with Postgres-R. Currently \n> Ensemble (form Cornell\n> University) is used, but Spread looks to be more robust and it appears \n> to be supported\n> on most if not all of the PostgreSQL supported platforms. \n> \n> It's very cool to see positive testimony for Spread, and I hope I will \n> feel the same way\n> as I become more familiar with it.\n\n-- \nSean Chittenden",
"msg_date": "Fri, 20 Jul 2001 22:12:04 -0700",
"msg_from": "Sean Chittenden <sean-pgsql-hackers@chittenden.org>",
"msg_from_op": true,
"msg_subject": "Re: IDEA: Multi-master replication possible through spread (or even\n\tmaster-slave)..."
},
{
"msg_contents": "Sure. The mailing list is\n\nhttp://www.greatbridge.org/mailman/listinfo/pgreplication-general\n\nIt's not only for Postgres-R, but any PostgreSQL\nreplication ideas, discussions, or projects.\nFeel free to listen or contribute.\n\nDarren\n\nBTW: My apologies for the email issues. Should be fixed now.\n\nSean Chittenden wrote:\n\n> \tHowdy. Darren, I'd reply in person, but there are issues with\n> your mail account. ;~) At anyrate, is there a mailing list that the\n> Postgres-R development is happening on so that I could drop in and\n> either listen/contribute? Thanks. -sc\n\n",
"msg_date": "Sat, 21 Jul 2001 01:58:26 -0400",
"msg_from": "Darren Johnson <djohnson@greatbridge.com>",
"msg_from_op": false,
"msg_subject": "Re: IDEA: Multi-master replication possible through spread (or even\n\tmaster-slave)..."
}
] |
[
{
"msg_contents": "CVSROOT:\t/home/projects/pgsql/cvsroot\nModule name:\tpgsql\nChanges by:\tmomjian@hub.org\t01/07/21 00:32:42\n\nModified files:\n\tsrc/interfaces/libpq: fe-connect.c win32.h \n\nLog message:\n\tI downloaded new source for lib (only few hours old !!!), and made\n\tchanges on this new source to make non-blocking connection work. I\n\ttested it, and PQSendQuery and PQGetResult are working fine.\n\t\n\tIn win32.h I added one line:\n\t#define snprintf _snprintf\n\t\n\tDarko Prenosil\n\n",
"msg_date": "Sat, 21 Jul 2001 00:32:42 -0400 (EDT)",
"msg_from": "Bruce Momjian - CVS <momjian@hub.org>",
"msg_from_op": true,
"msg_subject": "pgsql/src/interfaces/libpq fe-connect.c win32.h"
},
{
"msg_contents": "Upon review, I don't think these patches are very good at all.\n#defining errno as WSAGetLastError() is a fairly blunt instrument,\nand it breaks all the places that do actually need to use errno,\nsuch as PQoidValue, lo_import, lo_export. I'm also concerned that\nPQrequestCancel may need to save/restore both errno and\nWSAGetLastError() in order to be safe for use in a signal handler.\n\nIs errno a plain variable on WIN32, or is it a macro? If the former,\nwe could hack around this problem by doing\n\n\t#if WIN32\n\t#undef errno\n\t#endif\n\n\t...\n\n\t#if WIN32\n\t#define errno WSAGetLastError()\n\t#endif\n\naround the routines that need to access the real errno. While ugly,\nthis probably beats the alternative of ifdef'ing all the places that\ndo need to access the WSA error code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Jul 2001 16:45:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "WIN32 errno patch"
},
{
"msg_contents": "> Upon review, I don't think these patches are very good at all.\n> #defining errno as WSAGetLastError() is a fairly blunt instrument,\n> and it breaks all the places that do actually need to use errno,\n> such as PQoidValue, lo_import, lo_export. I'm also concerned that\n> PQrequestCancel may need to save/restore both errno and\n> WSAGetLastError() in order to be safe for use in a signal handler.\n> \n> Is errno a plain variable on WIN32, or is it a macro? If the former,\n> we could hack around this problem by doing\n> \n> \t#if WIN32\n> \t#undef errno\n> \t#endif\n> \n> \t...\n> \n> \t#if WIN32\n> \t#define errno WSAGetLastError()\n> \t#endif\n> \n> around the routines that need to access the real errno. While ugly,\n> this probably beats the alternative of ifdef'ing all the places that\n> do need to access the WSA error code.\n\nAt this point, I am just happy we have this WIN32 errno thing working. \nWe can now have people improve upon the implementation.\n\nI see the code in win32 you are complaining about:\n\t\n\t/*\n\t * assumes that errno is used for sockets only\n\t *\n\t */\n\t\n\t#undef errno\n\t#undef EINTR\n\t#undef EAGAIN /* doesn't apply on sockets */\n\t\n\t#define errno WSAGetLastError()\n\nWhat we really need is for someone with Win32 access to figure out which\nerrno tests are WSAGetLastError() calls and which are real errno calls. \n\nMy guess is that we should have two errno's. One the normal errno that\nis the same on Win32 and Unix and a sockerrno that is conditionally\ndefined:\n\n\t#ifndef WIN32\n\t#define sockerrno errno\n\t#else\n\t#define sockerrno WSAGetLastError()\n\nHow does that work for folks? Can someone do the legwork?\n\nSee a later message on patches that reports problems with multibyte and\nWin32 in libpq.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 21 Jul 2001 17:35:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> At this point, I am just happy we have this WIN32 errno thing working. \n\nMy point is that it isn't \"working\", it's broken.\n\n> My guess is that we should have two errno's. One the normal errno that\n> is the same on Win32 and Unix and a sockerrno that is conditionally\n> defined:\n\nI don't really want to uglify the code by replacing most of the \"errno\"\nuses with \"sockerrno\". People know what errno is, they don't know what\n\"sockerrno\" is, so we'd be reducing the readability of the code in order\nto cater to Windows cultural imperialism (usual M$ philosophy: embrace,\nextend, and make sure Windows-compatible code can't run anywhere else).\n\nSince there are only about three routines in libpq that need access to\n\"regular\" errno, it seems less invasive to #define errno for the rest\nof them, and do something special in just these places.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Jul 2001 18:09:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch "
},
{
"msg_contents": "\nAny idea where we are on this?\n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > At this point, I am just happy we have this WIN32 errno thing working. \n> \n> My point is that it isn't \"working\", it's broken.\n> \n> > My guess is that we should have two errno's. One the normal errno that\n> > is the same on Win32 and Unix and a sockerrno that is conditionally\n> > defined:\n> \n> I don't really want to uglify the code by replacing most of the \"errno\"\n> uses with \"sockerrno\". People know what errno is, they don't know what\n> \"sockerrno\" is, so we'd be reducing the readability of the code in order\n> to cater to Windows cultural imperialism (usual M$ philosophy: embrace,\n> extend, and make sure Windows-compatible code can't run anywhere else).\n> \n> Since there are only about three routines in libpq that need access to\n> \"regular\" errno, it seems less invasive to #define errno for the rest\n> of them, and do something special in just these places.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 11:47:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Any idea where we are on this?\n\nI'm planning to #undef errno around the three routines that need to get\nat plain errno. Not done yet though...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Aug 2001 12:11:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch "
},
{
"msg_contents": "One can't just #undef errno on windows because it is defined as follows:\n\n\n #if defined(_MT) || defined(_DLL)\n extern int * __cdecl _errno(void);\n #define errno (*_errno())\n #else /* ndef _MT && ndef _DLL */\n extern int errno;\n #endif /* _MT || _DLL */\n\nSo when building a dll or a multithreaded application it is not a plain \nvariable but call to the _errno() and after #undef errno one will lose \nerrno completely. For the same reason it is impossible to use something\nlike 'errno=0;'.\n\nMikhail Terekhov\n\n\nTom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Any idea where we are on this?\n> \n> I'm planning to #undef errno around the three routines that need to get\n> at plain errno. Not done yet though...\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n",
"msg_date": "Thu, 09 Aug 2001 15:08:12 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: WIN32 errno patch"
},
{
"msg_contents": "\nTom has applied these changes to the CVS snapshot. Can you try it and\nlet us know.\n\n> One can't just #undef errno on windows because it is defined as follows:\n> \n> \n> #if defined(_MT) || defined(_DLL)\n> extern int * __cdecl _errno(void);\n> #define errno (*_errno())\n> #else /* ndef _MT && ndef _DLL */\n> extern int errno;\n> #endif /* _MT || _DLL */\n> \n> So when building a dll or a multithreaded application it is not a plain \n> variable but call to the _errno() and after #undef errno one will lose \n> errno completely. For the same reason it is impossible to use something\n> like 'errno=0;'.\n> \n> Mikhail Terekhov\n> \n> \n> Tom Lane wrote:\n> > \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Any idea where we are on this?\n> > \n> > I'm planning to #undef errno around the three routines that need to get\n> > at plain errno. Not done yet though...\n> > \n> > regards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 9 Aug 2001 15:40:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: WIN32 errno patch"
},
{
"msg_contents": "Mikhail Terekhov <terekhov@emc.com> writes:\n> One can't just #undef errno on windows because it is defined as\n> follows:\n\nI was wondering if Windows might play any games with errno. However,\nwe've had at least one instance of \"errno = 0;\" in the libpq sources\nsince 7.0 or before, and no one has complained that it doesn't build\non Windows ... if errno is defined as a function call, that should\nyield a compile error, no?\n\nAlso, the patch is already in place, and I have a report that it works.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Aug 2001 16:26:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: WIN32 errno patch "
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Mikhail Terekhov <terekhov@emc.com> writes:\n> > One can't just #undef errno on windows because it is defined as\n> > follows:\n> \n> I was wondering if Windows might play any games with errno. However,\n> we've had at least one instance of \"errno = 0;\" in the libpq sources\n> since 7.0 or before, and no one has complained that it doesn't build\n> on Windows ... if errno is defined as a function call, that should\n> yield a compile error, no?\n\nANSI C requires that errno be a modifiable l-value. I don't know of\nany system which breaks that rule. In other words `errno = 0' is\nalways OK on any system, assuming you have done `#include <errno.h>'.\nThe statement may involve a function call, as in Mikhail's example:\n extern int * __cdecl _errno(void);\n #define errno (*_errno())\n\nI took a quick look at the current sources, and I have to admit that\nthe `#undef errno' looks very dubious to me. I see what the code is\ntrying to do: win32.h #defines errno to simplify matters, but the\nsimplification doesn't really work, so you have to #undef errno in a\ncouple of places. But this procedure can not work when errno is a\nmacro already, as it is when compiling multi-threaded code on Windows.\nYou wind up with the wrong value of errno after doing the #undef.\n\nSo I think the current code is broken. However, while I've done\nWindows development in the past, I don't have a Windows system now,\nand I haven't actually tested anything.\n\nI think the clean way to handle this is something along the lines of\nwhat the CVS client does. 
On Unix, do this:\n #define SOCK_ERRNO errno\n #define SOCK_STRERROR strerror\nOn Windows, do this:\n #define SOCK_ERRNO (WSAGetLastError ())\n #define SOCK_STRERROR sock_strerror\n(Then you have to write sock_strerror.)\n\nThen change any reference to errno after a socket call to use\nSOCK_ERRNO instead.\n\nNote that the current Postgres code appears broken in another way, as\nit passes WSAGetLastError() to strerror(), which doesn't work.\nHowever, I again have not tested anything here.\n\nIan\n",
"msg_date": "09 Aug 2001 13:58:41 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch"
},
{
"msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> I think the clean way to handle this is something along the lines of\n> what the CVS client does. On Unix, do this:\n> #define SOCK_ERRNO errno\n> #define SOCK_STRERROR strerror\n> On Windows, do this:\n> #define SOCK_ERRNO (WSAGetLastError ())\n> #define SOCK_STRERROR sock_strerror\n\nI've been trying to avoid uglifying the code like that, but perhaps\nwe have no choice :-(.\n\n> (Then you have to write sock_strerror.)\n\nSurely Windows provides a suitable function?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Aug 2001 17:18:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch "
},
{
"msg_contents": "From: \"Ian Lance Taylor\" <ian@airs.com>\n(snip)\n> #define SOCK_STRERROR sock_strerror\n> (Then you have to write sock_strerror.)\n> \n(snip)\n\nFormatMessage(...) is good for strerror(errno) emulation i think.\nbah.\n\nMagnus\n\n",
"msg_date": "Thu, 9 Aug 2001 23:35:37 +0200",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: WIN32 errno patch"
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> Mikhail Terekhov <terekhov@emc.com> writes:\n> > One can't just #undef errno on windows because it is defined as\n> > follows:\n> \n> I was wondering if Windows might play any games with errno. However,\n> we've had at least one instance of \"errno = 0;\" in the libpq sources\n> since 7.0 or before, and no one has complained that it doesn't build\n> on Windows ... if errno is defined as a function call, that should\n> yield a compile error, no?\n> \n\nIt complains but only if you build a dll or multithreaded app.\n\n> Also, the patch is already in place, and I have a report that it works.\n> \n\nIt works only when compiling static apps I think.\n\n\nRegards,\nMikhail Terekhov\n",
"msg_date": "Thu, 09 Aug 2001 18:34:23 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: WIN32 errno patch"
},
{
"msg_contents": "\n\nBruce Momjian wrote:\n> \n> Tom has applied these changes to the CVS snapshot. Can you try it and\n> let us know.\n> \n\nIt does not even compile:\n\nDeleting intermediate files and output files for project 'libpqdll_current - Win32 Debug'.\n--------------------Configuration: libpqdll_current - Win32 Debug--------------------\nCompiling resources...\nCompiling...\nfe-auth.c\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\win32.h(34) : warning C4005: 'errno' : macro redefinition\n c:\\program files\\microsoft visual studio\\vc98\\include\\stdlib.h(176) : see previous definition of 'errno'\nfe-connect.c\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\win32.h(34) : warning C4005: 'errno' : macro redefinition\n c:\\program files\\microsoft visual studio\\vc98\\include\\stdlib.h(176) : see previous definition of 'errno'\nfe-exec.c\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\win32.h(34) : warning C4005: 'errno' : macro redefinition\n c:\\program files\\microsoft visual studio\\vc98\\include\\stdlib.h(176) : see previous definition of 'errno'\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\fe-exec.c(2058) : error C2065: 'errno' : undeclared identifier\nfe-lobj.c\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\win32.h(34) : warning C4005: 'errno' : macro redefinition\n c:\\program files\\microsoft visual studio\\vc98\\include\\stdlib.h(176) : see previous definition of 'errno'\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\fe-lobj.c(405) : error C2065: 'errno' : undeclared identifier\nfe-misc.c\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\win32.h(34) : warning C4005: 'errno' : macro redefinition\n c:\\program files\\microsoft visual studio\\vc98\\include\\stdlib.h(176) : see previous definition of 'errno'\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\fe-misc.c(128) : warning C4018: '>' : signed/unsigned 
mismatch\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\fe-misc.c(219) : warning C4018: '>' : signed/unsigned mismatch\nfe-print.c\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\win32.h(34) : warning C4005: 'errno' : macro redefinition\n c:\\program files\\microsoft visual studio\\vc98\\include\\stdlib.h(176) : see previous definition of 'errno'\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\fe-print.c(304) : warning C4090: 'function' : different 'const' qualifiers\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\fe-print.c(304) : warning C4022: 'free' : pointer mismatch for actual parameter 1\nlibpqdll.c\npqexpbuffer.c\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\win32.h(34) : warning C4005: 'errno' : macro redefinition\n c:\\program files\\microsoft visual studio\\vc98\\include\\stdlib.h(176) : see previous definition of 'errno'\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\pqexpbuffer.c(195) : warning C4018: '<' : signed/unsigned mismatch\nc:\\home\\postgres\\current\\pgsql\\src\\interfaces\\libpq\\pqexpbuffer.c(244) : warning C4018: '<' : signed/unsigned mismatch\npqsignal.c\nError executing cl.exe.\n\nlibpq.dll - 2 error(s), 13 warning(s)\n\nRegards,\nMikhail Terekhov\n",
"msg_date": "Thu, 09 Aug 2001 18:42:23 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: WIN32 errno patch"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> > (Then you have to write sock_strerror.)\n> \n> Surely Windows provides a suitable function?\n\nYes, but it doesn't have the same calling convention.\n\nIan\n",
"msg_date": "09 Aug 2001 16:06:22 -0700",
"msg_from": "Ian Lance Taylor <ian@airs.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch"
},
{
"msg_contents": "\n\"Mikhail Terekhov\" <terekhov@emc.com> wrote in message\nnews:3B7311CF.4D1CC8B2@emc.com...\n>\n>\n> Bruce Momjian wrote:\n> >\n> > Tom has applied these changes to the CVS snapshot. Can you try it and\n> > let us know.\n> >\n>\n> It does not even compile:\nSame behaviour here. The last week's snapshots didn't compile on Windows, as\nI reported before...\n\nBest Regards,\nSteve Howe\n\n\n",
"msg_date": "Sat, 11 Aug 2001 19:36:49 -0300",
"msg_from": "\"Steve Howe\" <howe@carcass.dhs.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: WIN32 errno patch"
},
{
"msg_contents": ">> It does not even compile:\n> Same behaviour here.\n\nWhen someone sends me a Windoze implementation of the proposed\nSOCK_STRERROR() macro, I'll see about fixing it. Till then\nI can't do much.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Aug 2001 20:14:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: WIN32 errno patch "
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> When someone sends me a Windoze implementation of the proposed\n> SOCK_STRERROR() macro, I'll see about fixing it. Till then\n> I can't do much.\n> \n> regards, tom lane\n> \n\nCould you please review the following patch for libpq.\nI've implemented the SOCK_ERRNO macro only because\nboth strerror and FormatMessage functions know nothing\nabout sockets errors.\nI've compiled the current sources with this patch applied\non windows and Solaris without problems and tested it through\ntcl interface only. It seems to work correctly - I could insert\nand select large strings (>10k).\n\nRegards\nMikhail Terekhov\n\n\nIndex: libpq/fe-connect.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.172\ndiff -C3 -r1.172 fe-connect.c\n*** libpq/fe-connect.c\t2001/08/03 22:11:39\t1.172\n--- libpq/fe-connect.c\t2001/08/15 13:58:32\n***************\n*** 711,717 ****\n \t{\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not set socket to non-blocking mode: %s\\n\"),\n! \t\t\t\t\t\t strerror(errno));\n \t\treturn 0;\n \t}\n \n--- 711,717 ----\n \t{\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not set socket to non-blocking mode: %s\\n\"),\n! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\treturn 0;\n \t}\n \n***************\n*** 735,741 ****\n \t{\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not set socket to TCP no delay mode: %s\\n\"),\n! \t\t\t\t\t\t strerror(errno));\n \t\treturn 0;\n \t}\n \n--- 735,741 ----\n \t{\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not set socket to TCP no delay mode: %s\\n\"),\n! 
\t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\treturn 0;\n \t}\n \n***************\n*** 890,896 ****\n \t{\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not create socket: %s\\n\"),\n! \t\t\t\t\t\t strerror(errno));\n \t\tgoto connect_errReturn;\n \t}\n \n--- 890,896 ----\n \t{\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not create socket: %s\\n\"),\n! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\tgoto connect_errReturn;\n \t}\n \n***************\n*** 922,928 ****\n \t */\n \tif (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)\n \t{\n! \t\tif (errno == EINPROGRESS || errno == EWOULDBLOCK || errno == 0)\n \t\t{\n \t\t\t/*\n \t\t\t * This is fine - we're in non-blocking mode, and the\n--- 922,928 ----\n \t */\n \tif (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)\n \t{\n! \t\tif (SOCK_ERRNO == EINPROGRESS || SOCK_ERRNO == EWOULDBLOCK || errno == 0)\n \t\t{\n \t\t\t/*\n \t\t\t * This is fine - we're in non-blocking mode, and the\n***************\n*** 933,939 ****\n \t\telse\n \t\t{\n \t\t\t/* Something's gone wrong */\n! \t\t\tconnectFailureMessage(conn, errno);\n \t\t\tgoto connect_errReturn;\n \t\t}\n \t}\n--- 933,939 ----\n \t\telse\n \t\t{\n \t\t\t/* Something's gone wrong */\n! \t\t\tconnectFailureMessage(conn, SOCK_ERRNO);\n \t\t\tgoto connect_errReturn;\n \t\t}\n \t}\n***************\n*** 1212,1218 ****\n \t\t\t\t{\n \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not get socket error status: %s\\n\"),\n! \t\t\t\t\t\t\t\t\t strerror(errno));\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \t\t\t\telse if (optval != 0)\n--- 1212,1218 ----\n \t\t\t\t{\n \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not get socket error status: %s\\n\"),\n! 
\t\t\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \t\t\t\telse if (optval != 0)\n***************\n*** 1232,1238 ****\n \t\t\t\t{\n \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not get client address from socket: %s\\n\"),\n! \t\t\t\t\t\t\t\t\t strerror(errno));\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \n--- 1232,1238 ----\n \t\t\t\t{\n \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not get client address from socket: %s\\n\"),\n! \t\t\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \n***************\n*** 1271,1277 ****\n \t\t\t\t{\n \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not send startup packet: %s\\n\"),\n! \t\t\t\t\t\t\t\t\t strerror(errno));\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \n--- 1271,1277 ----\n \t\t\t\t{\n \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not send startup packet: %s\\n\"),\n! \t\t\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\t\t\t\tgoto error_return;\n \t\t\t\t}\n \n***************\n*** 2101,2107 ****\n int\n PQrequestCancel(PGconn *conn)\n {\n! \tint\t\t\tsave_errno = errno;\n \tint\t\t\ttmpsock = -1;\n \tstruct\n \t{\n--- 2101,2107 ----\n int\n PQrequestCancel(PGconn *conn)\n {\n! \tint\t\t\tsave_errno = SOCK_ERRNO;\n \tint\t\t\ttmpsock = -1;\n \tstruct\n \t{\n***************\n*** 2173,2179 ****\n \treturn TRUE;\n \n cancel_errReturn:\n! \tstrcat(conn->errorMessage.data, strerror(errno));\n \tstrcat(conn->errorMessage.data, \"\\n\");\n \tconn->errorMessage.len = strlen(conn->errorMessage.data);\n \tif (tmpsock >= 0)\n--- 2173,2179 ----\n \treturn TRUE;\n \n cancel_errReturn:\n! 
\tstrcat(conn->errorMessage.data, strerror(SOCK_ERRNO));\n \tstrcat(conn->errorMessage.data, \"\\n\");\n \tconn->errorMessage.len = strlen(conn->errorMessage.data);\n \tif (tmpsock >= 0)\nIndex: libpq/fe-exec.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.105\ndiff -C3 -r1.105 fe-exec.c\n*** libpq/fe-exec.c\t2001/08/03 22:11:39\t1.105\n--- libpq/fe-exec.c\t2001/08/15 13:58:32\n***************\n*** 2037,2046 ****\n \treturn buf;\n }\n \n- #ifdef WIN32\t\t\t\t\t/* need to get at normal errno here */\n- #undef errno\n- #endif\n- \n /*\n PQoidValue -\n \t\ta perhaps preferable form of the above which just returns\n--- 2037,2042 ----\n***************\n*** 2055,2061 ****\n--- 2051,2061 ----\n \tif (!res || !res->cmdStatus || strncmp(res->cmdStatus, \"INSERT \", 7) != 0)\n \t\treturn InvalidOid;\n \n+ #ifdef WIN32\n+ SetLastError(0);\n+ #else\n \terrno = 0;\n+ #endif\n \tresult = strtoul(res->cmdStatus + 7, &endptr, 10);\n \n \tif (!endptr || (*endptr != ' ' && *endptr != '\\0') || errno == ERANGE)\n***************\n*** 2064,2072 ****\n \t\treturn (Oid) result;\n }\n \n- #ifdef WIN32\t\t\t\t\t/* back to socket errno */\n- #define errno WSAGetLastError()\n- #endif\n \n /*\n PQcmdTuples -\n--- 2064,2069 ----\nIndex: libpq/fe-lobj.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-lobj.c,v\nretrieving revision 1.36\ndiff -C3 -r1.36 fe-lobj.c\n*** libpq/fe-lobj.c\t2001/08/03 22:11:39\t1.36\n--- libpq/fe-lobj.c\t2001/08/15 13:58:32\n***************\n*** 30,41 ****\n \n #include \"libpq/libpq-fs.h\"\t\t/* must come after sys/stat.h */\n \n- \n- #ifdef WIN32\t\t\t\t\t/* need to use normal errno in this file */\n- #undef errno\n- #endif\n- \n- \n #define LO_BUFSIZE\t\t 8192\n \n static int\tlo_initialize(PGconn *conn);\n--- 30,35 ----\nIndex: 
libpq/fe-misc.c\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.52\ndiff -C3 -r1.52 fe-misc.c\n*** libpq/fe-misc.c\t2001/07/20 17:45:06\t1.52\n--- libpq/fe-misc.c\t2001/08/15 13:58:32\n***************\n*** 347,359 ****\n \tif (select(conn->sock + 1, &input_mask, (fd_set *) NULL, (fd_set *) NULL,\n \t\t\t &timeout) < 0)\n \t{\n! \t\tif (errno == EINTR)\n \t\t\t/* Interrupted system call - we'll just try again */\n \t\t\tgoto retry;\n \n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n! \t\t\t\t\t\t strerror(errno));\n \t\treturn -1;\n \t}\n \n--- 347,359 ----\n \tif (select(conn->sock + 1, &input_mask, (fd_set *) NULL, (fd_set *) NULL,\n \t\t\t &timeout) < 0)\n \t{\n! \t\tif (SOCK_ERRNO == EINTR)\n \t\t\t/* Interrupted system call - we'll just try again */\n \t\t\tgoto retry;\n \n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\treturn -1;\n \t}\n \n***************\n*** 381,393 ****\n \tif (select(conn->sock + 1, (fd_set *) NULL, &input_mask, (fd_set *) NULL,\n \t\t\t &timeout) < 0)\n \t{\n! \t\tif (errno == EINTR)\n \t\t\t/* Interrupted system call - we'll just try again */\n \t\t\tgoto retry;\n \n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n! \t\t\t\t\t\t strerror(errno));\n \t\treturn -1;\n \t}\n \treturn FD_ISSET(conn->sock, &input_mask) ? 1 : 0;\n--- 381,393 ----\n \tif (select(conn->sock + 1, (fd_set *) NULL, &input_mask, (fd_set *) NULL,\n \t\t\t &timeout) < 0)\n \t{\n! \t\tif (SOCK_ERRNO == EINTR)\n \t\t\t/* Interrupted system call - we'll just try again */\n \t\t\tgoto retry;\n \n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n! 
\t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\treturn -1;\n \t}\n \treturn FD_ISSET(conn->sock, &input_mask) ? 1 : 0;\n***************\n*** 466,490 ****\n \t\t\t\t\t conn->inBufSize - conn->inEnd, 0);\n \tif (nread < 0)\n \t{\n! \t\tif (errno == EINTR)\n \t\t\tgoto tryAgain;\n \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n #ifdef EAGAIN\n! \t\tif (errno == EAGAIN)\n \t\t\treturn someread;\n #endif\n #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))\n! \t\tif (errno == EWOULDBLOCK)\n \t\t\treturn someread;\n #endif\n \t\t/* We might get ECONNRESET here if using TCP and backend died */\n #ifdef ECONNRESET\n! \t\tif (errno == ECONNRESET)\n \t\t\tgoto definitelyFailed;\n #endif\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not receive data from server: %s\\n\"),\n! \t\t\t\t\t\t strerror(errno));\n \t\treturn -1;\n \t}\n \tif (nread > 0)\n--- 466,490 ----\n \t\t\t\t\t conn->inBufSize - conn->inEnd, 0);\n \tif (nread < 0)\n \t{\n! \t\tif (SOCK_ERRNO == EINTR)\n \t\t\tgoto tryAgain;\n \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n #ifdef EAGAIN\n! \t\tif (SOCK_ERRNO == EAGAIN)\n \t\t\treturn someread;\n #endif\n #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))\n! \t\tif (SOCK_ERRNO == EWOULDBLOCK)\n \t\t\treturn someread;\n #endif\n \t\t/* We might get ECONNRESET here if using TCP and backend died */\n #ifdef ECONNRESET\n! \t\tif (SOCK_ERRNO == ECONNRESET)\n \t\t\tgoto definitelyFailed;\n #endif\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not receive data from server: %s\\n\"),\n! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\treturn -1;\n \t}\n \tif (nread > 0)\n***************\n*** 552,576 ****\n \t\t\t\t\t conn->inBufSize - conn->inEnd, 0);\n \tif (nread < 0)\n \t{\n! \t\tif (errno == EINTR)\n \t\t\tgoto tryAgain2;\n \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n #ifdef EAGAIN\n! 
\t\tif (errno == EAGAIN)\n \t\t\treturn 0;\n #endif\n #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))\n! \t\tif (errno == EWOULDBLOCK)\n \t\t\treturn 0;\n #endif\n \t\t/* We might get ECONNRESET here if using TCP and backend died */\n #ifdef ECONNRESET\n! \t\tif (errno == ECONNRESET)\n \t\t\tgoto definitelyFailed;\n #endif\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not receive data from server: %s\\n\"),\n! \t\t\t\t\t\t strerror(errno));\n \t\treturn -1;\n \t}\n \tif (nread > 0)\n--- 552,576 ----\n \t\t\t\t\t conn->inBufSize - conn->inEnd, 0);\n \tif (nread < 0)\n \t{\n! \t\tif (SOCK_ERRNO == EINTR)\n \t\t\tgoto tryAgain2;\n \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n #ifdef EAGAIN\n! \t\tif (SOCK_ERRNO == EAGAIN)\n \t\t\treturn 0;\n #endif\n #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))\n! \t\tif (SOCK_ERRNO == EWOULDBLOCK)\n \t\t\treturn 0;\n #endif\n \t\t/* We might get ECONNRESET here if using TCP and backend died */\n #ifdef ECONNRESET\n! \t\tif (SOCK_ERRNO == ECONNRESET)\n \t\t\tgoto definitelyFailed;\n #endif\n \t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t libpq_gettext(\"could not receive data from server: %s\\n\"),\n! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\treturn -1;\n \t}\n \tif (nread > 0)\n***************\n*** 652,658 ****\n \t\t\t * EPIPE or ECONNRESET, assume we've lost the backend\n \t\t\t * connection permanently.\n \t\t\t */\n! \t\t\tswitch (errno)\n \t\t\t{\n #ifdef EAGAIN\n \t\t\t\tcase EAGAIN:\n--- 652,658 ----\n \t\t\t * EPIPE or ECONNRESET, assume we've lost the backend\n \t\t\t * connection permanently.\n \t\t\t */\n! \t\t\tswitch (SOCK_ERRNO)\n \t\t\t{\n #ifdef EAGAIN\n \t\t\t\tcase EAGAIN:\n***************\n*** 688,694 ****\n \t\t\t\tdefault:\n \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not send data to server: %s\\n\"),\n! 
\t\t\t\t\t\t\t\t\t strerror(errno));\n \t\t\t\t\t/* We don't assume it's a fatal error... */\n \t\t\t\t\treturn EOF;\n \t\t\t}\n--- 688,694 ----\n \t\t\t\tdefault:\n \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not send data to server: %s\\n\"),\n! \t\t\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\t\t\t\t/* We don't assume it's a fatal error... */\n \t\t\t\t\treturn EOF;\n \t\t\t}\n***************\n*** 771,781 ****\n \t\tif (select(conn->sock + 1, &input_mask, &output_mask, &except_mask,\n \t\t\t\t (struct timeval *) NULL) < 0)\n \t\t{\n! \t\t\tif (errno == EINTR)\n \t\t\t\tgoto retry;\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n! \t\t\t\t\t\t\t strerror(errno));\n \t\t\treturn EOF;\n \t\t}\n \t}\n--- 771,781 ----\n \t\tif (select(conn->sock + 1, &input_mask, &output_mask, &except_mask,\n \t\t\t\t (struct timeval *) NULL) < 0)\n \t\t{\n! \t\t\tif (SOCK_ERRNO == EINTR)\n \t\t\t\tgoto retry;\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n! 
\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n \t\t\treturn EOF;\n \t\t}\n \t}\nIndex: libpq/libpq-fe.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.71\ndiff -C3 -r1.71 libpq-fe.h\n*** libpq/libpq-fe.h\t2001/03/22 04:01:27\t1.71\n--- libpq/libpq-fe.h\t2001/08/15 13:58:32\n***************\n*** 21,30 ****\n--- 21,39 ----\n #endif\n \n #include <stdio.h>\n+ \n+ #ifdef WIN32\n+ #define SOCK_ERRNO (WSAGetLastError ())\n+ #else\n+ #define SOCK_ERRNO errno\n+ #endif\n+ \n+ \n /* postgres_ext.h defines the backend's externally visible types,\n * such as Oid.\n */\n #include \"postgres_ext.h\"\n+ \n #ifdef USE_SSL\n #include <openssl/ssl.h>\n #endif\nIndex: libpq/win32.h\n===================================================================\nRCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/win32.h,v\nretrieving revision 1.15\ndiff -C3 -r1.15 win32.h\n*** libpq/win32.h\t2001/08/03 22:11:39\t1.15\n--- libpq/win32.h\t2001/08/15 13:58:32\n***************\n*** 23,38 ****\n */\n #define crypt(a,b) (a)\n \n- /*\n- * Most of libpq uses \"errno\" to access error conditions from socket calls,\n- * so on Windows we want to redirect those usages to WSAGetLastError().\n- * Rather than #ifdef'ing every single place that has \"errno\", hack it up\n- * with a macro instead. But there are a few places that do need to touch\n- * the regular errno variable. For them, we #undef and then redefine errno.\n- */\n- \n- #define errno WSAGetLastError()\n- \n #undef EAGAIN\t/* doesn't apply on sockets */\n #undef EINTR\n #define EINTR WSAEINTR\n--- 23,28 ----\n",
"msg_date": "Wed, 15 Aug 2001 12:58:14 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> \n> \n> Tom Lane wrote:\n> > \n> > When someone sends me a Windoze implementation of the proposed\n> > SOCK_STRERROR() macro, I'll see about fixing it. Till then\n> > I can't do much.\n> > \n> > regards, tom lane\n> > \n> \n> Could you please review the following patch for libpq.\n> I've implemented the SOCK_ERRNO macro only because\n> both strerror and FormatMessage functions know nothing\n> about sockets errors.\n> I've compiled the current sources with this patch applied\n> on windows and Solaris without problems and tested it through\n> tcl interface only. It seems to work correctly - I could insert\n> and select large strings (>10k).\n> \n> Regards\n> Mikhail Terekhov\n> \n> \n> Index: libpq/fe-connect.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.172\n> diff -C3 -r1.172 fe-connect.c\n> *** libpq/fe-connect.c\t2001/08/03 22:11:39\t1.172\n> --- libpq/fe-connect.c\t2001/08/15 13:58:32\n> ***************\n> *** 711,717 ****\n> \t{\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not set socket to non-blocking mode: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(errno));\n> \t\treturn 0;\n> \t}\n> \n> --- 711,717 ----\n> \t{\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not set socket to non-blocking mode: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\treturn 0;\n> \t}\n> \n> ***************\n> *** 735,741 ****\n> \t{\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not set socket to TCP no delay mode: %s\\n\"),\n> ! 
\t\t\t\t\t\t strerror(errno));\n> \t\treturn 0;\n> \t}\n> \n> --- 735,741 ----\n> \t{\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not set socket to TCP no delay mode: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\treturn 0;\n> \t}\n> \n> ***************\n> *** 890,896 ****\n> \t{\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not create socket: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(errno));\n> \t\tgoto connect_errReturn;\n> \t}\n> \n> --- 890,896 ----\n> \t{\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not create socket: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\tgoto connect_errReturn;\n> \t}\n> \n> ***************\n> *** 922,928 ****\n> \t */\n> \tif (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)\n> \t{\n> ! \t\tif (errno == EINPROGRESS || errno == EWOULDBLOCK || errno == 0)\n> \t\t{\n> \t\t\t/*\n> \t\t\t * This is fine - we're in non-blocking mode, and the\n> --- 922,928 ----\n> \t */\n> \tif (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)\n> \t{\n> ! \t\tif (SOCK_ERRNO == EINPROGRESS || SOCK_ERRNO == EWOULDBLOCK || errno == 0)\n> \t\t{\n> \t\t\t/*\n> \t\t\t * This is fine - we're in non-blocking mode, and the\n> ***************\n> *** 933,939 ****\n> \t\telse\n> \t\t{\n> \t\t\t/* Something's gone wrong */\n> ! \t\t\tconnectFailureMessage(conn, errno);\n> \t\t\tgoto connect_errReturn;\n> \t\t}\n> \t}\n> --- 933,939 ----\n> \t\telse\n> \t\t{\n> \t\t\t/* Something's gone wrong */\n> ! \t\t\tconnectFailureMessage(conn, SOCK_ERRNO);\n> \t\t\tgoto connect_errReturn;\n> \t\t}\n> \t}\n> ***************\n> *** 1212,1218 ****\n> \t\t\t\t{\n> \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not get socket error status: %s\\n\"),\n> ! 
\t\t\t\t\t\t\t\t\t strerror(errno));\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n> \t\t\t\telse if (optval != 0)\n> --- 1212,1218 ----\n> \t\t\t\t{\n> \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not get socket error status: %s\\n\"),\n> ! \t\t\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n> \t\t\t\telse if (optval != 0)\n> ***************\n> *** 1232,1238 ****\n> \t\t\t\t{\n> \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not get client address from socket: %s\\n\"),\n> ! \t\t\t\t\t\t\t\t\t strerror(errno));\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n> \n> --- 1232,1238 ----\n> \t\t\t\t{\n> \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not get client address from socket: %s\\n\"),\n> ! \t\t\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n> \n> ***************\n> *** 1271,1277 ****\n> \t\t\t\t{\n> \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not send startup packet: %s\\n\"),\n> ! \t\t\t\t\t\t\t\t\t strerror(errno));\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n> \n> --- 1271,1277 ----\n> \t\t\t\t{\n> \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not send startup packet: %s\\n\"),\n> ! \t\t\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n> \n> ***************\n> *** 2101,2107 ****\n> int\n> PQrequestCancel(PGconn *conn)\n> {\n> ! \tint\t\t\tsave_errno = errno;\n> \tint\t\t\ttmpsock = -1;\n> \tstruct\n> \t{\n> --- 2101,2107 ----\n> int\n> PQrequestCancel(PGconn *conn)\n> {\n> ! \tint\t\t\tsave_errno = SOCK_ERRNO;\n> \tint\t\t\ttmpsock = -1;\n> \tstruct\n> \t{\n> ***************\n> *** 2173,2179 ****\n> \treturn TRUE;\n> \n> cancel_errReturn:\n> ! 
\tstrcat(conn->errorMessage.data, strerror(errno));\n> \tstrcat(conn->errorMessage.data, \"\\n\");\n> \tconn->errorMessage.len = strlen(conn->errorMessage.data);\n> \tif (tmpsock >= 0)\n> --- 2173,2179 ----\n> \treturn TRUE;\n> \n> cancel_errReturn:\n> ! \tstrcat(conn->errorMessage.data, strerror(SOCK_ERRNO));\n> \tstrcat(conn->errorMessage.data, \"\\n\");\n> \tconn->errorMessage.len = strlen(conn->errorMessage.data);\n> \tif (tmpsock >= 0)\n> Index: libpq/fe-exec.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.105\n> diff -C3 -r1.105 fe-exec.c\n> *** libpq/fe-exec.c\t2001/08/03 22:11:39\t1.105\n> --- libpq/fe-exec.c\t2001/08/15 13:58:32\n> ***************\n> *** 2037,2046 ****\n> \treturn buf;\n> }\n> \n> - #ifdef WIN32\t\t\t\t\t/* need to get at normal errno here */\n> - #undef errno\n> - #endif\n> - \n> /*\n> PQoidValue -\n> \t\ta perhaps preferable form of the above which just returns\n> --- 2037,2042 ----\n> ***************\n> *** 2055,2061 ****\n> --- 2051,2061 ----\n> \tif (!res || !res->cmdStatus || strncmp(res->cmdStatus, \"INSERT \", 7) != 0)\n> \t\treturn InvalidOid;\n> \n> + #ifdef WIN32\n> + SetLastError(0);\n> + #else\n> \terrno = 0;\n> + #endif\n> \tresult = strtoul(res->cmdStatus + 7, &endptr, 10);\n> \n> \tif (!endptr || (*endptr != ' ' && *endptr != '\\0') || errno == ERANGE)\n> ***************\n> *** 2064,2072 ****\n> \t\treturn (Oid) result;\n> }\n> \n> - #ifdef WIN32\t\t\t\t\t/* back to socket errno */\n> - #define errno WSAGetLastError()\n> - #endif\n> \n> /*\n> PQcmdTuples -\n> --- 2064,2069 ----\n> Index: libpq/fe-lobj.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-lobj.c,v\n> retrieving revision 1.36\n> diff -C3 -r1.36 fe-lobj.c\n> *** libpq/fe-lobj.c\t2001/08/03 22:11:39\t1.36\n> --- libpq/fe-lobj.c\t2001/08/15 
13:58:32\n> ***************\n> *** 30,41 ****\n> \n> #include \"libpq/libpq-fs.h\"\t\t/* must come after sys/stat.h */\n> \n> - \n> - #ifdef WIN32\t\t\t\t\t/* need to use normal errno in this file */\n> - #undef errno\n> - #endif\n> - \n> - \n> #define LO_BUFSIZE\t\t 8192\n> \n> static int\tlo_initialize(PGconn *conn);\n> --- 30,35 ----\n> Index: libpq/fe-misc.c\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.52\n> diff -C3 -r1.52 fe-misc.c\n> *** libpq/fe-misc.c\t2001/07/20 17:45:06\t1.52\n> --- libpq/fe-misc.c\t2001/08/15 13:58:32\n> ***************\n> *** 347,359 ****\n> \tif (select(conn->sock + 1, &input_mask, (fd_set *) NULL, (fd_set *) NULL,\n> \t\t\t &timeout) < 0)\n> \t{\n> ! \t\tif (errno == EINTR)\n> \t\t\t/* Interrupted system call - we'll just try again */\n> \t\t\tgoto retry;\n> \n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(errno));\n> \t\treturn -1;\n> \t}\n> \n> --- 347,359 ----\n> \tif (select(conn->sock + 1, &input_mask, (fd_set *) NULL, (fd_set *) NULL,\n> \t\t\t &timeout) < 0)\n> \t{\n> ! \t\tif (SOCK_ERRNO == EINTR)\n> \t\t\t/* Interrupted system call - we'll just try again */\n> \t\t\tgoto retry;\n> \n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\treturn -1;\n> \t}\n> \n> ***************\n> *** 381,393 ****\n> \tif (select(conn->sock + 1, (fd_set *) NULL, &input_mask, (fd_set *) NULL,\n> \t\t\t &timeout) < 0)\n> \t{\n> ! \t\tif (errno == EINTR)\n> \t\t\t/* Interrupted system call - we'll just try again */\n> \t\t\tgoto retry;\n> \n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> ! 
\t\t\t\t\t\t strerror(errno));\n> \t\treturn -1;\n> \t}\n> \treturn FD_ISSET(conn->sock, &input_mask) ? 1 : 0;\n> --- 381,393 ----\n> \tif (select(conn->sock + 1, (fd_set *) NULL, &input_mask, (fd_set *) NULL,\n> \t\t\t &timeout) < 0)\n> \t{\n> ! \t\tif (SOCK_ERRNO == EINTR)\n> \t\t\t/* Interrupted system call - we'll just try again */\n> \t\t\tgoto retry;\n> \n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\treturn -1;\n> \t}\n> \treturn FD_ISSET(conn->sock, &input_mask) ? 1 : 0;\n> ***************\n> *** 466,490 ****\n> \t\t\t\t\t conn->inBufSize - conn->inEnd, 0);\n> \tif (nread < 0)\n> \t{\n> ! \t\tif (errno == EINTR)\n> \t\t\tgoto tryAgain;\n> \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n> #ifdef EAGAIN\n> ! \t\tif (errno == EAGAIN)\n> \t\t\treturn someread;\n> #endif\n> #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))\n> ! \t\tif (errno == EWOULDBLOCK)\n> \t\t\treturn someread;\n> #endif\n> \t\t/* We might get ECONNRESET here if using TCP and backend died */\n> #ifdef ECONNRESET\n> ! \t\tif (errno == ECONNRESET)\n> \t\t\tgoto definitelyFailed;\n> #endif\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not receive data from server: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(errno));\n> \t\treturn -1;\n> \t}\n> \tif (nread > 0)\n> --- 466,490 ----\n> \t\t\t\t\t conn->inBufSize - conn->inEnd, 0);\n> \tif (nread < 0)\n> \t{\n> ! \t\tif (SOCK_ERRNO == EINTR)\n> \t\t\tgoto tryAgain;\n> \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n> #ifdef EAGAIN\n> ! \t\tif (SOCK_ERRNO == EAGAIN)\n> \t\t\treturn someread;\n> #endif\n> #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))\n> ! \t\tif (SOCK_ERRNO == EWOULDBLOCK)\n> \t\t\treturn someread;\n> #endif\n> \t\t/* We might get ECONNRESET here if using TCP and backend died */\n> #ifdef ECONNRESET\n> ! 
\t\tif (SOCK_ERRNO == ECONNRESET)\n> \t\t\tgoto definitelyFailed;\n> #endif\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not receive data from server: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\treturn -1;\n> \t}\n> \tif (nread > 0)\n> ***************\n> *** 552,576 ****\n> \t\t\t\t\t conn->inBufSize - conn->inEnd, 0);\n> \tif (nread < 0)\n> \t{\n> ! \t\tif (errno == EINTR)\n> \t\t\tgoto tryAgain2;\n> \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n> #ifdef EAGAIN\n> ! \t\tif (errno == EAGAIN)\n> \t\t\treturn 0;\n> #endif\n> #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))\n> ! \t\tif (errno == EWOULDBLOCK)\n> \t\t\treturn 0;\n> #endif\n> \t\t/* We might get ECONNRESET here if using TCP and backend died */\n> #ifdef ECONNRESET\n> ! \t\tif (errno == ECONNRESET)\n> \t\t\tgoto definitelyFailed;\n> #endif\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not receive data from server: %s\\n\"),\n> ! \t\t\t\t\t\t strerror(errno));\n> \t\treturn -1;\n> \t}\n> \tif (nread > 0)\n> --- 552,576 ----\n> \t\t\t\t\t conn->inBufSize - conn->inEnd, 0);\n> \tif (nread < 0)\n> \t{\n> ! \t\tif (SOCK_ERRNO == EINTR)\n> \t\t\tgoto tryAgain2;\n> \t\t/* Some systems return EAGAIN/EWOULDBLOCK for no data */\n> #ifdef EAGAIN\n> ! \t\tif (SOCK_ERRNO == EAGAIN)\n> \t\t\treturn 0;\n> #endif\n> #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))\n> ! \t\tif (SOCK_ERRNO == EWOULDBLOCK)\n> \t\t\treturn 0;\n> #endif\n> \t\t/* We might get ECONNRESET here if using TCP and backend died */\n> #ifdef ECONNRESET\n> ! \t\tif (SOCK_ERRNO == ECONNRESET)\n> \t\t\tgoto definitelyFailed;\n> #endif\n> \t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t libpq_gettext(\"could not receive data from server: %s\\n\"),\n> ! 
\t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\treturn -1;\n> \t}\n> \tif (nread > 0)\n> ***************\n> *** 652,658 ****\n> \t\t\t * EPIPE or ECONNRESET, assume we've lost the backend\n> \t\t\t * connection permanently.\n> \t\t\t */\n> ! \t\t\tswitch (errno)\n> \t\t\t{\n> #ifdef EAGAIN\n> \t\t\t\tcase EAGAIN:\n> --- 652,658 ----\n> \t\t\t * EPIPE or ECONNRESET, assume we've lost the backend\n> \t\t\t * connection permanently.\n> \t\t\t */\n> ! \t\t\tswitch (SOCK_ERRNO)\n> \t\t\t{\n> #ifdef EAGAIN\n> \t\t\t\tcase EAGAIN:\n> ***************\n> *** 688,694 ****\n> \t\t\t\tdefault:\n> \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not send data to server: %s\\n\"),\n> ! \t\t\t\t\t\t\t\t\t strerror(errno));\n> \t\t\t\t\t/* We don't assume it's a fatal error... */\n> \t\t\t\t\treturn EOF;\n> \t\t\t}\n> --- 688,694 ----\n> \t\t\t\tdefault:\n> \t\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t\t\t libpq_gettext(\"could not send data to server: %s\\n\"),\n> ! \t\t\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\t\t\t\t/* We don't assume it's a fatal error... */\n> \t\t\t\t\treturn EOF;\n> \t\t\t}\n> ***************\n> *** 771,781 ****\n> \t\tif (select(conn->sock + 1, &input_mask, &output_mask, &except_mask,\n> \t\t\t\t (struct timeval *) NULL) < 0)\n> \t\t{\n> ! \t\t\tif (errno == EINTR)\n> \t\t\t\tgoto retry;\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> ! \t\t\t\t\t\t\t strerror(errno));\n> \t\t\treturn EOF;\n> \t\t}\n> \t}\n> --- 771,781 ----\n> \t\tif (select(conn->sock + 1, &input_mask, &output_mask, &except_mask,\n> \t\t\t\t (struct timeval *) NULL) < 0)\n> \t\t{\n> ! \t\t\tif (SOCK_ERRNO == EINTR)\n> \t\t\t\tgoto retry;\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\t\t\t libpq_gettext(\"select() failed: %s\\n\"),\n> ! 
\t\t\t\t\t\t\t strerror(SOCK_ERRNO));\n> \t\t\treturn EOF;\n> \t\t}\n> \t}\n> Index: libpq/libpq-fe.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.71\n> diff -C3 -r1.71 libpq-fe.h\n> *** libpq/libpq-fe.h\t2001/03/22 04:01:27\t1.71\n> --- libpq/libpq-fe.h\t2001/08/15 13:58:32\n> ***************\n> *** 21,30 ****\n> --- 21,39 ----\n> #endif\n> \n> #include <stdio.h>\n> + \n> + #ifdef WIN32\n> + #define SOCK_ERRNO (WSAGetLastError ())\n> + #else\n> + #define SOCK_ERRNO errno\n> + #endif\n> + \n> + \n> /* postgres_ext.h defines the backend's externally visible types,\n> * such as Oid.\n> */\n> #include \"postgres_ext.h\"\n> + \n> #ifdef USE_SSL\n> #include <openssl/ssl.h>\n> #endif\n> Index: libpq/win32.h\n> ===================================================================\n> RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/libpq/win32.h,v\n> retrieving revision 1.15\n> diff -C3 -r1.15 win32.h\n> *** libpq/win32.h\t2001/08/03 22:11:39\t1.15\n> --- libpq/win32.h\t2001/08/15 13:58:32\n> ***************\n> *** 23,38 ****\n> */\n> #define crypt(a,b) (a)\n> \n> - /*\n> - * Most of libpq uses \"errno\" to access error conditions from socket calls,\n> - * so on Windows we want to redirect those usages to WSAGetLastError().\n> - * Rather than #ifdef'ing every single place that has \"errno\", hack it up\n> - * with a macro instead. But there are a few places that do need to touch\n> - * the regular errno variable. 
For them, we #undef and then redefine errno.\n> - */\n> - \n> - #define errno WSAGetLastError()\n> - \n> #undef EAGAIN\t/* doesn't apply on sockets */\n> #undef EINTR\n> #define EINTR WSAEINTR\n> --- 23,28 ----\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 10:47:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I will try to apply it within the next 48 hours.\n\nThis isn't the right patch to apply, since Mikhail didn't fix the\nstrerror problem. I have some code from Magnus Naeslund that purports\nto handle the strerror issue, and will work it up into a patch real soon\nnow.\n\n>> I've implemented the SOCK_ERRNO macro only because\n>> both strerror and FormatMessage functions know nothing\n>> about sockets errors.\n\nFWIW, Magnus says this works:\n\n#define SOCK_STRERROR my_sock_strerror\n\nconst char* my_sock_strerror(unsigned long eno){\n static char buf[512]; // i know, not threadsafe\n if (!FormatMessage( \n FORMAT_MESSAGE_FROM_SYSTEM | \n FORMAT_MESSAGE_IGNORE_INSERTS,\n 0,eno,\n MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),\n buf,sizeof(buf)-1,\n 0\n )){\n sprintf(buf,\"Unknown socket error(%u)\",eno);\n }\n buf[sizeof(buf)-1]='\\0';\n return buf;\n}\n\n\nAnyone have any objections to it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Aug 2001 11:38:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Re: WIN32 errno patch "
},
{
"msg_contents": "\nOK. Patch removed.\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will try to apply it within the next 48 hours.\n> \n> This isn't the right patch to apply, since Mikhail didn't fix the\n> strerror problem. I have some code from Magnus Naeslund that purports\n> to handle the strerror issue, and will work it up into a patch real soon\n> now.\n> \n> >> I've implemented the SOCK_ERRNO macro only because\n> >> both strerror and FormatMessage functions know nothing\n> >> about sockets errors.\n> \n> FWIW, Magnus says this works:\n> \n> #define SOCK_STRERROR my_sock_strerror\n> \n> const char* my_sock_strerror(unsigned long eno){\n> static char buf[512]; // i know, not threadsafe\n> if (!FormatMessage( \n> FORMAT_MESSAGE_FROM_SYSTEM | \n> FORMAT_MESSAGE_IGNORE_INSERTS,\n> 0,eno,\n> MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),\n> buf,sizeof(buf)-1,\n> 0\n> )){\n> sprintf(buf,\"Unknown socket error(%u)\",eno);\n> }\n> buf[sizeof(buf)-1]='\\0';\n> return buf;\n> }\n> \n> \n> Anyone have any objections to it?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 17 Aug 2001 11:38:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will try to apply it within the next 48 hours.\n> \n> This isn't the right patch to apply, since Mikhail didn't fix the\n> strerror problem. I have some code from Magnus Naeslund that purports\n> to handle the strerror issue, and will work it up into a patch real soon\n> now.\n> \n> >> I've implemented the SOCK_ERRNO macro only because\n> >> both strerror and FormatMessage functions know nothing\n> >> about sockets errors.\n> \n> FWIW, Magnus says this works:\n> \n> #define SOCK_STRERROR my_sock_strerror\n> \n> const char* my_sock_strerror(unsigned long eno){\n> static char buf[512]; // i know, not threadsafe\n> if (!FormatMessage(\n> FORMAT_MESSAGE_FROM_SYSTEM |\n> FORMAT_MESSAGE_IGNORE_INSERTS,\n> 0,eno,\n> MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),\n> buf,sizeof(buf)-1,\n> 0\n> )){\n> sprintf(buf,\"Unknown socket error(%u)\",eno);\n> }\n> buf[sizeof(buf)-1]='\\0';\n> return buf;\n> }\n> \n> Anyone have any objections to it?\n> \n\nOn my system (NT4 sp.6, VC6.0) the FormatMessage function always\nreturns 0 for errno in the [10000 - 10100] range (winsock errors).\nThat's why i've wrote that this function knows nothing about\nsockets errors. Using this function looks very impressive but the\nnet result is null. If Magnus could get some meaningfull messages\nfor winsock errors from FormatMessage I'd be glad to know what is\nmissing from my setup.\n\nRegards,\nMikhail Terekhov\n",
"msg_date": "Fri, 17 Aug 2001 14:51:59 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I will try to apply it within the next 48 hours.\n> \n> This isn't the right patch to apply, since Mikhail didn't fix the\n> strerror problem. I have some code from Magnus Naeslund that purports\n> to handle the strerror issue, and will work it up into a patch real soon\n> now.\n> \n\nAs i've said in my other message this code doesn't solve the strerror\nproblem. I'd suggest to separate these two problems because\n\t1. Without errno patch the code doesn't even compile,\n\t but with errno patch the code compiles and works.\n\t2. Without strerror patch we will have the same result as before,\n\t i.e. error code but no error message. IMHO, it is impossible\n\t to get winsock error messages without some system or version\n\t dependent hack.\n\nRegards,\nMikhail Terekhov\n",
"msg_date": "Fri, 17 Aug 2001 15:21:48 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "From: \"Mikhail Terekhov\" <terekhov@emc.com>\n[snip]\n> On my system (NT4 sp.6, VC6.0) the FormatMessage function always\n> returns 0 for errno in the [10000 - 10100] range (winsock errors).\n> That's why i've wrote that this function knows nothing about\n> sockets errors. Using this function looks very impressive but the\n> net result is null. If Magnus could get some meaningfull messages\n> for winsock errors from FormatMessage I'd be glad to know what is\n> missing from my setup.\n>\n\nYou can load the \"netmsg.dll\" and get the messages from that hmodule.\nThat should probably be done and then fallback to plain FormatMessage\nwithout hmodule parameter.\nDo you have an netsmg.dll on nt4 (i think so)?\nOn win2k it's not needed, anyhoo, thats what i'm running.\n\nMagnus\n\n> Regards,\n> Mikhail Terekhov\n>\n\n\n",
"msg_date": "Sat, 18 Aug 2001 15:53:40 +0200",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "Ok, where's a \"system dependent hack\" :)\nIt seems that win9x doesn't have the \"netmsg.dll\" so it defaults to \"normal\"\nFormatMessage.\nI wonder if one could load wsock32.dll or winsock.dll on those systems\ninstead of netmsg.dll.\n\nMikhail, could you please test this code on your nt4 system?\nCould someone else test this code on a win98/95 system?\n\nIt works on win2k over here.\n\n(code attached)\n\nMagnus",
"msg_date": "Sat, 18 Aug 2001 16:15:31 +0200",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "----- Original Message ----- \nFrom: Magnus Naeslund(f) <mag@fbab.net>\nSent: Saturday, August 18, 2001 10:15 AM\n\n\n> Ok, where's a \"system dependent hack\" :)\n> It seems that win9x doesn't have the \"netmsg.dll\" so it defaults to \"normal\"\n> FormatMessage.\n> I wonder if one could load wsock32.dll or winsock.dll on those systems\n> instead of netmsg.dll.\n\nWindows 98SE, M$ Visual C++ 6.0\nIn the project settings in the Link library list\nyou can put 'wsock32.lib'.\n \n> Could someone else test this code on a win98/95 system?\n\nI compiled and run it. I didn't produce any error messages for IP 10.10.10.3\nyou put there. I used localhost instead, and here is the\nresult:\n\nsocket error(10061):Unknown socket error(10061)\n\nI don't know whether this is what you expected or not.\n\nS.\n\n\n\n",
"msg_date": "Sat, 18 Aug 2001 12:33:19 -0400",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch"
},
{
"msg_contents": "From: \"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>\n\n[snip]\n\n> Windows 98SE, M$ Visual C++ 6.0\n> In the project settings in the Link library list\n> you can put 'wsock32.lib'.\n>\n> > Could someone else test this code on a win98/95 system?\n>\n> I compiled and run it. I didn't produce any error messages for IP\n10.10.10.3\n> you put there. I used localhost instead, and here is the\n> result:\n>\n> socket error(10061):Unknown socket error(10061)\n>\n> I don't know whether this is what you expected or not.\n>\n\nWell this was about what i thought...\nIt's not working too good :(\nI'll try to find out a good way of doing this on win9x.\n\n> S.\n\nMagnus\n\n\n\n",
"msg_date": "Sat, 18 Aug 2001 19:07:42 +0200",
"msg_from": "\"Magnus Naeslund\\(f\\)\" <mag@fbab.net>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 errno patch"
},
{
"msg_contents": "\"Magnus Naeslund(f)\" wrote:\n> \n> This is a multi-part message in MIME format.\n> \n> ------=_NextPart_000_0106_01C12801.00DAC460\n> Content-Type: text/plain; charset=\"iso-8859-1\"\n> Content-Transfer-Encoding: 7bit\n> \n> Ok, where's a \"system dependent hack\" :)\n> It seems that win9x doesn't have the \"netmsg.dll\" so it defaults to \"normal\"\n> FormatMessage.\n> I wonder if one could load wsock32.dll or winsock.dll on those systems\n> instead of netmsg.dll.\n> \n> Mikhail, could you please test this code on your nt4 system?\n> Could someone else test this code on a win98/95 system?\n> \n> It works on win2k over here.\n> \n> (code attached)\n> \n> Magnus\n\nIt works on win2k here too but not on win98/95 or winNT.\nAnyway, attached is the patch which uses Magnus's my_sock_strerror\nfunction (renamed to winsock_strerror). The only difference is that \nI put the code to load and unload netmsg.dll in the libpqdll.c\n(is this OK Magnus?).\n\nRegards\nMikhail Terekhov",
"msg_date": "Mon, 20 Aug 2001 12:31:49 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "Any chance to review/apply this patch?\n\nRegards,\nMikhail Terekhov\n\n\nMikhail Terekhov wrote:\n> \n> \"Magnus Naeslund(f)\" wrote:\n> >\n> > This is a multi-part message in MIME format.\n> >\n> > ------=_NextPart_000_0106_01C12801.00DAC460\n> > Content-Type: text/plain; charset=\"iso-8859-1\"\n> > Content-Transfer-Encoding: 7bit\n> >\n> > Ok, where's a \"system dependent hack\" :)\n> > It seems that win9x doesn't have the \"netmsg.dll\" so it defaults to \"normal\"\n> > FormatMessage.\n> > I wonder if one could load wsock32.dll or winsock.dll on those systems\n> > instead of netmsg.dll.\n> >\n> > Mikhail, could you please test this code on your nt4 system?\n> > Could someone else test this code on a win98/95 system?\n> >\n> > It works on win2k over here.\n> >\n> > (code attached)\n> >\n> > Magnus\n> \n> It works on win2k here too but not on win98/95 or winNT.\n> Anyway, attached is the patch which uses Magnus's my_sock_strerror\n> function (renamed to winsock_strerror). The only difference is that\n> I put the code to load and unload netmsg.dll in the libpqdll.c\n> (is this OK Magnus?).\n> \n> Regards\n> Mikhail Terekhov\n> \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Name: libpq.patch.gz\n> libpq.patch.gz Type: Gsview32 File (application/x-gzip)\n> Encoding: base64\n",
"msg_date": "Tue, 21 Aug 2001 16:31:45 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "\nCan I assume the current CVS has all the Win32 issues resolved?\n\n> Tom Lane wrote:\n> > \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I will try to apply it within the next 48 hours.\n> > \n> > This isn't the right patch to apply, since Mikhail didn't fix the\n> > strerror problem. I have some code from Magnus Naeslund that purports\n> > to handle the strerror issue, and will work it up into a patch real soon\n> > now.\n> > \n> \n> As i've said in my other message this code doesn't solve the strerror\n> problem. I'd suggest to separate these two problems because\n> \t1. Without errno patch the code doesn't even compile,\n> \t but with errno patch the code compiles and works.\n> \t2. Without strerror patch we will have the same result as before,\n> \t i.e. error code but no error message. IMHO, it is impossible\n> \t to get winsock error messages without some system or version\n> \t dependent hack.\n> \n> Regards,\n> Mikhail Terekhov\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 23 Aug 2001 13:00:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: [HACKERS] Re: WIN32 errno patch"
},
{
"msg_contents": "See [COMMITTERS] pgsql/ /configure /configure.in oc/FAQ oc/FAQ_ ...] tread.\n\nRegards,\nMikhail Terekhov\n\nBruce Momjian wrote:\n> \n> Can I assume the current CVS has all the Win32 issues resolved?\n>\n",
"msg_date": "Fri, 24 Aug 2001 14:25:34 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: WIN32 errno patch"
}
] |
[
{
"msg_contents": "Muggins here volunteered to get RAISE to accept any expression that\nevaluates to a string rather than just a string constant. Think I can see\nwhy it wasn't that way already.\n\nHad a look, and this is easy enough:\n\nRAISE NOTICE ''Hello '' || $1 || '' World'';\n\nAlso, I can do:\n\nRAISE NOTICE ''Hello '' || ''% World'',$1;\n\nBut I don't think I can do both. Haven't looked at lex+9yacc since\nuniversity days, but I think we've only got one token of look-ahead. This\nmeans we can't read the expression and *then* decide which option we are in.\n\n(For those who haven't looked at that bit of the code recently\nplpgsql_read_expression() slurps up to and including a closing token -\neither a ';' or ',' above. You've then lost that finishing token)\n\nThe closest I can get is something like:\n\nRAISE NOTICE ''Hello '' || ''% World'',$1; -- Old style\nRAISE IN NOTICE ''Hello '' || $1 || '' World''; -- New \"INline\" style\n\nObviously we can use a new token rather than \"IN\", but it'll do while I'm\ntesting.\n\nAFAICT there are 4 options:\n\n1. Break the old % format - it's redundant now anyway\n2. Add a new token as above\n3. Add another parameter to plpgsql_read_expression() so we can unget the\nclosing token.\n4. Alter plpgsql_read_expression() to accept more than one closing token,\nand it stops when it reads any of them.\n\nI'm averse to #1 (unless maybe it was accompanied with a small sed/perl\nscript to automatically fix any code in a pg_dump file)\n\nI don't like gratuitously adding syntax noise with #2.\n\nI don't know whether #3 is even possible. 
Does anyone more familiar with\nyacc/bison know?\n\nThe solution for #4 is going to add complexity to read_expression() -\npresumably not a speed problem (we're only parsing once) but it'd be nice to\nkeep things as simple as possible.\n\nThe only other things I need to look at are checking I've freed up any store\nthat's been allocated and casting the expression to text where PG can't\nfigure that out for itself. These are obviously just a matter of getting a\nlittle more familiar with the code.\n\nAny advice/suggestions gratefully received people.\n\n- Richard Huxton\n\n",
"msg_date": "Sat, 21 Jul 2001 15:20:26 +0100",
"msg_from": "\"Richard Huxton\" <dev@archonet.com>",
"msg_from_op": true,
"msg_subject": "plpgsql: RAISE <level> <expr> <params>"
},
{
"msg_contents": "\"Richard Huxton\" <dev@archonet.com> writes:\n> (For those who haven't looked at that bit of the code recently\n> plpgsql_read_expression() slurps up to and including a closing token -\n> either a ';' or ',' above. You've then lost that finishing token)\n\nThe real problem is that this *isn't* yacc ... if plpgsql had an actual\ngrammar symbol for \"expression\" then the change would be trivial.\n\nplpgsql_read_expression is not usable as-is for this purpose, because\n\"read until token X\" is far too simplistic (consider a function call\nwith two parameters --- the comma between the parameters would be\ntaken as ending the whole expression).\n\nIt might work to add some understanding of nested-parenthesis counting\nto the routine; not sure if there are any other shortcomings besides\nthat one. But in any case, you need to do significant surgery on that\nroutine, so adding another return parameter shouldn't bother you.\n\n> 4. Alter plpgsql_read_expression() to accept more than one closing token,\n> and it stops when it reads any of them.\n\nAFAICT it already stops on ';' (hardwired into the routine). So if you\nmake it pass back what it stopped on, you're set: the grammar entry\nbecomes just\n\nstmt_raise : K_RAISE lno raise_level\n\nand then the code takes care of swallowing expressions until ';',\nsimilarly to the way SQL commands are handled. (plpgsql's parsing\nmethodology is sinfully ugly, isn't it? But I don't suppose you\nwant to try to replace it...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Jul 2001 11:53:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql: RAISE <level> <expr> <params> "
},
{
"msg_contents": "Tom Lane wrote:\n> and then the code takes care of swallowing expressions until ';',\n> similarly to the way SQL commands are handled. (plpgsql's parsing\n> methodology is sinfully ugly, isn't it? But I don't suppose you\n> want to try to replace it...)\n\n It is, indeed, and I'm sorry for that. But it was the only\n way I saw to make new features in the PostgreSQL main query\n engine automatically available in PL/pgSQL without a single\n change.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 23 Jul 2001 10:37:29 -0400 (EDT)",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql: RAISE <level> <expr> <params>"
},
{
"msg_contents": "From: \"Jan Wieck\" <JanWieck@yahoo.com>\n\n> Tom Lane wrote:\n> > and then the code takes care of swallowing expressions until ';',\n> > similarly to the way SQL commands are handled. (plpgsql's parsing\n> > methodology is sinfully ugly, isn't it? But I don't suppose you\n> > want to try to replace it...)\n>\n> It is, indeed, and I'm sorry for that. But it was the only\n> way I saw to make new features in the PostgreSQL main query\n> engine automatically available in PL/pgSQL without a single\n> change.\n\nActually, I like the idea of using the SQL system to evaluate expressions -\nwhy reinvent the wheel?\n\nThe only thing needed for this is a grammar for expressions so we can mix\nand match with RAISE a bit better. First draft doesn't look too bad - I can\nnot deal with function-calls and brackets and still have something useful.\n\n- Richard Huxton\n\n",
"msg_date": "Mon, 23 Jul 2001 17:14:38 +0100",
"msg_from": "\"Richard Huxton\" <dev@archonet.com>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql: RAISE <level> <expr> <params>"
},
{
"msg_contents": "\"Richard Huxton\" <dev@archonet.com> writes:\n> Actually, I like the idea of using the SQL system to evaluate expressions -\n> why reinvent the wheel?\n\nSure, that part is great --- it's just the parsing (or lack of it,\nto be more accurate) that's an ugly hack.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Jul 2001 20:17:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql: RAISE <level> <expr> <params> "
},
{
"msg_contents": "\nCan I ask where we are on this?\n\n\n> \"Richard Huxton\" <dev@archonet.com> writes:\n> > Actually, I like the idea of using the SQL system to evaluate expressions -\n> > why reinvent the wheel?\n> \n> Sure, that part is great --- it's just the parsing (or lack of it,\n> to be more accurate) that's an ugly hack.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 2 Aug 2001 11:46:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql: RAISE <level> <expr> <params>"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Can I ask where we are on this?\n\nSure - posted a follow up to the list a while ago. Subject was\n\n\"RAISE <level> <expr> <params>: state of play and request for advice\"\n\nCurrently, this works:\n\nCREATE FUNCTION foo_raise_loop(text) RETURNS text AS '\nDECLARE\n a ALIAS FOR $1;\n i integer;\n myrec RECORD;\nBEGIN\n i:=0;\n FOR myrec IN SELECT * FROM colours LOOP\n i:=i+1;\n RAISE NOTICE a || '' : '' || '' colour % is '' || myrec.c_name ||\n''.'', i, myrec.c_id;\n END LOOP;\n RETURN ''done''::text;\nEND;' LANGUAGE 'plpgsql';\n\nMore details in the msg of a few days ago. Busy at the moment, probably\nfor the next week at least. If you'd like the patch against current CVS\nlet me know and I'll try and do it this weekend.\n\n- Richard Huxton\n",
"msg_date": "Thu, 02 Aug 2001 23:01:27 +0100",
"msg_from": "Richard Huxton <dev@archonet.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql: RAISE <level> <expr> <params>"
}
] |
[
{
"msg_contents": "Take these queries:\n\nselect * from foo as F, (select * from bar where name = 'bla') as B where\nF.name = B.name\nunion all\nselect * from foo as F, (select * from bar where name = 'bla') as B where\nF.type = B.type\n\nOR \n\ncreate temp table B as select * from bar where name = 'bla';\nselect * from foo as F, B where F.name = B.name\nunion all\nselect * from foo as F, B where F.type = B.type;\ndrop table B;\n\nMy question is, which would be more efficient, or is it a wash?\n(A note, I will be calling this from an external programming language, PHP, so\nthe first query would be one Postgres pq_exec call, while the second query\nwould be three separate calls.)\n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Sat, 21 Jul 2001 14:48:58 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "sub queries and caching."
},
{
"msg_contents": "Andrew McMillan wrote:\n> \n> mlw wrote:\n> >\n> > Take these queries:\n> >\n> > select * from foo as F, (select * from bar where name = 'bla') as B where\n> > F.name = B.name\n> > union all\n> > select * from foo as F, (select * from bar where name = 'bla') as B where\n> > F.type = B.type\n> >\n> > OR\n> >\n> > create temp table B as select * from bar where name = 'bla';\n> > select * from foo as F, B where F.name = B.name\n> > union all\n> > select * from foo as F, B where F.type = B.type;\n> > drop table B;\n> >\n> > My question is, which would be more efficient, or is it a wash?\n> > (A note, I will be calling this from an external programming laguage, PHP, so\n> > the first query would be one Postgres pq_exec call, while the second query\n> > would be three separate calls.)\n> \n> The second could also be done as a single PHP call, given that you should be able to\n> \"create temp table ...; select ...\" in a single pg_Exec call.\n> \n> You don't need a 'DROP TABLE B' because it's a temp table and will be dropped\n> anyway, won't it, unless you're using pg_pconnect.\n\nFor a high volume website, where processing is done and latency are important\nconsiderations.\n\nSuppose, you have a few apache/php systems load balanced on top of a single\ndatabase system. (This is a very standard configuration.) The apache/php\nmachine cycles are cheaper than the database machine cycles because they can\nusually be scaled easily. A database system is very difficult to scale. While\nadding an apache/php box to this configuration is usually trivial, setting up a\ndatabase system across two or more machines is a hugely more complex problem.\n\nThen there is latency, the longer a web pages takes to process, it holds its\nresources longer, this means you will probably have more web server processes,\nand if each process holds a database connection, you will probably have more\nopen database connections. 
So, latency costs you the ram on the local web\nserver and the resources on the back-end application service machines.\n\nSo the real trick to getting good scalability is to reduce latency AND move as\nmuch processing to the boxes which can be scaled.\n\nPersistent connections to a database are vital in this scenario. Creation of a\nnew connection to a database impacts backend processing time and page latency.\nSo one has to drop the temporary table.\n\nSo, which is more expensive? Issuing the subquery multiple times within the\nlarger query, or creating a temporary table, performing the simpler query, and\nthen dropping the temp table.\n\nMy guess would be that creating the temp table and dropping it again do use\nbackend processing cycles. I wonder if PostgreSQL would be smart enough to\nperform that query only once? \n\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Sun, 22 Jul 2001 14:30:15 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: sub queries and caching."
},
{
"msg_contents": "Kevin wrote:\n> \n> While I'm sure it's just because of the simplicity of this example, it\n> seems that the query could be reorganized to avoid this double query:\n> \n> select * from foo F, bar B where B.name = 'bla' and (B.name = F.name or\n> B.type = F.type);\n\nThat was the original format of the query, and while not obvious, forced a full\ntable scan on foo and bar. The query presented is the result of many iterations\nof \"explain\" and execution timings. Even with sequential scans disabled,\nPostgres still does them. \n\n> \n> (granted that this gives a slightly different results, rows matching\n> both conditions don't appear twice, which I would imagine to be an\n> unwanted side effect of the original query, anyway).\n> \n> I would guess it's easier for the query writer to figure this out than\n> the db itself, however, since the two queries look very different (and I\n> suppose the database wouldn't come up with this query since the result\n> /is/ different). Here's a new question: Would it be useful for the\n> database to try and simplify queries before executing them? Or would\n> this just take more time than it's worth for most cases?\n> \n> According to EXPLAIN, it /plans/ on doing the query twice, but I don't\n> know enough about the internal workings to know if it caches results (so\n> I can't answer the original question, sorry).\n\nThat's the problem I see as well. You would think that if Postgres sees the\nsame subquery, it should only do it once. Oh well, neither does it seem Oracle.\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Tue, 24 Jul 2001 05:24:10 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: sub queries and caching."
}
] |
[
{
"msg_contents": "I like the idea of adding an INSERT ... RETURNING capability,\nper Philip Warner's suggestion of about a year ago\n(http://fts.postgresql.org/db/mw/msg.html?mid=68704). We did not\nfigure out what to do if the INSERT operation is rewritten by a rule,\nbut I have an idea about that. ISTM that to support INSERT RETURNING\non a view, we should require an ON INSERT DO INSTEAD rule to end with a\nSELECT, and it is the results of that SELECT that are used to compute\nthe RETURNING values. This gives the author of a view the ability and\nresponsibility to determine what is seen when an INSERT RETURNING is\ndone into the view.\n\nIt further seems a good idea to mark a SELECT intended for this purpose\nin a special way, to flag that it's only needed to support RETURNING and\nisn't a fundamental part of the rule. This would allow us to suppress\nexecution of the SELECT when the original query is a plain INSERT and\nnot INSERT RETURNING. I suggest that we do this by using \"RETURNS\"\ninstead of \"SELECT\" --- the rest of the query is just like a select,\nonly the initial keyword is different. So you'd write something like\n\n\tCREATE RULE foorule AS ON INSERT TO fooview DO INSTEAD\n\t(\n\t\tinsert into underlying tables;\n\t\tRETURNS a,b,c FROM ...\n\t);\n\nIf you don't provide the RETURNS query, the rule will still work for\nsimple inserts, but an error would be raised for INSERT RETURNING.\nWhen you do provide RETURNS, it's only executed if the rule is used\nto rewrite INSERT RETURNING. The output columns of the RETURNS query\nhave to match the column datatypes of the table (view) the rule is\nattached to.\n\nWhile this all seems good at first glance, I am wondering just how\nuseful it really would be in practice. The problem is: how do you know\nwhich rows to return in the RETURNS query? If you don't qualify the\nselection then you'll get all the rows in the view, which is surely not\nwhat you want. 
You could restrict the select with clauses like \"WHERE\ncol1 = NEW.col1\", but this is not necessarily going to be efficient, and\nwhat's worse it only works for columns that are supplied by the initial\ninsert into the view. For example, suppose an underlying table has a\nSERIAL primary key that's generated on the fly when you insert to it.\nThe RETURNS query has no way to know what that serial number is, and so\nno way to select the right row. It seems like the rule author is up\nagainst the very same problem that we wanted INSERT RETURNING to solve.\n\nSo I'm still baffled, unless someone sees a way around that problem.\n\nCould we get away with restricting INSERT RETURNING to work only on\ninserts directly to tables (no ON INSERT DO INSTEAD allowed)? Or is\nthat too much of a kluge?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Jul 2001 18:03:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Incomplete idea about views and INSERT...RETURNING"
},
{
"msg_contents": "At 18:03 21/07/01 -0400, Tom Lane wrote:\n>\n>Could we get away with restricting INSERT RETURNING to work only on\n>inserts directly to tables (no ON INSERT DO INSTEAD allowed)? Or is\n>that too much of a kluge?\n>\n\nI don't see it as a kludge, just a limitation on the first pass. If people\nneed the feature then they can recode their DO INSTEAD as a trigger (I\nthink that works...). You probably need to return useful information to the\napplication to let it know what has happened, however.\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 22 Jul 2001 10:22:48 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Incomplete idea about views and\n INSERT...RETURNING"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> While this all seems good at first glance, I am wondering just how\n> useful it really would be in practice. The problem is: how do you know\n> which rows to return in the RETURNS query? If you don't qualify the\n> selection then you'll get all the rows in the view, which is surely not\n> what you want. You could restrict the select with clauses like \"WHERE\n> col1 = NEW.col1\", but this is not necessarily going to be efficient, and\n> what's worse it only works for columns that are supplied by the initial\n> insert into the view. For example, suppose an underlying table has a\n> SERIAL primary key that's generated on the fly when you insert to it.\n> The RETURNS query has no way to know what that serial number is, and so\n> no way to select the right row. It seems like the rule author is up\n> against the very same problem that we wanted INSERT RETURNING to solve.\n> \n> So I'm still baffled, unless someone sees a way around that problem.\n> \n> Could we get away with restricting INSERT RETURNING to work only on\n> inserts directly to tables (no ON INSERT DO INSTEAD allowed)? Or is\n> that too much of a kluge?\n\nIsn't it likely that the person writing the RULE would want to internally use an\nINSERT ... RETURNING query and that the RETURNS ... should either use values from\nthat, or use a SELECT clause keyed on values from that?\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64(27)246-7091, Fax:+64(4)499-5596, Office: +64(4)499-2267xtn709\n",
"msg_date": "Mon, 23 Jul 2001 06:02:42 +1200",
"msg_from": "Andrew McMillan <andrew@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: Incomplete idea about views and INSERT...RETURNING"
},
{
"msg_contents": "Andrew McMillan <andrew@catalyst.net.nz> writes:\n> Tom Lane wrote:\n>> Could we get away with restricting INSERT RETURNING to work only on\n>> inserts directly to tables (no ON INSERT DO INSTEAD allowed)? Or is\n>> that too much of a kluge?\n\n> Isn't it likely that the person writing the RULE would want to\n> internally use an INSERT ... RETURNING query and that the RETURNS\n> ... should either use values from that, or use a SELECT clause keyed\n> on values from that?\n\nHmm, so we'd allow INSERT RETURNING to be the last statement of an\nON INSERT DO INSTEAD rule, and the RETURNING clause would either be\ndropped (if rewriting a plain INSERT) or used to form the outputs\n(if rewriting INSERT RETURNING). Kind of limited maybe, but it would\nwork for simple cases, which is a lot better than none at all...\n\nThe trouble with INSERT RETURNING followed by SELECT is that a rule\nhas noplace to keep the results: it hasn't got any local variables.\n(And I don't think I want to invent such a feature, at least not on\nthe spur of the moment.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Jul 2001 18:50:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Incomplete idea about views and INSERT...RETURNING "
}
] |
[
{
"msg_contents": "----- Original Message ----- \nFrom: eCommerce Software Solutions Inc. \nTo: pgsql-bugs@postgresql.org \nCc: pgsql-cygwin-request@postgresql.org \nSent: Saturday, July 21, 2001 9:15 PM\nSubject: Leaking Handles in Postgres 7.1.2 on Cygwin dll 1.3.2 on Win 2000\n\n\n\n\nThe situation is this:\n\nI have cygwin with ( dll 1.3.2 ) and latest Postgresql 7.1.2 on Win 2000\nwith SP1.\n\nI use Java and JDBC to connect from a Windows to Postgresql server to do a\nvery simple select:\nselect count(*) from table1; // returns count of 2\n\nIt works fine.\n\nNow I run this on 10 threads in my Java program. Each thread loops for\n100000000 times.\n\nWhen I do this every thing appears to be ok at first. Then, I realize that I\nam loosing free memory really fast.\n\nI go in the \"task manager\" in windows 2000 and look at the memory usage for\neach process. It is fine i.e not growing.\n\nBut Available physical memory is going down really fast. I have no clue at\nfirst.\n\nThen I notice that in Performance tab of Windows task manager, under Totals,\nthe handles is running very fast.\n\nI discovered that it begins from 4080 and goes on incrimenting ( to a very\nlarge number ) until I run out of memory.\n\nSince both client and DB server are on the same machine it is hard to tell\nwhich is leaking handles!\n\nNow I moved the client to another machine. The client uses JDBC to connect\nto the PG Database running in Win2000 Cygwin environment on another Machine.\nI looked at the Windows Task Monitor to notice that there are no leaking\nhandles on the client Machine. Therefore leak is not in my Program.\n\nThe handles are being leaked by PG on the Machine acting as DB Server in\nCygwin environment.\n\nI hope this isolates the problem further to PG and Cygwin and not JDBC and\nClient code.\n\nLets fix this problem.\n\nThanks,\n\nVinay\n",
"msg_date": "Sat, 21 Jul 2001 21:19:11 -0700",
"msg_from": "\"eCommerce Software Solutions Inc.\" <vinaysoni1@home.com>",
"msg_from_op": true,
"msg_subject": "Fw: Leaking Handles in Postgres 7.1.2 on Cygwin dll 1.3.2 on Win 2000"
}
] |
[
{
"msg_contents": "I'm trying create a unique index using more than one field and\napplying a function in one field to achieve case insensitive\nuniqueness but postgresql doesn't accept. \n\ncreate table a( \n id int primary key,\n id2 int not null,\n name varchar(50),\n unique(id2, lower(name))\n ); \n\nAnyone have an idea ?\n",
"msg_date": "22 Jul 2001 03:16:55 -0700",
"msg_from": "domingo@dad-it.com (Domingo Alvarez Duarte)",
"msg_from_op": true,
"msg_subject": "unique index doesn't accept functions on fields"
},
{
"msg_contents": "Domingo Alvarez Duarte wrote:\n\n> I'm trying create a unique index using more than one field and\n> applying a function in one field to achieve case insensitive\n> uniqueness but postgresql doesn't accept.\n> \n> create table a(\n> id int primary key,\n> id2 int not null,\n> name varchar(50),\n> unique(id2, lower(name))\n> );\n\nHave you tried to just CREATE TABLE and later CREATE INDEX UNIQUE\nUSING... ?\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-2-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n",
"msg_date": "Mon, 23 Jul 2001 11:04:09 +0300",
"msg_from": "Alessio Bragadini <alessio@albourne.com>",
"msg_from_op": false,
"msg_subject": "Re: unique index doesn't accept functions on fields"
},
{
"msg_contents": "> > I'm trying create a unique index using more than one field and\n> > applying a function in one field to achieve case insensitive\n> > uniqueness but postgresql doesn't accept.\n> > \n> > create table a(\n> > id int primary key,\n> > id2 int not null,\n> > name varchar(50),\n> > unique(id2, lower(name))\n> > );\n> \n> Have you tried to just CREATE TABLE and later CREATE INDEX UNIQUE\n> USING... ?\n\nPostgres does not support functional indexing on multi-key indices.\n\nChris\n\n",
"msg_date": "Tue, 24 Jul 2001 09:23:08 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "RE: Re: unique index doesn't accept functions on fields"
}
] |
[
{
"msg_contents": "> I notice from your postings on the PGSQL-hackers archives that you have been\n> experimenting with PGSQL and Pgbench. I have not been able to get PGSQL to\n> scale on multiple processors. Tom Lane says this is due to the\n> implementation of spin-locks. Have you made any fixes or additions to PGSQL\n> to improve scalability on multiple processors?\n\nNo.\n\n> Is there anything else I\n> should try?\n\nThe only thing I noticed to increase the performance so far is extend\nthe deadlock_timeout parameter.\n\nBTW, wal_sync_method parameter seems also impact the performance. I'm\nnot sure what kind of platform you are using, but from my experience\nopen_sync is the best for Unix, while fdatasync is good for Solaris.\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 22 Jul 2001 20:09:52 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Pgbench Performance on PGSQL"
}
] |
[
{
"msg_contents": "Sorry, but I'm not familiar with TH8TISASCII encoding at all. Is there\nany documentation for it on the web?\n--\nTatsuo Ishii\n\n> Dear Sir,\n> \n> I have obtained your email address through the PostgreSQL documentation. I\n> understood you are the specialist of the localisation on PostgreSQL, and was\n> hoping you could help me on a specific problem I am working at the moment,\n> or maybe redirect me to someone who could help me.\n> \n> We have developed a software solution based on Microsoft Windows and Oracle\n> database. One of our new external module though (Point of sale) is using the\n> Linux environment (Red Hat 7.1 distribution) and PostgreSQL 7.1 database.\n> One of our customer is based in Bangkok and is using Oracle database in the\n> Thai character set. The Oracle character set used by our customer is\n> TH8TISASCII (NLS_NCHAR_CHARACTERSET & NLS_CHARACTERSET). They are using a\n> Thai version of Windows, and it stores and retrieve the data correctly. We\n> still have some trouble exporting the Thai characters, but my question for\n> you is more related to how I should setup the PostgreSQL database for the\n> import. I noticed that there is no locale for Thai defined in the\n> documentation. What do you think would be the best way to set up PostgreSQL\n> ? SQL_ASCII ? UNICODE ? Something else ?\n> \n> I thank you in advance for your time.\n> \n> Best Regards,\n> Alex Crettol\n> \n",
"msg_date": "Sun, 22 Jul 2001 20:10:16 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Thai data import into PostgreSQL"
}
] |
[
{
"msg_contents": "> Sorry for writting to you but I can't go further because we are having\n> trouble trying to set multibyte in our environment. When I tried to run\n> ./configure --enable-multibyte=LATIN2 and after this gmake, I receive lots\n> of errors and I don't know how to proceed. I'm from Brazil and I'm using\n> slackware 7.2 english.\n\nWhat kind of errors are you having exactly?\n\n> When I'm using a normal user at tty and set LC_ALL=\"pt_BR\" everything works\n> and I can write all the characters in our language.\n> The problem is when I was accessing the database with a window$ machine and\n> I write somo characters, when I close my application and open it again, this\n> characteres are different, � becomes # and so on.\n> Please help me because we have lots of people trying not to use linux here\n> and everything is a moto to don't use it.\n> Thank you very much!\n> Regards,\n> \n> Neme Adas Neto\n> Sao Paulo - Brazil\n> \n",
"msg_date": "Sun, 22 Jul 2001 20:10:26 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Multibyte in postgresql"
}
] |
[
{
"msg_contents": "When I delete a table that has an OID, the OID does not get deleted\ncorrect? How can I delete the data from the large object?\n\n",
"msg_date": "Mon, 23 Jul 2001 13:49:07 +1000 (EST)",
"msg_from": "Grant <grant@conprojan.com.au>",
"msg_from_op": true,
"msg_subject": "Large objects and table deletion."
}
] |
[
{
"msg_contents": "Would it be possible to offer an option for the OID column to get its value\nfrom an int4 primary key (settable on a per table basis maybe)?\n- Stuart\n\n> -----Original Message-----\n> From:\tHiroshi Inoue [SMTP:Inoue@tpf.co.jp]\n> Sent:\tSaturday, July 21, 2001 7:31 AM\n> To:\tZeugswetter Andreas SB\n> Cc:\tPostgreSQL-development\n> Subject:\tRE: OID wraparound (was Re: pg_depend)\n> \n> > -----Original Message-----\n> > Zeugswetter Andreas SB\n> > \n> > > As I mentioned already I'm implementing updatable cursors\n> > > in ODBC and have half done it. If OIDs would be optional\n> > > my trial loses its validity but I would never try another\n> > > implementation.\n> > \n> > But how can you do that ? The oid index is only created by \n> > the dba for specific tables, thus your update would do an update\n> > with a where restriction, that is not indexed. \n> > This would be darn slow, no ?\n> > \n> \n> Please look at my another(previous ?) posting to pgsql-hackers.\n> I would use both TIDs and OIDs, TIDs for fast access, OIDs\n> for identification.\n> \n> > How about instead selecting the primary key and one of the tid's \n> > (I never remember which, was it ctid ?) instead, so you can validate\n> > when a row changed between the select and the update ? \n> > \n> \n> Xmin is also available for row-versioning. But now I'm wondering\n> if TID/xmin are guranteed to keep such characteriscs.\n> Even Object IDentifier is about to lose the existence. \n> Probably all-purpose application mustn't use system columns\n> at all though I've never heard of it in other dbms-s.\n> \n> regards,\n> Hiroshi Inoue\n",
"msg_date": "Mon, 23 Jul 2001 12:10:28 +0100",
"msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>",
"msg_from_op": true,
"msg_subject": "RE: OID wraparound (was Re: pg_depend)"
},
{
"msg_contents": "\"Henshall, Stuart - WCP\" wrote:\n> \n> Would it be possible to offer an option for the OID column to get its value\n> from an int4 primary key (settable on a per table basis maybe)?\n> - Stuart\n> \n\nSorry I don't understand well what you mean.\nWhat kind of advantages are there if we let OIDs be optional\nand allow such options like you offer ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 24 Jul 2001 10:37:08 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: RE: OID wraparound (was Re: pg_depend)"
}
] |
[
{
"msg_contents": "We are evaluating PostgreSQL for a possible port to our\nproprietary hardware platform. The hardware is a very\nhigh end (processing power, I/O throughput, storage capacity)\nstorage system, attached to a host machine running Windows2K.\nThe question is what's the right way to do it. The following\nis a brief description of what we think could be done, we would\nlike to know your opinion about whether we are on the right\ntrack.\n\nThe plan is to extend PostgreSQL with data access functions\nto be executed on the storage hardware. Most of the backend\ncode would be running on the host machine under Win2K, but \nuser data queries would be dispatched to the storage system, \nwhere the user tables will be searched and then the results\nwill be returned to the host.\n\nOn the host, most of the PostgreSQL will run unchanged, including \nthe front end, the backend servers: the parser, planner, catalog,\nand the executor. The existing heapam interface is still used\nto access system tables. The system tables will be stored and\naccessed using the existing storage functions from files into \nthe host machine memory and accessed through the buffer cache\non the host machine.\n\nFor user tables, the plan is to modify all the components that\ncall heapam interface (mainly Command and Executor) for user data\nto call a new 'extended heapam', which basically has the same\ninterface of the heapam but will send the request to the storage\nsystem. Page/record locking will also be changed to call the \nextended heapam. \n\nWe would like to get your feedback about this approach - are we on the\nright track or is it a waste of time?\n\nHsin H. Lee\nPyxsys Corporation\n142 Q North Road\nSudbury, MA 01776\nTel: 978-371-9115 ext. 116\nEmail: hlee@pyxsys.net\n\n",
"msg_date": "Mon, 23 Jul 2001 11:03:29 -0400",
"msg_from": "\"Hsin Lee\" <hlee@pyxsys.net>",
"msg_from_op": true,
"msg_subject": "Question about porting the PostgreSQL "
},
{
"msg_contents": "\"Hsin Lee\" <hlee@pyxsys.net> writes:\n> We would like to get your feedback about this aproach - are we on the\n> right track or is it a waste of time?\n\nImpossible to tell, since you haven't said word one about what this\nbox is or what it can do. If it were plain storage hardware, why do\nyou need to muck with the innards of Postgres at all? Just use it\nas disk. If it's not plain storage, you'll need to be a lot more\nspecific about what you expect the box to do. If there's lots of\nprocessing power in the box, why don't you just run *all* of Postgres\ninside the box? (Running any part of PG on Win2K is not my idea\nof the correct solution in any case ;-).)\n\nFWIW, I find it very hard to visualize a case where I'd think that\nreplacing heapam is the right approach. Replacing the storage\nmanager could be the right approach for certain situations, but\nreplacing heapam means replacing a lot of extremely critical\n(read breakable) code for no obvious reason.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Jul 2001 03:01:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about porting the PostgreSQL "
},
{
"msg_contents": "\n>Impossible to tell, since you haven't said word one about what this\n>box is or what it can do. If it were plain storage hardware, why do\n\nthanks for your reply. Yes I know it's hard to explain\nwhy we plan to do what I described without explaining more\nabout the hardware we have. So it sounds like it's possible\nbut very difficult due to the sensitive code to be changed.\n\nHsin H. Lee\n\n\n",
"msg_date": "Fri, 27 Jul 2001 16:37:17 -0400",
"msg_from": "\"Hsin Lee\" <hlee@pyxsys.net>",
"msg_from_op": true,
"msg_subject": "RE: Question about porting the PostgreSQL "
}
] |
[
{
"msg_contents": "I'd like to have statistics on when my database was last backed up or\nvacuumed. Currently, I'm implementing this by using simple shell\nscripts that write a date stamp to ascii files. I was wondering\nwhether this is or could be a feature added to Postgres?\n\nFor example, could one of the pg_* tables contain the fields\n'last_vacuum' or 'last_backup' (which would be updated every time the\nvacuum or pg_dump command was executed).\n\nPerhaps something like this exists that I'm unaware of?\n\n-Tony\n",
"msg_date": "23 Jul 2001 15:37:54 -0700",
"msg_from": "reina@nsi.edu (Tony Reina)",
"msg_from_op": true,
"msg_subject": "Does/Can PG store administrative statistics?"
}
] |
[
{
"msg_contents": "Hi there,\n\t\n\tI remember that in earlier versions of Postgres\n\tyou had to do something (which I cannot remember) to enable\n\ta user to create plpgsql functions.\n\t\n\twhich versions of postgres were they?\n\nthanks in advance.\n\nBill\n-- \nThe mark of a good party is that you wake up the\nnext morning wanting to change your name and start\na new life in different city.\n\t\t-- Vance Bourjaily, \"Esquire\"\n---------------------------------------------\nBill Shui\t\tBioinformatics Programmer\nEmail: wshui@cse.unsw.edu.au\n",
"msg_date": "Tue, 24 Jul 2001 11:51:43 +1000",
"msg_from": "Bill Shui <wshui@cse.unsw.edu.au>",
"msg_from_op": true,
"msg_subject": "plpgsql."
},
{
"msg_contents": "Bill Shui wrote:\n\n>Hi there,\n>\t\n>\tI remember that in earlier versions of Postgres.\n>\tYou have to do something (which I cannnot remember) to enable\n>\ta user to create plpgsql functions.\n>\t\n>\twhich versions of postgres were they?\n>\n>thanks in advance.\n>\n>Bill\n>\nCREATELANG as a command\n\n\n",
"msg_date": "Tue, 24 Jul 2001 00:54:46 -0500",
"msg_from": "Thomas Swan <tswan@olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql."
}
] |
[
{
"msg_contents": "Try this:\n\ntest=# create table test (a int4);\nCREATE\ntest=# grant select, update on te\n\nStop there and press 'TAB' to complete the word 'test'.\n\nYour command line then gets rewritten to :\n\ngrant select, update on SET\n\nIt seems that it occurs when you have commas in there...\n\nChris\n\n",
"msg_date": "Tue, 24 Jul 2001 10:55:31 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Bug in psql tab completion"
},
{
"msg_contents": "\nI just checked and putting any text in place of 'te' generates 'SET'. \nThis is because the tab completion code thinks 'UPDATE ON te' is an\nupdate on table ON. The table completion stuff is pretty good, but not\nperfect.\n\n\n> Try this:\n> \n> test=# create table test (a int4);\n> CREATE\n> test=# grant select, update on te\n> \n> Stop there and press 'TAB' to complete the word 'test'.\n> \n> Your command line then gets rewritten to :\n> \n> grant select, update on SET\n> \n> It seems that it occurs when you have commas in there...\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 25 Jul 2001 18:48:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug in psql tab completion"
}
] |
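Bruce's diagnosis above — the completer sees "UPDATE ON te" and treats it as an UPDATE statement on table ON — can be illustrated with a toy word-based completer. This is a hypothetical model for illustration only, not psql's actual tab-complete code:

```python
# Toy model of word-based tab completion: it inspects only the last few
# words before the cursor, so a GRANT with a comma-separated privilege
# list can accidentally match the UPDATE-statement rule.
def complete(line):
    words = line.upper().split()
    # UPDATE-statement rule: "UPDATE <table> <partial>" -> suggest SET
    if len(words) >= 3 and words[-3] == "UPDATE":
        return "SET"
    # GRANT ... ON <partial> -> suggest a table name
    if len(words) >= 2 and words[-2] == "ON":
        return "<table name>"
    return None

print(complete("update mytable se"))            # SET (the rule's intended case)
print(complete("grant select on te"))           # <table name> (works: no comma)
print(complete("grant select, update on te"))   # SET  <- the reported bug
```

Because "update" and "on" land in exactly the positions the UPDATE rule checks for, the context-free word matcher fires the wrong rule — which is why the bug only appears when the privilege list contains a comma.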
[
{
"msg_contents": "I was thinking that this would help stop OID wrap around while not totally\nbreaking clients that used OIDs as row identifiers, as they'd now have the\nint4 primary key value (although I guess there could be risks if the client\nassumes they'd be globally unique). Also the primary key would have to be\nplaced into the OID in all places it could be referenced (for WHERE\nclauses, etc...). It'd only work on those tables that had int4 primary keys,\nbut I suspect that's a fair few. I don't know whether this'd be worthwhile,\nbut was rather throwing it out for thought.\n- Stuart\n\n> -----Original Message-----\n> From:\tHiroshi Inoue [SMTP:Inoue@tpf.co.jp]\n> Sent:\tTuesday, July 24, 2001 2:37 AM\n> To:\tHenshall, Stuart - WCP\n> Cc:\t'pgsql-hackers@postgresql.org'\n> Subject:\tRe: [HACKERS] RE: OID wraparound (was Re: pg_depend)\n> \n> \"Henshall, Stuart - WCP\" wrote:\n> > \n> > Would it be possible to offer an option for the OID column to get its\n> value\n> > from an int4 primary key (settable on a per table basis maybe)?\n> > - Stuart\n> > \n> \n> Sorry I don't understand well what you mean.\n> What kind of advantages are there if we let OIDs be optional\n> and allow such options like you offer ?\n> \n> regards,\n> Hiroshi Inoue\n",
"msg_date": "Tue, 24 Jul 2001 08:53:51 +0100",
"msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>",
"msg_from_op": true,
"msg_subject": "RE: RE: OID wraparound (was Re: pg_depend)"
}
] |
[
{
"msg_contents": "Hi,\n\nJust created a db from a pg_dump file and got this error:\n\nERROR: copy: line 602, Bad timestamp external representation '2000-10-03\n09:01:60.00+00'\n\nI guess its a bad representation because 09:01:60.00+00 is actually 09:02,\nbut how could it have got into my database/can I do anything about it? The\nvalue must have been inserted by my app via JDBC, I can't insert that value\ndirectly via psql.\n\nThanks,\nTamsin\n\n version\n---------------------------------------------------------------------\n PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66\n(1 row)\n\n",
"msg_date": "Tue, 24 Jul 2001 10:15:42 +0100",
"msg_from": "\"tamsin\" <tg_mail@bryncadfan.co.uk>",
"msg_from_op": true,
"msg_subject": "Bad timestamp external representation"
},
{
"msg_contents": "From: \"tamsin\" <tg_mail@bryncadfan.co.uk>\n\n> Hi,\n>\n> Just created a db from a pg_dump file and got this error:\n>\n> ERROR: copy: line 602, Bad timestamp external representation '2000-10-03\n> 09:01:60.00+00'\n>\n> I guess its a bad representation because 09:01:60.00+00 is actually 09:02,\n> but how could it have got into my database/can I do anything about it?\nThe\n> value must have been inserted by my app via JDBC, I can't insert that\nvalue\n> directly via psql.\n\nSeem to remember a bug in either pg_dump or timestamp rendering causing\nrounding-up problems like this. If no-one else comes up with a definitive\nanswer, check the list archives. If you're not running the latest release,\ncheck the change-log.\n\nHTH\n\n- Richard Huxton\n\n",
"msg_date": "Tue, 24 Jul 2001 13:58:29 +0100",
"msg_from": "\"Richard Huxton\" <dev@archonet.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad timestamp external representation"
},
{
"msg_contents": "It's a bug in timestamp output.\n\n# select '2001-07-24 15:55:59.999'::timestamp;\n ?column? \n---------------------------\n 2001-07-24 15:55:60.00-04\n(1 row)\n\nRichard Huxton wrote:\n> \n> From: \"tamsin\" <tg_mail@bryncadfan.co.uk>\n> \n> > Hi,\n> >\n> > Just created a db from a pg_dump file and got this error:\n> >\n> > ERROR: copy: line 602, Bad timestamp external representation '2000-10-03\n> > 09:01:60.00+00'\n> >\n> > I guess its a bad representation because 09:01:60.00+00 is actually 09:02,\n> > but how could it have got into my database/can I do anything about it?\n> The\n> > value must have been inserted by my app via JDBC, I can't insert that\n> value\n> > directly via psql.\n> \n> Seem to remember a bug in either pg_dump or timestamp rendering causing\n> rounding-up problems like this. If no-one else comes up with a definitive\n> answer, check the list archives. If you're not running the latest release,\n> check the change-log.\n> \n> HTH\n> \n> - Richard Huxton\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \nJoseph Shraibman\njks@selectacast.net\nIncrease signal to noise ratio. http://www.targabot.com\n",
"msg_date": "Tue, 24 Jul 2001 15:57:00 -0400",
"msg_from": "Joseph Shraibman <jks@selectacast.net>",
"msg_from_op": false,
"msg_subject": "Re: Bad timestamp external representation"
},
{
"msg_contents": "\nI can confirm that current CVS sources have the same bug.\n\n> It's a bug in timestamp output.\n> \n> # select '2001-07-24 15:55:59.999'::timestamp;\n> ?column? \n> ---------------------------\n> 2001-07-24 15:55:60.00-04\n> (1 row)\n> \n> Richard Huxton wrote:\n> > \n> > From: \"tamsin\" <tg_mail@bryncadfan.co.uk>\n> > \n> > > Hi,\n> > >\n> > > Just created a db from a pg_dump file and got this error:\n> > >\n> > > ERROR: copy: line 602, Bad timestamp external representation '2000-10-03\n> > > 09:01:60.00+00'\n> > >\n> > > I guess its a bad representation because 09:01:60.00+00 is actually 09:02,\n> > > but how could it have got into my database/can I do anything about it?\n> > The\n> > > value must have been inserted by my app via JDBC, I can't insert that\n> > value\n> > > directly via psql.\n> > \n> > Seem to remember a bug in either pg_dump or timestamp rendering causing\n> > rounding-up problems like this. If no-one else comes up with a definitive\n> > answer, check the list archives. If you're not running the latest release,\n> > check the change-log.\n> > \n> > HTH\n> > \n> > - Richard Huxton\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> -- \n> Joseph Shraibman\n> jks@selectacast.net\n> Increase signal to noise ratio. http://www.targabot.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://www.postgresql.org/search.mpl\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 25 Jul 2001 18:53:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Bad timestamp external representation"
},
{
"msg_contents": "On Wed, Jul 25, 2001 at 06:53:21PM -0400, Bruce Momjian wrote:\n> \n> I can confirm that current CVS sources have the same bug.\n> \n> > It's a bug in timestamp output.\n> > \n> > # select '2001-07-24 15:55:59.999'::timestamp;\n> > ?column? \n> > ---------------------------\n> > 2001-07-24 15:55:60.00-04\n> > (1 row)\n> > \n> > Richard Huxton wrote:\n> > > \n> > > From: \"tamsin\" <tg_mail@bryncadfan.co.uk>\n> > > \n> > > > Hi,\n> > > >\n> > > > Just created a db from a pg_dump file and got this error:\n> > > >\n> > > > ERROR: copy: line 602, Bad timestamp external representation '2000-10-03\n> > > > 09:01:60.00+00'\n> > > >\n> > > > I guess its a bad representation because 09:01:60.00+00 is actually 09:02,\n> > > > but how could it have got into my database/can I do anything about it?\n> > > The\n> > > > value must have been inserted by my app via JDBC, I can't insert that\n> > > value\n> > > > directly via psql.\n> > > \n> > > Seem to remember a bug in either pg_dump or timestamp rendering causing\n> > > rounding-up problems like this. If no-one else comes up with a definitive\n> > > answer, check the list archives. If you're not running the latest release,\n> > > check the change-log.\n\nIt is not a bug, in general, to generate or accept times like 09:01:60. \nLeap seconds are inserted as the 60th second of a minute. ANSI C \ndefines the range of struct member tm.tm_sec as \"seconds after the \nminute [0-61]\", inclusive, and strftime format %S as \"the second\nas a decimal number (00-61)\". A footnote mentions \"the range [0-61]\nfor tm_sec allows for as many as two leap seconds\".\n\nThis is not to say that pg_dump should misrepresent stored times,\nbut rather that PG should not reject those misrepresented times as \nbeing ill-formed. We were lucky that PG has the bug which causes\nit to reject these times, as it led to the other bug in pg_dump being\nnoticed.\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Wed, 25 Jul 2001 16:31:31 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Bad timestamp external representation"
},
{
"msg_contents": "> On Wed, Jul 25, 2001 at 06:53:21PM -0400, Bruce Momjian wrote:\n> > \n> > I can confirm that current CVS sources have the same bug.\n> > \n> > > It's a bug in timestamp output.\n> > > \n> > > # select '2001-07-24 15:55:59.999'::timestamp;\n> > > ?column? \n> > > ---------------------------\n> > > 2001-07-24 15:55:60.00-04\n> > > (1 row)\n> > > \n> > > Richard Huxton wrote:\n> > > > \n> > > > From: \"tamsin\" <tg_mail@bryncadfan.co.uk>\n> > > > \n> > > > > Hi,\n> > > > >\n> > > > > Just created a db from a pg_dump file and got this error:\n> > > > >\n> > > > > ERROR: copy: line 602, Bad timestamp external representation '2000-10-03\n> > > > > 09:01:60.00+00'\n> > > > >\n> > > > > I guess its a bad representation because 09:01:60.00+00 is actually 09:02,\n> > > > > but how could it have got into my database/can I do anything about it?\n> > > > The\n> > > > > value must have been inserted by my app via JDBC, I can't insert that\n> > > > value\n> > > > > directly via psql.\n> > > > \n> > > > Seem to remember a bug in either pg_dump or timestamp rendering causing\n> > > > rounding-up problems like this. If no-one else comes up with a definitive\n> > > > answer, check the list archives. If you're not running the latest release,\n> > > > check the change-log.\n> \n> It is not a bug, in general, to generate or accept times like 09:01:60. \n> Leap seconds are inserted as the 60th second of a minute. ANSI C \n> defines the range of struct member tm.tm_sec as \"seconds after the \n> minute [0-61]\", inclusive, and strftime format %S as \"the second\n> as a decimal number (00-61)\". A footnote mentions \"the range [0-61]\n> for tm_sec allows for as many as two leap seconds\".\n> \n> This is not to say that pg_dump should misrepresent stored times,\n> but rather that PG should not reject those misrepresented times as \n> being ill-formed. We were lucky that PG has the bug which causes\n> it to reject these times, as it led to the other bug in pg_dump being\n> noticed.\n\nWe should access :60 seconds but we should round 59.99 to 1:00, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 26 Jul 2001 17:38:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad timestamp external representation"
},
{
"msg_contents": "On Thu, Jul 26, 2001 at 05:38:23PM -0400, Bruce Momjian wrote:\n> Nathan Myers wrote:\n> > Bruce wrote:\n> > > \n> > > I can confirm that current CVS sources have the same bug.\n> > > \n> > > > It's a bug in timestamp output.\n> > > > \n> > > > # select '2001-07-24 15:55:59.999'::timestamp;\n> > > > ?column? \n> > > > ---------------------------\n> > > > 2001-07-24 15:55:60.00-04\n> > > > (1 row)\n> > > > \n> > > > Richard Huxton wrote:\n> > > > > \n> > > > > From: \"tamsin\" <tg_mail@bryncadfan.co.uk>\n> > > > > \n> > > > > > Hi,\n> > > > > >\n> > > > > > Just created a db from a pg_dump file and got this error:\n> > > > > >\n> > > > > > ERROR: copy: line 602, Bad timestamp external representation \n> > > > > > '2000-10-03 09:01:60.00+00'\n> > > > > >\n> > > > > > I guess its a bad representation because 09:01:60.00+00\n> > > > > > is actually 09:02, but how could it have got into my\n> > > > > > database/can I do anything about it? The value must have\n> > > > > > been inserted by my app via JDBC, I can't insert that value\n> > > > > > directly via psql.\n> > > > >\n> > > > > Seem to remember a bug in either pg_dump or timestamp\n> > > > > rendering causing rounding-up problems like this. If no-one\n> > > > > else comes up with a definitive answer, check the list\n> > > > > archives. If you're not running the latest release, check the\n> > > > > change-log.\n> >\n> > It is not a bug, in general, to generate or accept times like\n> > 09:01:60. Leap seconds are inserted as the 60th second of a minute.\n> > ANSI C defines the range of struct member tm.tm_sec as \"seconds\n> > after the minute [0-61]\", inclusive, and strftime format %S as \"the\n> > second as a decimal number (00-61)\". A footnote mentions \"the range\n> > [0-61] for tm_sec allows for as many as two leap seconds\".\n> >\n> > This is not to say that pg_dump should misrepresent stored times,\n> > but rather that PG should not reject those misrepresented times as\n> > being ill-formed. We were lucky that PG has the bug which causes it\n> > to reject these times, as it led to the other bug in pg_dump being\n> > noticed.\n>\n> We should access :60 seconds but we should round 59.99 to 1:00, right?\n\nIf the xx:59.999 occurred immediately before a leap second, rounding it\nup to (xx+1):00.00 would introduce an error of 1.001 seconds.\n\nAs I understand it, the problem is in trying to round 59.999 to two\ndigits. My question is, why is pg_dump representing times with less \nprecision than PostgreSQL's internal format? Should pg_dump be lossy?\n\nNathan Myers\nncm@zembu.com\n",
"msg_date": "Thu, 26 Jul 2001 15:13:34 -0700",
"msg_from": "ncm@zembu.com (Nathan Myers)",
"msg_from_op": false,
"msg_subject": "Re: Bad timestamp external representation"
},
{
"msg_contents": "> > > It is not a bug, in general, to generate or accept times like\n> > > 09:01:60. Leap seconds are inserted as the 60th second of a minute.\n> > > ANSI C defines the range of struct member tm.tm_sec as \"seconds\n> > > after the minute [0-61]\", inclusive, and strftime format %S as \"the\n> > > second as a decimal number (00-61)\". A footnote mentions \"the range\n> > > [0-61] for tm_sec allows for as many as two leap seconds\".\n> > >\n> > > This is not to say that pg_dump should misrepresent stored times,\n> > > but rather that PG should not reject those misrepresented times as\n> > > being ill-formed. We were lucky that PG has the bug which causes it\n> > > to reject these times, as it led to the other bug in pg_dump being\n> > > noticed.\n> >\n> > We should access :60 seconds but we should round 59.99 to 1:00, right?\n> \n> If the xx:59.999 occurred immediately before a leap second, rounding it\n> up to (xx+1):00.00 would introduce an error of 1.001 seconds.\n\nOh, so there is a good reason for showing :60.\n\n> As I understand it, the problem is in trying to round 59.999 to two\n> digits. My question is, why is pg_dump representing times with less \n> precision than PostgreSQL's internal format? Should pg_dump be lossy?\n\nNo idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 26 Jul 2001 19:42:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad timestamp external representation"
},
{
"msg_contents": "At 15:13 26/07/01 -0700, Nathan Myers wrote:\n>Should pg_dump be lossy?\n\nNo it shouldn't, but it already is because it uses decimal text reps of\neverything; we lose data when dumping floats as well. In the latter case we\nshould dump the hex text reps to get the full bit width. Something similar\nis probably true for times etc. It's just a lot less readable.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 27 Jul 2001 12:18:56 +1000",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Bad timestamp external representation"
}
] |
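The thread above traces the ":60.00" output to rounding fractional seconds to two digits without carrying the overflow into the minute. A minimal Python sketch of that failure mode and a carry-aware fix — illustrative only, not PostgreSQL's actual C formatting code:

```python
# Naive rendering: round seconds to 2 decimals, never carry upward.
def render_naive(minute, sec):
    return "%02d:%05.2f" % (minute, round(sec, 2))

# Carry-aware rendering: if the rounded seconds reach 60, bump the minute.
def render_carry(minute, sec):
    s = round(sec, 2)
    if s >= 60.0:
        s -= 60.0
        minute += 1
    return "%02d:%05.2f" % (minute, s)

print(render_naive(55, 59.999))  # 55:60.00  <- the reported bug
print(render_carry(55, 59.999))  # 56:00.00
```

Note Nathan Myers' caveat: a genuine leap second really is ":60", so a renderer that always carries would misrepresent one — correct code must distinguish a stored leap second from a rounding overflow, which the sketch above does not attempt.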
[
{
"msg_contents": "Hello friends,\n\nWhat is the best way to parse and store an XML document in PostgreSQL?\nI would like to store fwbuilder (http://www.fwbuilder.org) objects in \nPostgreSQL.\n\nAny information is welcome.\n\nRegards, Jean-Michel POURE\npgAdmin Development Team\n\n",
"msg_date": "Tue, 24 Jul 2001 15:11:31 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Storing XML in PostgreSQL"
},
{
"msg_contents": "In article <4.2.2.20010724150449.00a9ea90@192.168.0.67>,\njm.poure@freesurf.fr (Jean-Michel POURE) wrote:\n> Hello friends,\n> \n> What is the best way to parse and store an XML document in PostgreSQL? I\n> would like to store fwbuilder (http://www.fwbuilder.org) objects in \n> PostgreSQL.\n> \n\nI think the best way depends on what you're trying to achieve with the\ndocument once you have it in the database. One approach is to have tables\nfor elements, attributes and cdata and use an XML parser to insert\nappropriate database records.\n\nI have used a schema such as the following- in the cdata table \"element\" is\nthe ID of the containing element, and itempos is just an integer used to\norder the entries. I used this with a bit of java which hooks up to the\nLark parser (using SAX) to do the parsing and fires off INSERT queries \nthrough the jdbc driver. \n\nCREATE SEQUENCE cdata_seq; \nCREATE SEQUENCE attribute_seq; \nCREATE SEQUENCE element_seq; \n\nCREATE TABLE element (\n document integer, element integer not null PRIMARY KEY\n default nextval('element_seq'), name text, parent integer, itempos\n integer\n );\n\nCREATE TABLE attribute (\n document integer, attribute integer not null default\n nextval('attribute_seq'), name text, value text, element integer,\n itempos integer\n );\n\n\nCREATE TABLE cdata (\n document integer, cdata integer not null default\n nextval('cdata_seq'), value text, element integer, itempos integer\n );\n\nIn my example, I was interested in selecting all the cdata content \nof a <type> tag immediately contained within a <feature> tag path.\n\nThe easiest solution is to create a view, which can then be queried to \nfind all cases where, for example, feature type = 'Ditch'.\n\nCREATE VIEW featuretype AS\nSELECT c.document,c.value\nFROM cdata c, element e, element e1\nWHERE c.element = e.element \nAND e.parent = e1.element\nAND e.name = 'type'\nAND e1.name = 'feature'\nAND c.document = e.document\nAND e.document = e1.document;\n\nif you are interested I can provide the very basic (java) code I used for \nthis.\n\nOR, depending on what these fwbuilder objects involve, you can of \ncourse just store XML documents in fields of type text (especially if \nyou use 7.1 which has TOAST, so you can store long documents). It's \nnot difficult to hook up a parser (I'm using expat) to a PostgreSQL \nfunction written in C and parse on the fly.\n\nI haven't really finished that code, but after I've commented it, I can \ncertainly post it if anyone is interested. It does work, but probably\nneeds some tidying. It really wasn't difficult to write the functions \nthough. In fact, I've been surprised by how easy it is to write \nPostgreSQL C functions...\n\nPlease contact me if you have any questions -I've been away for a bit\nso haven't worked on that code for a couple of weeks -I'm hoping to \nget back into it soon.\n\nRegards\n\nJohn\n\n\n",
"msg_date": "Wed, 25 Jul 2001 00:15:21 +0000",
"msg_from": "\"John Gray\" <jgray@beansindustry.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "* \"John Gray\" <jgray@beansindustry.co.uk> wrote:\n|\n\n| OR, depending on what these fwbuilder objects involve, you can of \n| course just store XML documents in fields of type text (especially if \n| you use 7.1 which has TOAST, so you can store long documents). IT's \n| not difficult to hook up a parser (I'm using expat) to a PostgreSQL \n| function written in C and parse on the fly.\n| \n\nDo you have any documentation on your C functions ? I'm just interested\nin knowing what functions they provide.\n\nregards, \n\n Gunnar\n\n-- \nGunnar R�nning - gunnar@polygnosis.com\nSenior Consultant, Polygnosis AS, http://www.polygnosis.com/\n",
"msg_date": "25 Jul 2001 14:49:18 +0200",
"msg_from": "Gunnar =?iso-8859-1?q?R=F8nning?= <gunnar@polygnosis.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "In article <m2elr5b2cx.fsf@smaug.polygnosis.com>, gunnar@polygnosis.com\n(Gunnar =?iso-8859-1?q?R=F8nning?=) wrote:\n\n> Do you have any documentation on your C functions ? I'm just interested\n> in knowing what functions they provide.\n> \n\nThere are only two (so far). They're very basic. I have:\n\npgxml_parse(text) returns bool\n -parses the provided text and returns true or false if it is \nwell-formed or not.\n\npgxml_xpath(text doc, text xpath, int n) returns text\n -parses doc and returns the cdata of the nth occurrence of\nthe \"XPath\" listed. This does handle relative and absolute paths \nbut nothing else at present. I have a few variants of this. \n\nSo, given a table docstore:\n\n Attribute | Type | Modifier \n-----------+---------+----------\n docid | integer | \n document | text | \n\ncontaining documents such as:\n\n<?XML version=\"1.0\"?>\n<site provider=\"Foundations\" sitecode=\"ak97\" version=\"1\">\n <name>Church Farm, Ashton Keynes</name>\n <invtype>watching brief</invtype>\n <location scheme=\"osgb\">SU04209424</location>\n</site>\n\nI can type:\nselect docid, \npgxml_xpath(document,'/site/name',1) as sitename,\npgxml_xpath(document,'/site/location',1) as location\n from docstore;\n \nand I get:\n\n docid | sitename | location \n-------+-----------------------------+------------\n 1 | Church Farm, Ashton Keynes | SU04209424\n 2 | Glebe Farm, Long Itchington | SP41506500\n(2 rows)\n\nThe next thing is to use the \"function as tuple source\" support which is\nunderway in order to allow the return of a list (in the DTD I'm using\n-and doubtless many others- certain elements might be repeated, and\nI think it would be good to be able to join against all the data from a \nparticular element.\n\nI hope this helps give a flavour. I'll try and tidy up the functions in the\nnext couple of days and then I can post what I've got so far. I'm keen to\nbuild on this, as it's part of an (unfunded, unfortunately) project we're \ndoing. Expat is MIT-licensed so I don't imagine there's a problem linking\nit into PostgreSQL.\n\nOne aim is to allow people to set pg functions as the handlers \"direct\"\nfrom the parser -the catch is that the expat API has lots of handlers\n(OK, so most of them are less commonly used), so it's a matter of \nworking out a) an efficient API for setting handlers on a particular \nparser and b) how persistent a parser instance should be (each expat\ninstance can only do one document). Of course, expat may not be the \nbest one to use -it would be great to be parser-agnostic and use SAX\nwith a java parser, but I don't think we have java as a language for \nuser functions yet :-)\n \nIncidentally, I'll be changing my email address over the next couple \nof days to jgray@azuli.co.uk -just so you can follow this thread after\nI've done that....\n\nRegards\n\nJohn\nAzuli IT\n\n\n",
"msg_date": "Wed, 25 Jul 2001 18:12:28 +0000",
"msg_from": "\"John Gray\" <jgray@beansindustry.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "I've packaged up what I've done so far and you can find it at\nhttp://www.cabbage.uklinux.net/pgxml.tar.gz\n\nThe TODO file included indicates what still remains to be done (a lot!).\n\nIn particular, it would be good to implement more of the XPath grammar.\nHowever, once we get into the realm of more complex paths there becomes a\nquestion about optimisation of XPath selection. If the documents are\npre-parsed, then XPath query elements can be rewritten as SQL queries and\nyou get the optimisation of the planner on your side.\n\nI'd like to stick with the current solution if possible, because I think\nit delivers a very simple interface to the user and is (code-wise) also\nvery straightforward. Maybe less efficient queries are a penalty worth paying?\n\nAny thoughts?\n\nRegards\n\nJohn\n\n",
"msg_date": "Thu, 26 Jul 2001 17:40:57 +0000",
"msg_from": "\"John Gray\" <jgray@beansindustry.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "\nShould we add this to /contrib?\n\n> I've packaged up what I've done so far and you can find it at\n> http://www.cabbage.uklinux.net/pgxml.tar.gz\n> \n> The TODO file included indicates what still remains to be done (a lot!).\n> \n> In particular, it would be good to implement more of the XPath grammar.\n> However, once we get into the realm of more complex paths there becomes a\n> question about optimisation of XPath selection. If the documents are\n> pre-parsed, then XPath query elements can be rewritten as SQL queries and\n> you get the optimisation of the planner on your side.\n> \n> I'd like to stick with the current solution if possible, because I think\n> it delivers a very simple interface to the user and is (code-wise) also\n> very straightforward. Maybe less efficient queries are a penalty worth paying?\n> \n> Any thoughts?\n> \n> Regards\n> \n> John\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 26 Jul 2001 17:28:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "> Should we add this to /contrib?\n\nI think so, at least until we get something better.\n\nCheers,\n\nColin\n\n\n",
"msg_date": "Fri, 27 Jul 2001 14:33:33 +0200",
"msg_from": "\"Colin 't Hart\" <cthart@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "In article <9jrn78$pbv$1@news.tht.net>, \"Colin 't Hart\" <cthart@yahoo.com>\nwrote:\n>> Should we add this to /contrib?\n> \n> I think so, at least until we get something better.\n> \n\nI'm happy for you to add it, if you're willing to have it (It is meant to\nbe under the PostgreSQL license). I agree that there's still much to be\ndone... note that another thread (From TODO, XML?) has started up on this\nsubject as well.\n\nNo threads on XML for months, and then along come two at once :)\n\nRegards\n\nJohn\n\n",
"msg_date": "Fri, 27 Jul 2001 18:33:54 +0000",
"msg_from": "\"John Gray\" <jgray@beansindustry.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "> In article <9jrn78$pbv$1@news.tht.net>, \"Colin 't Hart\" <cthart@yahoo.com>\n> wrote:\n> >> Should we add this to /contrib?\n> > \n> > I think so, at least until we get something better.\n> > \n> \n> I'm happy for you to add it, if you're willing to have it (It is meant to\n> be under the PostgreSQL license). I agree that there's still much to be\n> done... note that another thread (From TODO, XML?) has started up on this\n> subject as well.\n\nI figured we could add it to /contrib and use it as a starting point.\n\n> No threads on XML for months, and then along come two at once :)\n\nYep. Seems it is getting hot. I like the use of XML to transfer data\nand schema between databases from different vendors.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 23:20:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n> I've packaged up what I've done so far and you can find it at\n> http://www.cabbage.uklinux.net/pgxml.tar.gz\n> \n> The TODO file included indicates what still remains to be done (a lot!).\n> \n> In particular, it would be good to implement more of the XPath grammar.\n> However, once we get into the realm of more complex paths there becomes a\n> question about optimisation of XPath selection. If the documents are\n> pre-parsed, then XPath query elements can be rewritten as SQL queries and\n> you get the optimisation of the planner on your side.\n> \n> I'd like to stick with the current solution if possible, because I think\n> it delivers a very simple interface to the user and is (code-wise) also\n> very straightforward. Maybe less efficient queries are a penalty worth paying?\n> \n> Any thoughts?\n> \n> Regards\n> \n> John\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 29 Jul 2001 23:20:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Storing XML in PostgreSQL"
},
{
"msg_contents": "\nAdded to /contrib, with small Makefile changes. Requires expat library.\nDoes not compile by default.\n\n> I've packaged up what I've done so far and you can find it at\n> http://www.cabbage.uklinux.net/pgxml.tar.gz\n> \n> The TODO file included indicates what still remains to be done (a lot!).\n> \n> In particular, it would be good to implement more of the XPath grammar.\n> However, once we get into the realm of more complex paths there becomes a\n> question about optimisation of XPath selection. If the documents are\n> pre-parsed, then XPath query elements can be rewritten as SQL queries and\n> you get the optimisation of the planner on your side.\n> \n> I'd like to stick with the current solution if possible, because I think\n> it delivers a very simple interface to the user and is (code-wise) also\n> very straightforward. Maybe less efficient queries are a penalty worth paying?\n> \n> Any thoughts?\n> \n> Regards\n> \n> John\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 30 Jul 2001 10:59:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Storing XML in PostgreSQL"
}
] |
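[Editorial sketch] The pgxml contrib module discussed in the thread above exposes its XPath subset through ordinary SQL functions. A minimal usage sketch follows; the table layout, function name, and argument list are assumptions for illustration and may differ from the actual contrib interface:

```sql
-- Hypothetical schema: whole XML documents stored in a text column.
CREATE TABLE docs (id serial, doc text);

-- Assumed accessor: evaluate a simple XPath expression against each
-- stored document and return the matching text content.
SELECT pgxml_xpath(doc, '/site/name/text()') FROM docs;
```

Because the documents are parsed at query time rather than pre-parsed into relational form, such a call cannot benefit from the planner's optimisation of the path expression — which is the efficiency trade-off the thread weighs against the simplicity of the interface.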
[
{
"msg_contents": "OK, so I've defined a grammar for string_expr, which means the following\ncurrently works:\n\nCREATE FUNCTION foo_raise_loop(text) RETURNS text AS '\nDECLARE\n a ALIAS FOR $1;\n i integer;\n myrec RECORD;\nBEGIN\n i:=0;\n FOR myrec IN SELECT * FROM colours LOOP\n i:=i+1;\n RAISE NOTICE a || '' : '' || '' colour % is '' || myrec.c_name ||\n''.'', i, myrec.c_id;\n END LOOP;\n RETURN ''done''::text;\nEND;' LANGUAGE 'plpgsql';\n\nSELECT foo_raise_loop('Looping (%)');\n\n\nWhich produces (note the % nr Looping gets evaluated):\n\nNOTICE: Looping (1) : colour 1 is red.\nNOTICE: Looping (2) : colour 2 is green.\nNOTICE: Looping (3) : colour 3 is blue.\n\n\nWhat you haven't got are: brackets, casts, function calls, other operators\n(can't do i+1).\n\nI'm going to be out of town for a few days then busy for a couple of weeks.\nThrow in a week to debug,document and apply against CVS and we're into\nAugust. So - do you want it with current functionality or should I press on?\n\n- Richard Huxton\n\n",
"msg_date": "Tue, 24 Jul 2001 14:55:18 +0100",
"msg_from": "\"Richard Huxton\" <dev@archonet.com>",
"msg_from_op": true,
"msg_subject": "RAISE <level> <expr> <params>: state of play and request for advice"
},
{
"msg_contents": "\nWas this completed?\n\n> OK, so I've defined a grammar for string_expr, which means the following\n> currently works:\n> \n> CREATE FUNCTION foo_raise_loop(text) RETURNS text AS '\n> DECLARE\n> a ALIAS FOR $1;\n> i integer;\n> myrec RECORD;\n> BEGIN\n> i:=0;\n> FOR myrec IN SELECT * FROM colours LOOP\n> i:=i+1;\n> RAISE NOTICE a || '' : '' || '' colour % is '' || myrec.c_name ||\n> ''.'', i, myrec.c_id;\n> END LOOP;\n> RETURN ''done''::text;\n> END;' LANGUAGE 'plpgsql';\n> \n> SELECT foo_raise_loop('Looping (%)');\n> \n> \n> Which produces (note the % nr Looping gets evaluated):\n> \n> NOTICE: Looping (1) : colour 1 is red.\n> NOTICE: Looping (2) : colour 2 is green.\n> NOTICE: Looping (3) : colour 3 is blue.\n> \n> \n> What you haven't got are: brackets, casts, function calls, other operators\n> (can't do i+1).\n> \n> I'm going to be out of town for a few days then busy for a couple of weeks.\n> Throw in a week to debug,document and apply against CVS and we're into\n> August. So - do you want it with current functionality or should I press on?\n> \n> - Richard Huxton\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Sep 2001 17:24:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RAISE <level> <expr> <params>: state of play and request"
}
] |
[
{
"msg_contents": "From: Alex Crettol <AlexC@BassSoftware.com>\nSubject: RE: Thai data import into PostgreSQL\nDate: Mon, 23 Jul 2001 16:39:20 +1000\nMessage-ID: <415719E2420CD4118EEF00001C1902FE339760@bassex.melb.basssoftware.com>\n\n> Tatsuo,\n> \n> Thank you for your reply. \n> \n> I unfortunately do not have much detail on the encoding used. \n> The details that Oracle provide are TH8TISASCII = Thai Industrial Standard\n> 620-2533-ASCII 8-bit (Single-byte encoding, Strict Superset of ASCII, EURO\n> symbol supported). I had a look on the web, but could not find anything more\n> detailed. \n> Do you know if there is any encoding existing for PostgreSQL which could\n> match this ?\n\nI'm not sure what \"Strict Superset of ASCII\" actually means, but it\nseems you could use SQL_ASCII or LATINn (where n = 1~5) encoding,\nsince they are also \"Strict Superset of ASCII\".\nOr even you don't need to enable the multibyte at all, I guess.\n--\nTatsuo Ishii\n\n",
"msg_date": "Wed, 25 Jul 2001 10:14:54 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Thai data import into PostgreSQL"
}
] |
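[Editorial sketch] Following Tatsuo's suggestion above, a database for TIS-620 (Oracle's TH8TISASCII) data can simply be created with an encoding that is a strict superset of ASCII and performs no multibyte validation. The database name here is illustrative:

```sql
-- SQL_ASCII does no encoding validation, so single-byte Thai data
-- (TIS-620 / TH8TISASCII) passes through the server unchanged.
CREATE DATABASE thaidb WITH ENCODING = 'SQL_ASCII';
```

Note that with SQL_ASCII the server treats the bytes as opaque; any sorting or case-conversion behaviour specific to Thai would have to be handled client-side.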
[
{
"msg_contents": "I'm porting some stored procedures from a MSSQL server, and thought\nI'd use PL/pgSQL.\n\nThe original code is checking the insert with the line:\n\n if (@@Error != 0)\n\nHow do I do the same thing in PL/pgSQL?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nexplosion ammonium ammunition iodine strategic Rule Psix NORAD FBI\nfissionable Treasury cryptographic killed AK-47 Nazi Waco, Texas\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "25 Jul 2001 11:59:21 +0200",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "plpgsql: Checking status on a 'INSERT INTO ...'"
},
{
"msg_contents": "> I'm porting some stored procedures from a MSSQL server, and thought I'd\n> use PL/pgSQL.\n> \n> The original code is checking the insert with the line:\n> \n> if (@@Error != 0)\n\nYou might want to use something like:\n\nSELECT INTO variable_name *\n FROM table\n WHERE field = some_value;\n\nIF FOUND THEN\n somevar := variable_name.fieldname ;\nELSE\n RAISE EXCEPTION ''ERROR blah blah'';\nEND IF;\n\nAnd you also want to look into the @@rowcount:\n\nGET DIAGNOSTICS v_rowcount = ROW_COUNT ;\n\nReinoud\n\n",
"msg_date": "Wed, 25 Jul 2001 12:09:07 +0200 (CEST)",
"msg_from": "\"Reinoud van Leeuwen\" <reinoud@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql: Checking status on a 'INSERT INTO ...'"
}
] |
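[Editorial sketch] Putting Reinoud's two suggestions together, a port of the MSSQL `@@Error` / `@@rowcount` pattern for an INSERT might look like the following. The table and column names are illustrative only:

```sql
CREATE FUNCTION add_colour(text) RETURNS integer AS '
DECLARE
    v_rowcount integer;
BEGIN
    INSERT INTO colours (c_name) VALUES ($1);

    -- PL/pgSQL counterpart of MSSQL''s @@rowcount:
    GET DIAGNOSTICS v_rowcount = ROW_COUNT;

    -- Unlike MSSQL, a failing INSERT raises an error and aborts the
    -- function on its own, so an explicit @@Error check is usually
    -- unnecessary; checking ROW_COUNT guards the zero-row case.
    IF v_rowcount = 0 THEN
        RAISE EXCEPTION ''insert affected no rows'';
    END IF;

    RETURN v_rowcount;
END;' LANGUAGE 'plpgsql';
```

For SELECT INTO, the built-in `FOUND` variable shown in the reply above serves the same purpose without needing GET DIAGNOSTICS.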